SR-G
(Serge)
December 28, 2025, 3:16pm
1
1. The problem I’m having:
I have Caddy configured as a reverse proxy for several (homelab) websites. All are working fine, except the ones that (I think) use WebSockets or similar protocols (for example, the HandBrake web package).
For those, I get regular "disconnects" that last 1-2 seconds, then everything is fine again.
I have the feeling this happens every single time the following shows up in the logs:
{"level":"info","ts":1766934510.9174817,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1766934510.9176507,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1766934510.9187803,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
(and I have no idea where this is coming from or how to disable it, given that I disabled the "admin" component)
2. Error messages and/or full log output:
No specific error messages
3. Caddy version:
v2.10.2 h1:g/gTYjGMD0dec+UgMw8SnfmJ3I9+M2TdvoRL/Ovu6U8=
4. How I installed and ran Caddy:
a. System environment:
Linux + Docker containers
b. Command:
No commands.
c. Service/unit/compose file:
docker
d. My complete Caddy config:
```
{
	metrics
	order cache before rewrite
	cache
	admin off
}

:2020 {
	metrics
}

handbrake.************* {
	reverse_proxy 192.168.8.140:5800
}
```
Symptoms in HandBrake (and nothing in Chrome Developer Tools):
(then it reconnects automatically after 2 seconds)
ferrybig
(Fernando van Loenhout)
December 30, 2025, 2:45pm
2
You are using caddy-docker-proxy.
This extension regenerates your Caddy configuration every once in a while, then reloads Caddy.
When Caddy reloads, all existing connections are killed after the grace period, then the reload finishes.
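For reference, the length of that grace period is configurable with the grace_period global option; a minimal sketch, where the 10s value is just an example and not something from your setup:
```
{
	# Bounds how long Caddy waits for in-flight connections to finish
	# when the config is reloaded; after that they are force-closed.
	grace_period 10s
}
```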
Looking through the bug reports of caddy-docker-proxy, it seems that using {{upstreams}} in combination with a container that has multiple IPs can trigger non-deterministic config generation, which makes the config appear changed and triggers a reload:
opened 04:47AM - 27 Feb 25 UTC
For the past couple months that I've been using caddy-docker-proxy it's been fantastic, this is a great project. I'd noticed certain things, particularly applications utilizing websockets, had issues: seemingly random disconnects. I finally put some time into troubleshooting this and discovered that every 30 seconds caddy-docker-proxy was reloading the config with a new Caddyfile. This corresponded to the default polling interval for checking for changes to the Caddyfile. When comparing the Caddyfiles from one 30-second interval to the next I noticed this.
Caddyfile A
```
test.example.com {
	import common_config
	reverse_proxy 192.168.131.2:80 192.168.130.2:80
}
```
Caddyfile B (30 seconds later)
```
test.example.com {
	import common_config
	reverse_proxy 192.168.130.2:80 192.168.131.2:80
}
```
The reverse_proxy directive contains the same upstream targets, they're just sorted differently. This seems to only happen on my containers that are attached to more than one Docker network. It looks like this is enough of a change that caddy-docker-proxy considers it a new Caddyfile and reloads the server, and I believe that's what was breaking my websocket applications. Anyway, Claude 3.7 helped me out, and I believe this can be fixed by sorting the targets before they get put into the generated Caddyfile. I forked the repo, implemented a quick sort of the targets, rebuilt Caddy with my fork, and I haven't seen a single unexpected reload since.
Not sure if this is worth integrating, but I figured it'd be worth posting here. The fix is to just sort the targets in `labels.go` before they get transformed; not sure what else this might impact, but it appears to have solved this for me. I just imported `sort` and sorted the targets before they get transformed.
`labels.go`
```
package generator

import (
	"sort"
	"strconv"
	"strings"
	"text/template"

	"github.com/lucaslorentz/caddy-docker-proxy/v2/caddyfile"
)

type targetsProvider func() ([]string, error)

func labelsToCaddyfile(labels map[string]string, templateData interface{}, getTargets targetsProvider) (*caddyfile.Container, error) {
	funcMap := template.FuncMap{
		"upstreams": func(options ...interface{}) (string, error) {
			targets, err := getTargets()
			// Sort the raw targets so the generated Caddyfile is deterministic
			// even when a container has multiple IPs.
			sort.Strings(targets)
			transformed := []string{}
			for _, target := range targets {
				for _, param := range options {
					if protocol, isProtocol := param.(string); isProtocol {
						target = protocol + "://" + target
					} else if port, isPort := param.(int); isPort {
						target = target + ":" + strconv.Itoa(port)
					}
				}
				transformed = append(transformed, target)
			}
			// Sort again after the protocol/port transformation for a stable order.
			sort.Strings(transformed)
			return strings.Join(transformed, " "), err
		},
		"http": func() string {
			return "http"
		},
		"https": func() string {
			return "https"
		},
		"h2c": func() string {
			return "h2c"
		},
	}
	return caddyfile.FromLabels(labels, templateData, funcMap)
}
```
2 Likes
SR-G
(Serge)
December 30, 2025, 5:46pm
3
Yeah, I forgot to update this post; I found the issue in the meantime. The "reload" was triggered every 30 seconds (hence the regular frequency) because of an unnoticed failing container.
As a consequence of the reload (which could/should have been transparent), the websockets are paused (or closed and resumed), hence the issue at the application level (as in the web flavor of HandBrake).
matt
(Matt Holt)
December 31, 2025, 3:51pm
4
There is an open issue to preserve websockets through reloads:
opened 07:18AM - 29 Aug 25 UTC
### Issue Details
We run multiple apps behind a single Caddy instance. Some of these are Blazor Server apps, which rely entirely on websockets (even for the UI).
The problem: every time we reload the caddy config, all websocket connections drop, even if the routes those clients are on didn’t change at all.
Example:
```
app-v1 running on port 5000
Later we add app-v2 on port 5001 via a new route
```
As soon as we push that new route, everyone connected to app-v1 gets kicked. The same thing happens if we update unrelated routes that don’t even use websockets — for example, if we add an unrelated REST API service to the config, connections to websocket apps still get dropped.
We do have reconnect logic, but because Blazor Server depends fully on websockets, the constant reconnects hurt the UX. We’d really prefer connections for unchanged routes to just stay alive.
Would it be possible for Caddy to preserve websocket connections on reload when their route/handler hasn’t changed?
We noticed a similar discussion in [#6420](https://github.com/caddyserver/caddy/issues/6420), but the conversation there remained unfinished, so it wasn’t clear whether preserving connections across reloads is planned.
It also contains a workaround you can use in the meantime.
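Not quoted in the excerpt above, but one related knob is the reverse_proxy streaming option stream_close_delay, which keeps proxied streams such as WebSockets open for a while after a config reload instead of closing them right away. A minimal sketch, where the hostname and the 5m value are placeholders (the upstream address is the one from the first post):
```
handbrake.example.com {
	reverse_proxy 192.168.8.140:5800 {
		# Keep streamed connections (e.g. WebSockets) alive for up to
		# 5 minutes after a config reload instead of closing them immediately.
		stream_close_delay 5m
	}
}
```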
4 Likes
system
(system)
Closed
January 30, 2026, 3:52pm
5
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.