"Forced refreshes" happening for website using websockets

1. The problem I’m having:

I have Caddy configured as a reverse proxy for several (homelab) websites. All are working fine, except the ones that (I think) use WebSockets or similar protocols (for example, the HandBrake web package).
For those, I get regular “disconnects” lasting 1-2 seconds, after which everything is fine again.
I have the feeling this happens every single time the following shows up in the logs:

{"level":"info","ts":1766934510.9174817,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1766934510.9176507,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1766934510.9187803,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}

(and I have no idea where this is coming from or how to disable it, given that I disabled the “admin” component)

2. Error messages and/or full log output:

No specific error messages

3. Caddy version:

v2.10.2 h1:g/gTYjGMD0dec+UgMw8SnfmJ3I9+M2TdvoRL/Ovu6U8=

4. How I installed and ran Caddy:

a. System environment:

Linux + Docker containers

b. Command:

No commands.

c. Service/unit/compose file:

Docker

d. My complete Caddy config:

{
	metrics
	order cache before rewrite
	cache
	admin off
}

:2020 {
	metrics
}

handbrake.************* {
	reverse_proxy 192.168.8.140:5800
}

Symptoms in HandBrake (and nothing in Chrome Developer Tools): the web UI briefly disconnects, then it reconnects automatically after 2 seconds.

You are using caddy-docker-proxy.

This extension updates your Caddy configuration every once in a while, then reloads Caddy.

When Caddy reloads, all existing connections are killed after the grace period, then the reload finishes.
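
If the reloads themselves can’t be avoided, that grace period is configurable through the grace_period global option; a minimal sketch merged into your existing options (the 10s value is just an example):

{
	admin off
	# how long in-flight requests get to finish during a
	# reload or shutdown before Caddy force-closes them
	grace_period 10s
}

Note this only bounds how long connections survive a reload; long-lived WebSockets will still be cut once it expires.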

Looking through the bug reports for caddy-docker-proxy, it seems that using {{ upstream }} in combination with a container that has multiple IPs can trigger non-deterministic config generation, which causes the config to change, which in turn triggers a reload.


Yeah, I forgot to update this post; I found the issue in the meantime. The “reload” was triggered every 30 seconds (hence the regular frequency) by an unspotted failing container.

As a consequence of the reload (which could/should have been transparent), the WebSockets are paused (or closed and resumed), hence the issue at the app level (as in the web flavor of HandBrake).
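
In case someone hits a similar loop: Caddy’s debug global option raises log verbosity across all modules, which may help spot what keeps changing between reloads (a sketch; how much extra detail caddy-docker-proxy emits at debug level may vary by version):

{
	# debug lowers the log level to DEBUG for the default logger
	debug
	admin off
}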

There is an open issue to preserve websockets through reloads:

It also contains a workaround you can use in the meantime.
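
I can’t tell whether that is the workaround the issue describes, but for reference: since Caddy 2.7 (so the v2.10.2 above qualifies), reverse_proxy has a stream_close_delay subdirective that keeps streamed connections such as WebSockets open across config reloads; the 5m value below is just an example:

handbrake.************* {
	reverse_proxy 192.168.8.140:5800 {
		# keep streamed connections (e.g. WebSockets) open for up to
		# this long after a reload instead of closing them immediately
		stream_close_delay 5m
	}
}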

