1. Caddy version (caddy version):
v2.4.6 h1:HGkGICFGvyrodcqOOclHKfvJC0qTU7vny/7FhYp9hNw=
2. How I run Caddy:
Downloaded the binary from the website and run it via systemd.
a. System environment:
Ubuntu 20.04 LTS
b. Command:
sudo systemctl start caddy
(it’s enabled, so it starts at boot)
c. Service/unit/compose file:
;
; Ansible managed
;
; source: https://github.com/mholt/caddy/blob/master/dist/init/linux-systemd/caddy.service
; version: 6be0386
; changes: Set variables via Ansible
[Unit]
Description=Caddy HTTP/2 web server
Documentation=https://caddyserver.com/docs
After=network-online.target
Wants=network-online.target systemd-networkd-wait-online.service
[Service]
Restart=on-failure
StartLimitInterval=86400
StartLimitBurst=5
; User and group the process will run as.
User=www-data
Group=www-data
; Letsencrypt-issued certificates will be written to this directory.
Environment=CADDYPATH=/etc/ssl/caddy
ExecStart="/usr/local/bin/caddy" run --environ --config "/etc/caddy/Caddyfile"
ExecReload="/usr/local/bin/caddy" reload --config "/etc/caddy/Caddyfile"
; Limit the number of file descriptors; see `man systemd.exec` for more limit settings.
LimitNOFILE=1048576
; Use private /tmp and /var/tmp, which are discarded after caddy stops.
PrivateTmp=true
; Use a minimal /dev
PrivateDevices=true
; ProtectHome=true would hide /home, /root, and /run/user; left disabled here.
ProtectHome=false
; Make /usr, /boot, /etc and possibly some more folders read-only.
ProtectSystem=full
; … except /etc/ssl/caddy, because we want Letsencrypt-certificates there.
; This merely retains r/w access rights; it does not grant any new ones. Must still be writable on the host!
ReadWriteDirectories=/etc/ssl/caddy /var/log/caddy
; The following additional security directives only work with systemd v229 or later.
; They further restrict privileges that can be gained by caddy.
; Note that you may have to add capabilities required by any plugins in use.
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
NoNewPrivileges=true
[Install]
WantedBy=multi-user.target
d. My complete Caddyfile or JSON config:
:8080 {
	reverse_proxy localhost:8081 {
		transport http {
			keepalive_interval 65s
		}
	}
}
3. The problem I’m having:
I am receiving TCP keepalive packets from Caddy every 15 seconds (whenever no other packets have been sent in that window). I expect to receive them every 65 seconds, or, since my application exchanges WebSocket ping/pong more frequently than that, no TCP keepalives at all.
4. Error messages and/or full log output:
The following is the wireshark summary:
No. Time Source Protocol Length Info
27 72.148176926 48628 HTTP 412 GET /test/ws?since=1656775973 HTTP/1.1
28 72.148223335 8080 TCP 86 8080 → 48628 [ACK] Seq=1 Ack=327 Win=64000 Len=0 TSval=3094866771 TSecr=6435782
29 72.151626305 8080 HTTP 230 HTTP/1.1 101 Switching Protocols
30 72.151963175 8080 WebSocket 158 WebSocket Text [FIN]
31 72.153697178 48628 TCP 86 48628 → 8080 [ACK] Seq=327 Ack=145 Win=86528 Len=0 TSval=6435782 TSecr=3094866775
32 72.154024249 48628 TCP 86 48628 → 8080 [ACK] Seq=327 Ack=217 Win=86528 Len=0 TSval=6435782 TSecr=3094866775
33 87.666293523 8080 TCP 86 [TCP Keep-Alive] 8080 → 48628 [ACK] Seq=216 Ack=327 Win=64128 Len=0 TSval=3094882289 TSecr=6435782
34 87.706096262 48628 TCP 86 [TCP Keep-Alive ACK] 48628 → 8080 [ACK] Seq=327 Ack=217 Win=86528 Len=0 TSval=6437337 TSecr=3094866775
35 103.026345910 8080 TCP 86 [TCP Keep-Alive] 8080 → 48628 [ACK] Seq=216 Ack=327 Win=64128 Len=0 TSval=3094897650 TSecr=6437337
36 103.062308046 48628 TCP 86 [TCP Keep-Alive ACK] 48628 → 8080 [ACK] Seq=327 Ack=217 Win=86528 Len=0 TSval=6438873 TSecr=3094866775
37 117.153146331 8080 WebSocket 88 WebSocket Ping [FIN]
38 117.220795649 48628 TCP 86 48628 → 8080 [ACK] Seq=327 Ack=219 Win=86528 Len=0 TSval=6440287 TSecr=3094911776
39 117.221098923 48628 WebSocket 92 WebSocket Pong [FIN] [MASKED]
40 117.221148901 8080 TCP 86 8080 → 48628 [ACK] Seq=219 Ack=333 Win=64128 Len=0 TSval=3094911844 TSecr=6440287
41 132.162149274 48628 WebSocket 92 WebSocket Ping [FIN] [MASKED]
42 132.162260954 8080 TCP 86 8080 → 48628 [ACK] Seq=219 Ack=339 Win=64128 Len=0 TSval=3094926785 TSecr=6441783
43 132.162751004 8080 WebSocket 88 WebSocket Pong [FIN]
44 132.201040062 48628 TCP 86 48628 → 8080 [ACK] Seq=339 Ack=221 Win=86528 Len=0 TSval=6441787 TSecr=3094926786
45 147.399617161 8080 TCP 86 [TCP Keep-Alive] 8080 → 48628 [ACK] Seq=220 Ack=339 Win=64128 Len=0 TSval=3094942023 TSecr=6441787
46 147.514169666 48628 TCP 86 [TCP Keep-Alive ACK] 48628 → 8080 [ACK] Seq=339 Ack=221 Win=86528 Len=0 TSval=6443318 TSecr=3094926786
47 162.159970283 8080 WebSocket 88 WebSocket Ping [FIN]
48 162.261729577 48628 TCP 86 48628 → 8080 [ACK] Seq=339 Ack=223 Win=86528 Len=0 TSval=6444792 TSecr=3094956783
49 162.262402747 48628 WebSocket 92 WebSocket Pong [FIN] [MASKED]
50 162.262422115 8080 TCP 86 8080 → 48628 [ACK] Seq=223 Ack=345 Win=64128 Len=0 TSval=3094956886 TSecr=6444792
51 178.119675152 8080 TCP 86 [TCP Keep-Alive] 8080 → 48628 [ACK] Seq=222 Ack=345 Win=64128 Len=0 TSval=3094972743 TSecr=6444792
52 178.122020596 48628 TCP 86 [TCP Keep-Alive ACK] 48628 → 8080 [ACK] Seq=345 Ack=223 Win=86528 Len=0 TSval=6446378 TSecr=3094956886
The following are my logs:
2022/07/02 17:07:01.976 INFO using provided configuration {"config_file": "Caddyfile", "config_adapter": ""}
2022/07/02 17:07:01.980 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["//localhost:2019", "//[::1]:2019", "//127.0.0.1:2019"]}
2022/07/02 17:07:01.981 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc000430f50"}
2022/07/02 17:07:01.981 DEBUG http starting server loop {"address": "[::]:8080", "http3": false, "tls": false}
2022/07/02 17:07:01.981 INFO tls cleaning storage unit {"description": "FileStorage:/home/karmanyaahm/.local/share/caddy"}
2022/07/02 17:07:01.981 INFO tls finished cleaning storage units
2022/07/02 17:07:01.981 INFO autosaved config (load with --resume flag) {"file": "/home/karmanyaahm/.local/share/caddy/autosave.json"}
2022/07/02 17:07:01.981 INFO serving initial configuration
2022/07/02 17:07:11.758 DEBUG http.handlers.reverse_proxy selected upstream {"dial": "localhost:8081", "total_upstreams": 1}
2022/07/02 17:07:11.761 DEBUG http.handlers.reverse_proxy upstream roundtrip {"upstream": "localhost:8081", "duration": 0.002632688, "request": {"remote_ip": "fd73:ac06:d336:6780:45c7:ccad:8a1e:729b", "remote_port": "48628", "proto": "HTTP/1.1", "method": "GET", "host": "karmanyaahsarch.homenet.malhotra.cc:8080", "uri": "/test/ws?since=1656775973", "headers": {"Sec-Websocket-Version": ["13"], "X-Forwarded-For": ["fd73:ac06:d336:6780:45c7:ccad:8a1e:729b"], "X-Forwarded-Proto": ["http"], "X-Forwarded-Host": ["karmanyaahsarch.homenet.malhotra.cc:8080"], "Accept-Encoding": ["gzip"], "Upgrade": ["websocket"], "Sec-Websocket-Key": ["WCm5Mz872IAOPbaTfupLSA=="], "User-Agent": ["ntfy/1.10.0 (fdroid; Android 10; SDK 29)"], "Connection": ["Upgrade"], "Sec-Websocket-Extensions": ["permessage-deflate"]}}, "headers": {"Sec-Websocket-Accept": ["PttZ6SVgw+WyjZ7VBIwteet3tkw="], "Upgrade": ["websocket"], "Connection": ["Upgrade"]}, "status": 101}
2022/07/02 17:07:11.761 DEBUG http.handlers.reverse_proxy upgrading connection {"upstream": "localhost:8081", "duration": 0.002632688, "request": {"remote_ip": "fd73:ac06:d336:6780:45c7:ccad:8a1e:729b", "remote_port": "48628", "proto": "HTTP/1.1", "method": "GET", "host": "karmanyaahsarch.homenet.malhotra.cc:8080", "uri": "/test/ws?since=1656775973", "headers": {"Sec-Websocket-Version": ["13"], "X-Forwarded-For": ["fd73:ac06:d336:6780:45c7:ccad:8a1e:729b"], "X-Forwarded-Proto": ["http"], "X-Forwarded-Host": ["karmanyaahsarch.homenet.malhotra.cc:8080"], "Accept-Encoding": ["gzip"], "Upgrade": ["websocket"], "Sec-Websocket-Key": ["WCm5Mz872IAOPbaTfupLSA=="], "User-Agent": ["ntfy/1.10.0 (fdroid; Android 10; SDK 29)"], "Connection": ["Upgrade"], "Sec-Websocket-Extensions": ["permessage-deflate"]}}}
5. What I already tried:
I tried setting transport.http.keepalive_interval to 65 seconds. However, according to the Caddy docs, transport defines how Caddy communicates with the backend (the default is http), so this setting applies to the connection between Caddy and the backend, not to the client.
I cannot find any setting to configure the TCP keepalive interval on the connection from Caddy to the client (my mobile device).
6. Links to relevant resources:
This is very similar to my question, but it was never solved: Mqtt/tcp keepalive when proxying websocket