1. Output of caddy version:
v2.5.2 h1:eCJdLyEyAGzuQTa5Mh3gETnYWDClo1LjtQm2q9RNZrs=
2. How I run Caddy:
I run Caddy in a slightly modified version of the official Docker image, booted with the command and config file below.
a. System environment:
Docker/Compose on Ubuntu
b. Command:
From my Dockerfile (run via the /init script it writes):
caddy run --config /etc/caddy/default_config.json --resume
c. Service/unit/compose file:
Dockerfile:

FROM caddy:2-alpine
WORKDIR /
COPY ./default_config.json /etc/caddy/
# Write an init script that adds the route at container start, then runs
# Caddy (a route added with RUN at build time would not persist, and the
# build lacks NET_ADMIN anyway).
RUN echo "#!/bin/ash" > /init
RUN echo "ip route add 10.13.13.0/24 via 172.20.0.2" >> /init
RUN echo "caddy run --config /etc/caddy/default_config.json --resume" >> /init
RUN chmod +x /init
EXPOSE 80
EXPOSE 443
CMD ["ash", "/init"]
docker-compose.yml:
version: "3.3"
services:
  wireguard:
    build:
      context: wg
      dockerfile: Dockerfile
    container_name: wg
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - SERVERURL=${HOSTNAME}.nativeplanet.live
      - SERVERPORT=51820
      - PEERS=2
      - PEERDNS=1.1.1.1
      - INTERNAL_SUBNET=10.13.13.0
      - ALLOWEDIPS=0.0.0.0/0
      - LOG_CONFS=true
    volumes:
      - ${PWD}/config:/config
      - /lib/modules:/lib/modules
    ports:
      - 51820:51820/udp
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    networks:
      wgnet:
        ipv4_address: 172.20.0.2
    restart: unless-stopped
  caddy:
    build:
      context: caddy
      dockerfile: Dockerfile
    cap_add:
      - NET_ADMIN
    container_name: caddy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./caddy/data:/data
      - ./caddy/config:/config/caddy
      - ./caddy/www:/www
    networks:
      wgnet:
        ipv4_address: 172.20.0.4
    restart: unless-stopped
  api:
    build:
      context: api
      dockerfile: Dockerfile
    depends_on:
      - wireguard
      - caddy
    container_name: api
    volumes:
      - ${PWD}/config:/etc/wireguard/
    networks:
      wgnet:
        ipv4_address: 172.20.0.3
    restart: unless-stopped
networks:
  wgnet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/24
          gateway: 172.20.0.1
d. My complete Caddy config:
{
  "admin": {
    "listen": "0.0.0.0:2019",
    "origins": ["172.20.0.3", "127.0.0.1"],
    "enforce_origin": false
  },
  "apps": {
    "http": {
      "grace_period": 2000000000,
      "servers": {
        "srv0": {
          "listen": [":443", ":80"],
          "routes": [],
          "automatic_https": {
            "disable": false,
            "disable_redirects": false
          }
        }
      }
    }
  }
}
3. The problem I’m having:
I’m running a stack of WireGuard + Caddy + a Python script that drives the Caddy admin API. Caddy reverse proxies connections through the WireGuard tunnel, and the Python script registers subdomains and assigns each one an upstream address on its own route. Each component runs in its own container, and the Python script uses urllib.request to talk to Caddy, along the lines of the sketch below.
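A simplified sketch of those registration calls (the real script builds routes dynamically; add_route, the admin address, and the example host/upstream here are illustrative):

import json
import urllib.request

CADDY_ADMIN = "http://172.20.0.4:2019"  # Caddy's admin endpoint on wgnet

def add_route(host, upstream):
    """Register a subdomain and point it at an upstream over the tunnel."""
    route = {
        "match": [{"host": [host]}],
        "handle": [{
            "handler": "reverse_proxy",
            "upstreams": [{"dial": upstream}],
        }],
    }
    # POSTing to an array path in Caddy's config API appends the element.
    req = urllib.request.Request(
        CADDY_ADMIN + "/config/apps/http/servers/srv0/routes",
        data=json.dumps(route).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The context manager reads and closes the response, so the socket
    # isn't left open on the admin port.
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read()

# e.g. add_route("peer1.nativeplanet.live", "10.13.13.2:80")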
It works for a while, but eventually Caddy stops responding and prints endless error messages about TCP i/o timeouts. My guess is that the Python side is leaving too many connections open, but I’m not sure how to troubleshoot that or confirm what is actually going wrong; I’m a bit of a novice at this. Any help is appreciated.
4. Error messages and/or full log output:
After a certain point, the Caddy logs fill with many hundreds of lines like the following:
{"level":"info","ts":1662308551.5569315,"msg":"http: Accept error: accept tcp [::]:2019: i/o timeout; retrying in 1s"}
{"level":"info","ts":1662308552.557097,"msg":"http: Accept error: accept tcp [::]:2019: i/o timeout; retrying in 1s"}
{"level":"info","ts":1662308553.55731,"msg":"http: Accept error: accept tcp [::]:2019: i/o timeout; retrying in 1s"}
Once this begins showing up in the logs, Caddy’s API becomes totally unresponsive.
5. What I already tried:
I’ve tried adding a 2-second timeout on the Python request side, and I’ve tried adding a grace_period parameter to my Caddy config (the 2000000000 above is in nanoseconds, i.e. 2s). I’m not sure what is causing this, so I’m not sure what else to try.
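Would counting open sockets on the admin port from inside the container confirm (or rule out) the leaked-connection theory? Something like this, assuming the Alpine image’s busybox netstat:

docker exec caddy netstat -tn | grep ':2019' | wc -l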