1. The problem I’m having:
I’m using the caddy-docker-proxy container (not the stock caddy image), launched via docker-compose:
caddy:
image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
I’m trying to modify the global bind settings, but Caddy doesn’t seem to be picking up my Caddyfile.
I’m mounting the caddyfile from the host via:
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /opt/kuma-monitor/caddydata/:/data
- /opt/kuma-monitor/caddyconfig/:/config
- /opt/kuma-monitor/Caddyfile:/etc/caddy/Caddyfile
If I start the container and cd into /etc/caddy, I can see the Caddyfile has been mounted into the container and its contents are correct:
docker exec -it kuma-monitor_caddy_1 /bin/sh
/ # cd /etc/caddy/
/etc/caddy # cat Caddyfile
{
debug
default_bind [::]
}
/etc/caddy #
However, when Caddy starts it still binds to IPv4 only, which makes me assume it isn’t reading the Caddyfile.
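One way to confirm which configuration Caddy actually loaded is to query its admin API from inside the container (a diagnostic sketch; it assumes the admin endpoint is on its default localhost:2019, as the logs suggest, and uses the container name from my compose project):

```shell
# Dump the active JSON config from Caddy's admin API inside the container.
# busybox wget is available in the alpine-based image; adjust the container
# name to match your compose project.
docker exec -it kuma-monitor_caddy_1 \
  wget -qO- http://localhost:2019/config/
# If the global options from /etc/caddy/Caddyfile had been applied, the
# listener addresses in the JSON would reflect the [::] default_bind.
```

This needs the container to be running; the output is the same JSON that appears in the "New Config JSON" log line below.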
I’ve tried putting some garbage into the Caddyfile to force an error (I was hoping an error would prove that Caddy was reading my Caddyfile).
But I see no errors in the Caddy logs.
My bad Caddyfile:
lkla 223!@#%!@!# garbage to break caddy
{
debug
default_bind [::]
}
The logs with the bad caddyfile:
docker-compose up
Creating network "kuma-monitor_default" with the default driver
Creating network "kuma-monitor_ip6net" with the default driver
Creating kuma-monitor_caddy_1 ... done
Creating kuma-monitor_uptime-kuma_1 ... done
Attaching to kuma-monitor_uptime-kuma_1, kuma-monitor_caddy_1
uptime-kuma_1 | ==> Performing startup jobs and maintenance tasks
uptime-kuma_1 | ==> Starting application with user 0 group 0
uptime-kuma_1 | Welcome to Uptime Kuma
uptime-kuma_1 | Your Node.js version: 18.19.0
caddy_1 | {"level":"info","ts":1709765422.9876347,"logger":"docker-proxy","msg":"Running caddy proxy server"}
caddy_1 | {"level":"info","ts":1709765422.9891837,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//127.0.0.1:2019","//localhost:2019","//[::1]:2019"]}
caddy_1 | {"level":"info","ts":1709765422.9894218,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy_1 | {"level":"info","ts":1709765422.9894392,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
caddy_1 | {"level":"info","ts":1709765422.990294,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"","EnvFile":"","LabelPrefix":"caddy","PollingInterval":30,"ProxyServiceTasks":true,"ProcessCaddyfile":true,"ScanStoppedContainers":false,"IngressNetworks":"[proxy_network]","DockerSockets":[""],"DockerCertsPath":[""],"DockerAPIsVersion":[""]}
caddy_1 | {"level":"info","ts":1709765422.9910564,"logger":"docker-proxy","msg":"Connecting to docker events","DockerSocket":""}
caddy_1 | {"level":"info","ts":1709765422.9924078,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[ad44255a0712950d2b5ea4e85bc9d6dcc74603308d672a0112bcd8b1ede7a0f6:true proxy_network:true]"}
caddy_1 | {"level":"info","ts":1709765423.0031345,"logger":"docker-proxy","msg":"Swarm is available","new":false}
caddy_1 | {"level":"warn","ts":1709765423.005465,"logger":"docker-proxy","msg":"Container is not in same network as caddy","container":"dcbdbbe7f3f60a705d29bfe15bb36ddbbf9ccd839a791f7977c492a256a520a2","container id":"dcbdbbe7f3f60a705d29bfe15bb36ddbbf9ccd839a791f7977c492a256a520a2"}
caddy_1 | {"level":"info","ts":1709765423.0068781,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"status.onepub.dev {\n\treverse_proxy *\n}\n"}
caddy_1 | {"level":"info","ts":1709765423.0073905,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"status.onepub.dev\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\"}]}]}],\"terminal\":true}]}}}}}"}
caddy_1 | {"level":"info","ts":1709765423.0074458,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
caddy_1 | {"level":"info","ts":1709765423.009097,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"51350","headers":{"Accept-Encoding":["gzip"],"Content-Length":["254"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
caddy_1 | {"level":"info","ts":1709765423.0105212,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//[::1]:2019","//127.0.0.1:2019","//localhost:2019"]}
caddy_1 | {"level":"info","ts":1709765423.0108244,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy_1 | {"level":"info","ts":1709765423.01086,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy_1 | {"level":"info","ts":1709765423.0112484,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000344880"}
caddy_1 | {"level":"info","ts":1709765423.0124874,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
caddy_1 | {"level":"info","ts":1709765423.0127246,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
caddy_1 | {"level":"info","ts":1709765423.0131774,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy_1 | {"level":"info","ts":1709765423.013352,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy_1 | {"level":"info","ts":1709765423.0134156,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["status.onepub.dev"]}
caddy_1 | {"level":"info","ts":1709765423.0155587,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy_1 | {"level":"info","ts":1709765423.01569,"logger":"admin.api","msg":"load complete"}
caddy_1 | {"level":"info","ts":1709765423.01602,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
caddy_1 | {"level":"warn","ts":1709765423.016368,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"295fd264-38cf-4f7b-a3c5-3ca6486a7163","try_again":1709851823.0163653,"try_again_in":86399.999999476}
caddy_1 | {"level":"info","ts":1709765423.0165293,"logger":"tls","msg":"finished cleaning storage units"}
caddy_1 | {"level":"info","ts":1709765423.0193162,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
uptime-kuma_1 | 2024-03-06T22:50:23Z [SERVER] INFO: Welcome to Uptime Kuma
uptime-kuma_1 | 2024-03-06T22:50:23Z [SERVER] INFO: Node Env: production
uptime-kuma_1 | 2024-03-06T22:50:23Z [SERVER] INFO: Inside Container: true
uptime-kuma_1 | 2024-03-06T22:50:23Z [SERVER] INFO: Importing Node libraries
uptime-kuma_1 | 2024-03-06T22:50:23Z [SERVER] INFO: Importing 3rd-party libraries
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Creating express and socket.io instance
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Server Type: HTTP
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Importing this project modules
uptime-kuma_1 | 2024-03-06T22:50:24Z [NOTIFICATION] INFO: Prepare Notification Providers
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Version: 1.23.11
uptime-kuma_1 | 2024-03-06T22:50:24Z [DB] INFO: Data Dir: ./data/
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Connecting to the Database
uptime-kuma_1 | 2024-03-06T22:50:24Z [DB] INFO: SQLite config:
uptime-kuma_1 | [ { journal_mode: 'wal' } ]
uptime-kuma_1 | [ { cache_size: -12000 } ]
uptime-kuma_1 | 2024-03-06T22:50:24Z [DB] INFO: SQLite Version: 3.41.1
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Connected
uptime-kuma_1 | 2024-03-06T22:50:24Z [DB] INFO: Your database version: 10
uptime-kuma_1 | 2024-03-06T22:50:24Z [DB] INFO: Latest database version: 10
uptime-kuma_1 | 2024-03-06T22:50:24Z [DB] INFO: Database patch not needed
uptime-kuma_1 | 2024-03-06T22:50:24Z [DB] INFO: Database Patch 2.0 Process
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Load JWT secret from database.
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: No user, need setup
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Adding route
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Adding socket handler
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Init the server
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Listening on 3001
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVICES] INFO: Starting nscd
2. Error messages and/or full log output:
As above.
3. Caddy version:
uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Version: 1.23.11
This information really should be at the top of the caddy log output.
4. How I installed and ran Caddy:
As above, using docker-compose and the caddy-docker-proxy image.
a. System environment:
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.4 LTS
Release: 22.04
Codename: jammy
docker --version
Docker version 24.0.5, build 24.0.5-0ubuntu1~22.04.1
b. Command:
docker-compose up
c. Service/unit/compose file:
networks:
default:
name: 'proxy_network'
services:
uptime-kuma:
image: louislam/uptime-kuma:1
restart: unless-stopped
volumes:
- /opt/kuma-monitor/kumadata:/app/data
ports:
- 2052:3001
labels:
caddy: status.onepub.dev
caddy.reverse_proxy: "* {{upstreams 2052}}"
caddy:
image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
ports:
- "80:80"
- "443:443"
- "443:443/udp" # I assume for quic
networks:
- ip6net
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /opt/kuma-monitor/caddydata/:/data
- /opt/kuma-monitor/caddyconfig/:/config
- /opt/kuma-monitor/Caddyfile:/etc/caddy/Caddyfile
restart: unless-stopped
environment:
- CADDY_INGRESS_NETWORKS=proxy_network
#- CADDY_DOCKER_CADDYFILE_PATH=/data/CaddyFile
networks:
ip6net:
enable_ipv6: true
ipam:
config:
#- subnet: 2401:da80:1000:2::/64
- subnet: 2600:1900:4180:bfa7:0:0:0:0/64
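A side note on the commented-out line in the environment section above: the startup log shows CaddyfilePath:"", and caddy-docker-proxy reads its base Caddyfile path from the CADDY_DOCKER_CADDYFILE_PATH environment variable rather than automatically loading the stock image's /etc/caddy/Caddyfile location. A sketch of how that section might look if the variable pointed at the file already mounted above (an untested assumption on my part):

```yaml
    environment:
      - CADDY_INGRESS_NETWORKS=proxy_network
      # Hypothetical fix: point caddy-docker-proxy at the mounted Caddyfile.
      # The CaddyfilePath:"" log line suggests no base Caddyfile is configured.
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
```

If that is the mechanism, the mount itself would be working and only the path wiring would be missing.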
d. My complete Caddy config:
lkla 223!@#%!@!# garbage to break caddy
{
debug
default_bind [::]
}
5. Links to relevant resources:
The Docker Hub page for the caddy image:
https://hub.docker.com/_/caddy
I’m following the below section of the documentation:
To override the default Caddyfile, you can mount a new one at /etc/caddy/Caddyfile:
$ docker run -d -p 80:80 \
-v $PWD/Caddyfile:/etc/caddy/Caddyfile \
-v caddy_data:/data \
caddy
Of course I translated this into docker-compose volume mounts.
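For reference, the docker run example from the docs maps onto compose roughly like this (a minimal sketch using a named volume, as in the Docker Hub documentation):

```yaml
services:
  caddy:
    image: caddy
    ports:
      - "80:80"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

volumes:
  caddy_data:
```

My compose file above uses host-path mounts instead of a named volume, but the /etc/caddy/Caddyfile target is the same.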