1. The problem I’m having:
I am looking at moving from Traefik to Caddy. I am trying to reverse proxy to a service running on my Unraid server; Caddy runs on a different server on the same LAN. I have no problems proxying services running in Docker on the same host as Caddy: I have over 30 containers running various services at the moment, and all of them work fine via subdomains, just as they do under my current Traefik setup. I also have 5 services running on the separate Unraid box that I would like to access via subdomains.
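For context, the same-host Docker services are proxied via container labels. A sketch of the shape I'm using (the service and image here are illustrative; the `caddy` network and `whoami.stalepopcorn.me` domain match the generated Caddyfile in the logs below):

```yaml
# Example labeled service on the same host as Caddy.
# caddy-docker-proxy reads these labels and generates the site block.
whoami:
  image: traefik/whoami
  networks:
    - caddy
  labels:
    caddy: whoami.stalepopcorn.me
    # {{upstreams 80}} resolves to the container's IP on the shared network.
    caddy.reverse_proxy: "{{upstreams 80}}"
```

Services configured this way all work; it's only the upstreams on the other machine that time out.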
2. Error messages and/or full log output:
I can't see any errors related to my problem in the log. curl reports no error either; the request simply times out.
{"level":"info","ts":1688364383.0752215,"logger":"docker-proxy","msg":"Running caddy proxy server"}
{"level":"info","ts":1688364383.075937,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1688364383.0760732,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1688364383.076091,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
{"level":"info","ts":1688364383.0766711,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"/Caddyfile","LabelPrefix":"caddy","PollingInterval":30,"ProcessCaddyfile":true,"ProxyServiceTasks":true,"IngressNetworks":"[caddy]","DockerSockets":[""],"DockerCertsPath":[""],"DockerAPIsVersion":[""]}
{"level":"info","ts":1688364383.0774283,"logger":"docker-proxy","msg":"Connecting to docker events","DockerSocket":""}
{"level":"info","ts":1688364383.0776498,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[5851579be9e276c752eb0fca0aa3ddfeaa05123eaa99b0a588c3e161ba0e04cb:true caddy:true]"}
{"level":"info","ts":1688364383.081964,"logger":"docker-proxy","msg":"Swarm is available","new":false}
{"level":"info","ts":1688364383.0852675,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"{\n\tdebug\n}\nhttp://plex.stalepopcorn.me {\n\treverse_proxy https://192.168.0.69:32400\n}\nsonarr.stalepopcorn.me {\n\treverse_proxy 192.168.90.7:8989\n}\nstalepopcorn.me {\n\treverse_proxy 192.168.90.5:3000\n}\nwhoami.stalepopcorn.me {\n\treverse_proxy 192.168.90.2\n}\n"}
{"level":"info","ts":1688364383.085629,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"logging\":{\"logs\":{\"default\":{\"level\":\"DEBUG\"}}},\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"sonarr.stalepopcorn.me\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\",\"upstreams\":[{\"dial\":\"192.168.90.7:8989\"}]}]}]}],\"terminal\":true},{\"match\":[{\"host\":[\"whoami.stalepopcorn.me\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\",\"upstreams\":[{\"dial\":\"192.168.90.2:80\"}]}]}]}],\"terminal\":true},{\"match\":[{\"host\":[\"stalepopcorn.me\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\",\"upstreams\":[{\"dial\":\"192.168.90.5:3000\"}]}]}]}],\"terminal\":true}]},\"srv1\":{\"listen\":[\":80\"],\"routes\":[{\"match\":[{\"host\":[\"plex.stalepopcorn.me\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\",\"transport\":{\"protocol\":\"http\",\"tls\":{}},\"upstreams\":[{\"dial\":\"192.168.0.69:32400\"}]}]}]}],\"terminal\":true}]}}}}}"}
{"level":"info","ts":1688364383.0856652,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
{"level":"info","ts":1688364383.086132,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"41534","headers":{"Accept-Encoding":["gzip"],"Content-Length":["998"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
{"level":"info","ts":1688364383.08653,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1688364383.0866427,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1688364383.0866542,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"warn","ts":1688364383.086663,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv1","http_port":80}
{"level":"info","ts":1688364383.0867417,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0001a50a0"}
{"level":"info","ts":1688364383.0870187,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1688364383.0870671,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
{"level":"debug","ts":1688364383.0871084,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":true}
{"level":"info","ts":1688364383.0871172,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"debug","ts":1688364383.0871463,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
{"level":"info","ts":1688364383.087167,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
{"level":"info","ts":1688364383.087176,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["stalepopcorn.me","sonarr.stalepopcorn.me","whoami.stalepopcorn.me"]}
{"level":"debug","ts":1688364383.087359,"logger":"tls","msg":"loading managed certificate","domain":"stalepopcorn.me","expiration":1696131462,"issuer_key":"acme-v02.api.letsencrypt.org-directory","storage":"FileStorage:/data/caddy"}
{"level":"debug","ts":1688364383.0875108,"logger":"tls.cache","msg":"added certificate to cache","subjects":["stalepopcorn.me"],"expiration":1696131462,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"881befff315408f4cb38e7a03779131ab9f398959794971eae94ce73897a1534","cache_size":1,"cache_capacity":10000}
{"level":"debug","ts":1688364383.08754,"logger":"events","msg":"event","name":"cached_managed_cert","id":"82bb73fc-dafd-4741-a1e8-e79342e708d9","origin":"tls","data":{"sans":["stalepopcorn.me"]}}
{"level":"debug","ts":1688364383.0876942,"logger":"tls","msg":"loading managed certificate","domain":"sonarr.stalepopcorn.me","expiration":1696131462,"issuer_key":"acme-v02.api.letsencrypt.org-directory","storage":"FileStorage:/data/caddy"}
{"level":"debug","ts":1688364383.0878313,"logger":"tls.cache","msg":"added certificate to cache","subjects":["sonarr.stalepopcorn.me"],"expiration":1696131462,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"238cdd472ddb91fb820fde1bd8df0dc484d29fa118315d4668fa96f1a5952f99","cache_size":2,"cache_capacity":10000}
{"level":"debug","ts":1688364383.087842,"logger":"events","msg":"event","name":"cached_managed_cert","id":"ba6d3814-2fc7-40f5-818d-c50915d1a503","origin":"tls","data":{"sans":["sonarr.stalepopcorn.me"]}}
{"level":"debug","ts":1688364383.0879793,"logger":"tls","msg":"loading managed certificate","domain":"whoami.stalepopcorn.me","expiration":1696131462,"issuer_key":"acme-v02.api.letsencrypt.org-directory","storage":"FileStorage:/data/caddy"}
{"level":"debug","ts":1688364383.088096,"logger":"tls.cache","msg":"added certificate to cache","subjects":["whoami.stalepopcorn.me"],"expiration":1696131462,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"99d321a275dd825c39d74c80ee3d20b42c8d2a3976a25a20ca89519ae70525fe","cache_size":3,"cache_capacity":10000}
{"level":"debug","ts":1688364383.0881128,"logger":"events","msg":"event","name":"cached_managed_cert","id":"98387635-e431-4f76-99a5-28d5050c7723","origin":"tls","data":{"sans":["whoami.stalepopcorn.me"]}}
{"level":"info","ts":1688364383.0881982,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1688364383.0882165,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1688364383.0882306,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1688364383.0883698,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1688364383.0887082,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1688364383.0905068,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
{"level":"debug","ts":1688364413.0776412,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"debug","ts":1688364413.0826175,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"debug","ts":1688364443.0783508,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"debug","ts":1688364443.0833337,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"debug","ts":1688364473.087966,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"debug","ts":1688364473.0926056,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
3. Caddy version:
v2.6.4 h1:2hwYqiRwk1tf3VruhMpLcYTg+11fCdr8S3jhNAdnPy8=
4. How I installed and ran Caddy:
Installed via Docker Compose, using Caddy Docker Proxy (lucaslorentz/caddy-docker-proxy). The compose service definition is in section c below.
a. System environment:
Ubuntu 23.04
Docker CE 24.0.2
b. Command:
docker compose up -d
c. Service/unit/compose file:
caddy:
  image: lucaslorentz/caddy-docker-proxy:ci-alpine
  container_name: caddy
  ports:
    - "80:80"
    - "443:443"
  environment:
    - CADDY_INGRESS_NETWORKS=caddy
    - CADDY_DOCKER_CADDYFILE_PATH=/Caddyfile
  networks:
    - caddy
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./appdata/caddy/data:/data
    - ./appdata/caddy/config:/config
    - ./appdata/caddy/Caddyfile:/Caddyfile
  restart: unless-stopped
d. My complete Caddy config:
Plex is reachable directly at the LAN IP and port used below.
{
	debug
}

plex.stalepopcorn.me {
	reverse_proxy 192.168.0.69:32400
}
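For completeness, I also tried pointing at Plex over HTTPS; this is the variant visible in the generated Caddyfile in the logs above:

```
http://plex.stalepopcorn.me {
	reverse_proxy https://192.168.0.69:32400
}
```

One guess on my part, not yet verified: if port 32400 serves TLS with a certificate Caddy can't validate, the transport might need `tls_insecure_skip_verify`, along the lines of the sketch below. I'm noting it as an assumption, not something I've confirmed fixes anything, since plain curl to the Caddy host times out before any TLS issue would surface.

```
# Assumed variant, untested: skip upstream certificate verification for Plex.
plex.stalepopcorn.me {
	reverse_proxy https://192.168.0.69:32400 {
		transport http {
			tls_insecure_skip_verify
		}
	}
}
```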