Wildcard Certificate using Docker Container with DNS Challenge

1. The problem I’m having:

I would like to set up a reverse proxy for my services (Proxmox, Jellyfin, etc.) so I can access them by domain name instead of IP address, without exposing any ports.

I have created an A record for the domain in my Cloudflare DNS.

I have set up Caddy in Docker containers and tested it with a Gitea container, expecting to reach Gitea at git.brewx.my.id.
However, it does not work. The Caddy logs are below.

2. Error messages and/or full log output:


pi@rasppi:~/containers/caddy_project/caddy $ docker logs caddy
{"level":"info","ts":1710918444.5609567,"logger":"docker-proxy","msg":"Running caddy proxy server"}
{"level":"info","ts":1710918444.565335,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1710918444.565869,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1710918444.5659153,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
{"level":"info","ts":1710918444.5697014,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"","EnvFile":"","LabelPrefix":"caddy","PollingInterval":30,"ProxyServiceTasks":true,"ProcessCaddyfile":true,"ScanStoppedContainers":true,"IngressNetworks":"[]","DockerSockets":[""],"DockerCertsPath":[""],"DockerAPIsVersion":[""]}
{"level":"info","ts":1710918444.5751677,"logger":"docker-proxy","msg":"Connecting to docker events","DockerSocket":""}
{"level":"info","ts":1710918444.576002,"logger":"docker-proxy","msg":"Caddy ContainerID","ID":"02ea27fb49ed5b0a210a2ecd39bb7d680311be6a720a34d6f3f5a6dc0d010b85"}
{"level":"info","ts":1710918444.589329,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[8a5cc6a3aeb032c4848382bd118d424a749131680235e00df1804144b50bd99c:true caddy-network:true]"}
{"level":"info","ts":1710918444.6207874,"logger":"docker-proxy","msg":"Swarm is available","new":false}
{"level":"info","ts":1710918444.6453142,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"*.brewx.my.id {\n\t@vaultwarden host vault.brewx.my.id\n\t@gitea host git.brewx.my.id\n\thandle @vaultwarden {\n\t\treverse_proxy :80\n\t}\n\thandle @gitea {\n\t\treverse_proxy 172.18.0.3:3000\n\t}\n\tencode gzip\n\theader {\n\t\t-Last-Modified\n\t\t-Server\n\t\t-X-Powered-By\n\t\tStrict-Transport-Security max-age=31536000;\n\t\tX-Content-Type-Options nosniff\n\t\tX-Frame-Options SAMEORIGIN\n\t\tX-Robots-Tag noindex, nofollow\n\t\tX-XSS-Protection 1; mode=block\n\t}\n}\n"}
{"level":"info","ts":1710918444.6495988,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"*.brewx.my.id\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"headers\",\"response\":{\"deferred\":true,\"delete\":[\"Last-Modified\",\"Server\",\"X-Powered-By\"],\"replace\":{\"X-Robots-Tag\":[{\"replace\":\"nofollow\",\"search_regexp\":\"noindex,\"}],\"X-XSS-Protection\":[{\"replace\":\"mode=block\",\"search_regexp\":\"1;\"}]},\"set\":{\"Strict-Transport-Security\":[\"max-age=31536000;\"],\"X-Content-Type-Options\":[\"nosniff\"],\"X-Frame-Options\":[\"SAMEORIGIN\"]}}},{\"encodings\":{\"gzip\":{}},\"handler\":\"encode\",\"prefer\":[\"gzip\"]}]},{\"group\":\"group2\",\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\",\"upstreams\":[{\"dial\":\":80\"}]}]}]}],\"match\":[{\"host\":[\"vault.brewx.my.id\"]}]},{\"group\":\"group2\",\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\",\"upstreams\":[{\"dial\":\"172.18.0.3:3000\"}]}]}]}],\"match\":[{\"host\":[\"git.brewx.my.id\"]}]}]}],\"terminal\":true}]}}}}}"}
{"level":"info","ts":1710918444.6498141,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
{"level":"info","ts":1710918444.6525726,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"53360","headers":{"Accept-Encoding":["gzip"],"Content-Length":["1021"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
{"level":"info","ts":1710918444.6563332,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1710918444.657117,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1710918444.6571681,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x400078a600"}
{"level":"info","ts":1710918444.6572075,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1710918444.660419,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1710918444.660871,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
{"level":"info","ts":1710918444.661576,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1710918444.661926,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1710918444.661974,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["*.brewx.my.id"]}
{"level":"info","ts":1710918444.6647954,"logger":"tls.obtain","msg":"acquiring lock","identifier":"*.brewx.my.id"}
{"level":"info","ts":1710918444.666001,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1710918444.6660635,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1710918444.6675334,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1710918444.6740205,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
{"level":"info","ts":1710918444.677343,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
{"level":"info","ts":1710918444.6799412,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1710918444.6804833,"logger":"tls.obtain","msg":"lock acquired","identifier":"*.brewx.my.id"}
{"level":"info","ts":1710918444.681314,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"*.brewx.my.id"}
{"level":"info","ts":1710918446.0278885,"logger":"http","msg":"waiting on internal rate limiter","identifiers":["*.brewx.my.id"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}
{"level":"info","ts":1710918446.027991,"logger":"http","msg":"done waiting on internal rate limiter","identifiers":["*.brewx.my.id"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}
{"level":"error","ts":1710918446.8841522,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"*.brewx.my.id","issuer":"acme-v02.api.letsencrypt.org-directory","error":"[*.brewx.my.id] solving challenges: *.brewx.my.id: no solvers available for remaining challenges (configured=[http-01 tls-alpn-01] offered=[dns-01] remaining=[dns-01]) (order=https://acme-v02.api.letsencrypt.org/acme/order/1627876817/253835023237) (ca=https://acme-v02.api.letsencrypt.org/directory)"}
{"level":"warn","ts":1710918446.8846185,"logger":"http","msg":"missing email address for ZeroSSL; it is strongly recommended to set one for next time"}
{"level":"info","ts":1710918448.6534657,"logger":"http","msg":"generated EAB credentials","key_id":"CONfkdYZhl-JAsr7_bpHQA"}

3. Caddy version:

The caddy:alpine Docker image (custom build, see Dockerfile below)

4. How I installed and ran Caddy:

a. System environment:

Hardware : Raspberry Pi 4B
OS : Raspberry Pi OS Lite (64-bit)
Caddy runs in a Docker container

b. Command:


c. Service/unit/compose file:

services:
  docker-proxy:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: caddy
    restart: unless-stopped
    env_file: .env
    ports:
      - 80:80
      - 443:443
    networks:
      - caddy-network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data/caddy
      - ./config:/config/caddy
    deploy:
      labels:
        caddy.email: "email@gmail.com"
        caddy: "*.brewx.my.id"
        caddy.tls.dns: "cloudflare $CF_API_TOKEN"

volumes:
  data: {}

networks:
  caddy-network:
    external: true

d. My complete Caddy config:

Dockerfile :

FROM --platform=linux/arm64/v8 caddy:builder-alpine AS builder

RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/caddy-dns/cloudflare

FROM caddy:alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

CMD ["caddy", "docker-proxy"]

.env file

# Cloudflare API token should be scoped:
# - Zone.Zone: Read
# - Zone.DNS: Edit
CF_API_TOKEN="PnBBlbHIDDENA6KaCt"

I tested it with a Gitea container using the following compose file:


services:
  server:
    image: gitea/gitea:latest
    container_name: gitea
    restart: always
    environment:
      - USER_UID=1000
      - USER_GID=1000
    networks:
      - caddy-network
    volumes:
      - ./data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    labels:
      caddy: "*.brewx.my.id"
      caddy.2_handle: "@gitea"
      caddy.2_@gitea: "host git.brewx.my.id"
      caddy.2_handle.reverse_proxy: "{{upstreams 3000}}"
      caddy_2_handle.reverse_proxy_0: "{{upstreams 2222}}"

networks:
  caddy-network:
    external: true

5. Links to relevant resources:

https://mijo.remotenode.io/posts/tailscale-caddy-docker/

The env var syntax is {$CF_API_TOKEN}; you need the braces around it.

I don’t see tls in here though. I think the labels on your caddy container aren’t taking effect. Are you sure you restarted the docker stack after adding the labels?
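For reference, once the tls label takes effect, caddy-docker-proxy should render a site block roughly like this (a sketch assembled from the labels in this thread, not actual output):

```
*.brewx.my.id {
	tls {
		dns cloudflare {$CF_API_TOKEN}
	}
	@gitea host git.brewx.my.id
	handle @gitea {
		reverse_proxy 172.18.0.3:3000
	}
}
```

The tls { dns cloudflare ... } block is what enables the dns-01 challenge; without it Caddy only has the http-01 and tls-alpn-01 solvers, which is exactly what the "no solvers available for remaining challenges" error is saying. A wildcard certificate can only be validated via dns-01.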

I have changed the compose file as follows:

docker-compose
services:
  docker-proxy:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: caddy
    restart: unless-stopped
    env_file: .env
    ports:
      - 80:80
      - 443:443
    networks:
      - caddy-network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data/caddy
      - ./config:/config/caddy
    deploy:
      labels:
        caddy.email: "email@gmail.com"
        caddy: "*.mine.my.id"
        caddy.tls.dns: "cloudflare {$CF_API_TOKEN}"

volumes:
  data: {}

networks:
  caddy-network:
    external: true

and kept everything else the same (.env file, Gitea compose).

.env file
# Cloudflare API token should be scoped:
# - Zone.Zone: Read
# - Zone.DNS: Edit
CF_API_TOKEN="PnBBlbHIDt"
gitea compose
services:
  server:
    image: gitea/gitea:latest
    container_name: gitea
    restart: always
    environment:
      - USER_UID=1000
      - USER_GID=1000
    networks:
      - caddy-network
    volumes:
      - ./data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    labels:
      caddy: "*.mine.my.id"
      caddy.2_handle: "@gitea"
      caddy.2_@gitea: "host git.mine.my.id"
      caddy.2_handle.reverse_proxy: "{{upstreams 3000}}"
      caddy_2_handle.reverse_proxy_0: "{{upstreams 2222}}"

networks:
  caddy-network:
    external: true

I even removed the Caddy and Gitea containers, deleted the directories they had created, and restarted both with docker compose up.

I still get the log below:

pi@rasppi:~/containers/caddy_project/caddy $ docker logs caddy -t -f
2024-03-21T02:04:10.362620915Z {"level":"info","ts":1710986650.3605306,"logger":"docker-proxy","msg":"Running caddy proxy server"}
2024-03-21T02:04:10.367880151Z {"level":"info","ts":1710986650.367465,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
2024-03-21T02:04:10.368321794Z {"level":"info","ts":1710986650.3680744,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
2024-03-21T02:04:10.368378423Z {"level":"info","ts":1710986650.3681278,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
2024-03-21T02:04:10.370989116Z {"level":"info","ts":1710986650.3706486,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"","EnvFile":"","LabelPrefix":"caddy","PollingInterval":30,"ProxyServiceTasks":true,"ProcessCaddyfile":true,"ScanStoppedContainers":true,"IngressNetworks":"[]","DockerSockets":[""],"DockerCertsPath":[""],"DockerAPIsVersion":[""]}
2024-03-21T02:04:10.372637560Z {"level":"info","ts":1710986650.3723154,"logger":"docker-proxy","msg":"Caddy ContainerID","ID":"6edfcd1e4e25f3dd9687e5067a1eb8bc8d950aab1df3ab5c514a3e228f035af1"}
2024-03-21T02:04:10.373610753Z {"level":"info","ts":1710986650.373348,"logger":"docker-proxy","msg":"Connecting to docker events","DockerSocket":""}
2024-03-21T02:04:10.386235201Z {"level":"info","ts":1710986650.3857012,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[afd524d542b76ceb7ea6c1d4745e96c81af1256b8207f3066ac64c11dd554a1e:true caddy-network:true]"}
2024-03-21T02:04:10.418639777Z {"level":"info","ts":1710986650.4181955,"logger":"docker-proxy","msg":"Swarm is available","new":false}
2024-03-21T02:04:10.428810754Z {"level":"info","ts":1710986650.428503,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"# Empty caddyfile"}
2024-03-21T02:04:10.430078295Z {"level":"warn","ts":1710986650.429803,"logger":"docker-proxy","msg":"Caddyfile to json warning","warn":"[Caddyfile:1: Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies]"}
2024-03-21T02:04:10.430143479Z {"level":"info","ts":1710986650.4298625,"logger":"docker-proxy","msg":"New Config JSON","json":"{}"}
2024-03-21T02:04:10.430517882Z {"level":"info","ts":1710986650.4301841,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
2024-03-21T02:04:10.433214037Z {"level":"info","ts":1710986650.4327834,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"49530","headers":{"Accept-Encoding":["gzip"],"Content-Length":["41"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
2024-03-21T02:04:10.433657810Z {"level":"info","ts":1710986650.4334323,"msg":"config is unchanged"}
2024-03-21T02:04:10.433703957Z {"level":"info","ts":1710986650.4334788,"logger":"admin.api","msg":"load complete"}
2024-03-21T02:04:10.434339450Z {"level":"info","ts":1710986650.4339507,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
2024-03-21T02:04:25.597687636Z {"level":"warn","ts":1710986665.593551,"logger":"docker-proxy","msg":"Container is not in same network as caddy","container":"70cf747bb81e924950705b8c665c2ffb4971a39dc25195e26bb728deb7c9ba88","container id":"70cf747bb81e924950705b8c665c2ffb4971a39dc25195e26bb728deb7c9ba88"}
2024-03-21T02:04:25.602831781Z {"level":"info","ts":1710986665.600389,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"*.mine.my.id {\n\t@gitea host git.mine.my.id\n\thandle @gitea {\n\t\treverse_proxy\n\t}\n}\n"}
2024-03-21T02:04:25.607041899Z {"level":"info","ts":1710986665.6063676,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"*.mine.my.id\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\"}]}]}],\"match\":[{\"host\":[\"git.mine.my.id\"]}]}]}],\"terminal\":true}]}}}}}"}
2024-03-21T02:04:25.607210731Z {"level":"info","ts":1710986665.606833,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
2024-03-21T02:04:25.610715432Z {"level":"info","ts":1710986665.6083107,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"49530","headers":{"Accept-Encoding":["gzip"],"Content-Length":["336"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
2024-03-21T02:04:25.613862377Z {"level":"info","ts":1710986665.6123972,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//127.0.0.1:2019","//localhost:2019","//[::1]:2019"]}
2024-03-21T02:04:25.615020494Z {"level":"info","ts":1710986665.6146472,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40001b0080"}
2024-03-21T02:04:25.615433526Z {"level":"info","ts":1710986665.6151812,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
2024-03-21T02:04:25.615522044Z {"level":"info","ts":1710986665.6153088,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
2024-03-21T02:04:25.626777730Z {"level":"info","ts":1710986665.6259406,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
2024-03-21T02:04:25.627011709Z {"level":"info","ts":1710986665.6268191,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
2024-03-21T02:04:25.631276161Z {"level":"info","ts":1710986665.6294267,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
2024-03-21T02:04:25.632792365Z {"level":"info","ts":1710986665.62985,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
2024-03-21T02:04:25.632854087Z {"level":"info","ts":1710986665.6298995,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["*.mine.my.id"]}
2024-03-21T02:04:25.633687726Z {"level":"info","ts":1710986665.633306,"logger":"tls.obtain","msg":"acquiring lock","identifier":"*.mine.my.id"}
2024-03-21T02:04:25.637238741Z {"level":"info","ts":1710986665.6368895,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
2024-03-21T02:04:25.638739798Z {"level":"info","ts":1710986665.6376088,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
2024-03-21T02:04:25.641261251Z {"level":"info","ts":1710986665.6376824,"logger":"admin.api","msg":"load complete"}
2024-03-21T02:04:25.641303176Z {"level":"info","ts":1710986665.637884,"logger":"tls","msg":"finished cleaning storage units"}
2024-03-21T02:04:25.641326602Z {"level":"info","ts":1710986665.6384826,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
2024-03-21T02:04:25.645032282Z {"level":"info","ts":1710986665.644717,"logger":"tls.obtain","msg":"lock acquired","identifier":"*.mine.my.id"}
2024-03-21T02:04:25.645845976Z {"level":"info","ts":1710986665.6456296,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"*.mine.my.id"}
2024-03-21T02:04:25.656649649Z {"level":"info","ts":1710986665.6562936,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
2024-03-21T02:04:26.557239931Z {"level":"info","ts":1710986666.5564878,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"*.mine.my.id {\n\t@gitea host git.mine.my.id\n\thandle @gitea {\n\t\treverse_proxy 172.20.0.3:3000\n\t}\n}\n"}
2024-03-21T02:04:26.560203361Z {"level":"info","ts":1710986666.5597537,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"*.mine.my.id\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\",\"upstreams\":[{\"dial\":\"172.20.0.3:3000\"}]}]}]}],\"match\":[{\"host\":[\"git.mine.my.id\"]}]}]}],\"terminal\":true}]}}}}}"}
2024-03-21T02:04:26.560654763Z {"level":"info","ts":1710986666.559957,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
2024-03-21T02:04:26.565219470Z {"level":"info","ts":1710986666.5634825,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"54290","headers":{"Accept-Encoding":["gzip"],"Content-Length":["377"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
2024-03-21T02:04:26.575289541Z {"level":"info","ts":1710986666.5748198,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
2024-03-21T02:04:26.579240255Z {"level":"info","ts":1710986666.5787165,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
2024-03-21T02:04:26.581501970Z {"level":"info","ts":1710986666.5810702,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
2024-03-21T02:04:26.585711626Z {"level":"info","ts":1710986666.5847526,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
2024-03-21T02:04:26.586597319Z {"level":"info","ts":1710986666.5853577,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
2024-03-21T02:04:26.588163061Z {"level":"info","ts":1710986666.5870647,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
2024-03-21T02:04:26.588254226Z {"level":"info","ts":1710986666.5872312,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["*.mine.my.id"]}
2024-03-21T02:04:26.588328633Z {"level":"info","ts":1710986666.5881472,"logger":"http","msg":"servers shutting down with eternal grace period"}
2024-03-21T02:04:26.590262259Z {"level":"info","ts":1710986666.5897229,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
2024-03-21T02:04:26.590360036Z {"level":"info","ts":1710986666.5898309,"logger":"admin.api","msg":"load complete"}
2024-03-21T02:04:26.591962221Z {"level":"info","ts":1710986666.5899656,"logger":"tls.obtain","msg":"acquiring lock","identifier":"*.mine.my.id"}
2024-03-21T02:04:26.592412623Z {"level":"warn","ts":1710986666.5897017,"logger":"http.acme_client","msg":"HTTP request failed; retrying","url":"https://acme-v02.api.letsencrypt.org/acme/new-nonce","error":"performing request: Head \"https://acme-v02.api.letsencrypt.org/acme/new-nonce\": context canceled"}
2024-03-21T02:04:26.592486456Z {"level":"error","ts":1710986666.59074,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"*.mine.my.id","issuer":"acme-v02.api.letsencrypt.org-directory","error":"registering account [] with server: fetching new nonce from server: context canceled"}
2024-03-21T02:04:26.592520493Z {"level":"info","ts":1710986666.591332,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
2024-03-21T02:04:26.592546844Z {"level":"warn","ts":1710986666.5915432,"logger":"http","msg":"missing email address for ZeroSSL; it is strongly recommended to set one for next time"}
2024-03-21T02:04:26.592700787Z {"level":"error","ts":1710986666.5922887,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"*.mine.my.id","issuer":"acme.zerossl.com-v2-DV90","error":"account pre-registration callback: performing EAB credentials request: Post \"https://api.zerossl.com/acme/eab-credentials-email\": context canceled"}
2024-03-21T02:04:26.593357724Z {"level":"error","ts":1710986666.5929976,"logger":"tls.obtain","msg":"will retry","error":"[*.mine.my.id] Obtain: account pre-registration callback: performing EAB credentials request: Post \"https://api.zerossl.com/acme/eab-credentials-email\": context canceled","attempt":1,"retrying_in":60,"elapsed":0.948204962,"max_duration":2592000}
2024-03-21T02:04:26.593495907Z {"level":"info","ts":1710986666.59327,"logger":"tls.obtain","msg":"releasing lock","identifier":"*.mine.my.id"}
2024-03-21T02:04:26.594454471Z {"level":"error","ts":1710986666.5940819,"logger":"tls","msg":"job failed","error":"*.mine.my.id: obtaining certificate: context canceled"}
2024-03-21T02:04:26.596881665Z {"level":"info","ts":1710986666.596516,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
2024-03-21T02:04:31.951897817Z {"level":"info","ts":1710986671.9507916,"logger":"tls.obtain","msg":"lock acquired","identifier":"*.mine.my.id"}
2024-03-21T02:04:31.952180610Z {"level":"info","ts":1710986671.9519014,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"*.mine.my.id"}
2024-03-21T02:04:32.816237623Z {"level":"info","ts":1710986672.8155484,"logger":"http","msg":"waiting on internal rate limiter","identifiers":["*.mine.my.id"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}
2024-03-21T02:04:32.816356307Z {"level":"info","ts":1710986672.8157263,"logger":"http","msg":"done waiting on internal rate limiter","identifiers":["*.mine.my.id"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}
2024-03-21T02:04:33.469008912Z {"level":"error","ts":1710986673.4666672,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"*.mine.my.id","issuer":"acme-v02.api.letsencrypt.org-directory","error":"[*.mine.my.id] solving challenges: *.mine.my.id: no solvers available for remaining challenges (configured=[http-01 tls-alpn-01] offered=[dns-01] remaining=[dns-01]) (order=https://acme-v02.api.letsencrypt.org/acme/order/1629365207/254054223587) (ca=https://acme-v02.api.letsencrypt.org/directory)"}
2024-03-21T02:04:33.469152447Z {"level":"warn","ts":1710986673.4674954,"logger":"http","msg":"missing email address for ZeroSSL; it is strongly recommended to set one for next time"}
2024-03-21T02:04:35.173878276Z {"level":"info","ts":1710986675.1733856,"logger":"http","msg":"generated EAB credentials","key_id":"gHhhr6yRWkGd4hG0ouMY4A"}
2024-03-21T02:04:39.753387703Z {"level":"info","ts":1710986679.7530103,"logger":"http","msg":"waiting on internal rate limiter","identifiers":["*.mine.my.id"],"ca":"https://acme.zerossl.com/v2/DV90","account":""}
2024-03-21T02:04:39.753501868Z {"level":"info","ts":1710986679.753106,"logger":"http","msg":"done waiting on internal rate limiter","identifiers":["*.mine.my.id"],"ca":"https://acme.zerossl.com/v2/DV90","account":""}
2024-03-21T02:04:44.276026852Z {"level":"error","ts":1710986684.2756488,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"*.mine.my.id","issuer":"acme.zerossl.com-v2-DV90","error":"[*.mine.my.id] solving challenges: *.mine.my.id: no solvers available for remaining challenges (configured=[http-01 tls-alpn-01] offered=[dns-01] remaining=[dns-01]) (order=https://acme.zerossl.com/v2/DV90/order/359YKV8-iF2vwlSzp4dK6Q) (ca=https://acme.zerossl.com/v2/DV90)"}
2024-03-21T02:04:44.276170924Z {"level":"error","ts":1710986684.2758305,"logger":"tls.obtain","msg":"will retry","error":"[*.mine.my.id] Obtain: [*.mine.my.id] solving challenges: *.mine.my.id: no solvers available for remaining challenges (configured=[http-01 tls-alpn-01] offered=[dns-01] remaining=[dns-01]) (order=https://acme.zerossl.com/v2/DV90/order/359YKV8-iF2vwlSzp4dK6Q) (ca=https://acme.zerossl.com/v2/DV90)","attempt":1,"retrying_in":60,"elapsed":12.324830934,"max_duration":2592000}

Could you please help…?

I think the problem is that you have deploy: here, so the labels aren't actually on the service itself. Remove deploy: and un-indent the labels; that should fix it. Compare with your other docker-compose file, which doesn't have deploy:.
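In other words, labels under deploy: only apply to Docker Swarm services; with plain docker compose they never land on the container, so caddy-docker-proxy cannot see them. A minimal sketch of the intended placement (same labels as in this thread):

```yaml
services:
  docker-proxy:
    # ... build, ports, networks, volumes as before ...
    labels:                # directly under the service, not under deploy:
      caddy.email: "email@gmail.com"
      caddy: "*.brewx.my.id"
      caddy.tls.dns: "cloudflare {$CF_API_TOKEN}"
```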

I modified the compose file as follows:

docker-compose.yml
services:
  docker-proxy:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: caddy
    restart: unless-stopped
    env_file: .env
    ports:
      - 80:80
      - 443:443
    networks:
      - caddy-network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data/caddy
      - ./config:/config/caddy
    labels:
      caddy.email: "myemail@gmail.com"
      caddy: "*.mine.my.id"
      caddy.tls.dns: "cloudflare {$CF_API_TOKEN}"



volumes:
  data: {}

networks:
  caddy-network:
    external: true

But now I get new error logs:

Caddy logs
pi@rasppi:~/containers/caddy_project/caddy $ docker logs caddy -f
{"level":"info","ts":1710989300.082276,"logger":"docker-proxy","msg":"Running caddy proxy server"}
{"level":"info","ts":1710989300.0869105,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//[::1]:2019","//127.0.0.1:2019","//localhost:2019"]}
{"level":"info","ts":1710989300.0874388,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1710989300.0874887,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
{"level":"info","ts":1710989300.0898015,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"","EnvFile":"","LabelPrefix":"caddy","PollingInterval":30,"ProxyServiceTasks":true,"ProcessCaddyfile":true,"ScanStoppedContainers":true,"IngressNetworks":"[]","DockerSockets":[""],"DockerCertsPath":[""],"DockerAPIsVersion":[""]}
{"level":"info","ts":1710989300.091714,"logger":"docker-proxy","msg":"Caddy ContainerID","ID":"6e09f213422ba3558f93e162f6745dfbe5d2f0135f981dc4a001148373c99c42"}
{"level":"info","ts":1710989300.0924883,"logger":"docker-proxy","msg":"Connecting to docker events","DockerSocket":""}
{"level":"info","ts":1710989300.105203,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[afd524d542b76ceb7ea6c1d4745e96c81af1256b8207f3066ac64c11dd554a1e:true caddy-network:true]"}
{"level":"info","ts":1710989300.1367235,"logger":"docker-proxy","msg":"Swarm is available","new":false}
{"level":"info","ts":1710989300.1476536,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR]  Removing invalid block: Caddyfile:2: unrecognized directive: email\n*.mine.my.id {\n\temail mine@gmail.com\n\ttls {\n\t\tdns cloudflare {PnBBlbp8ESjqUPuWTjM6A6KaCt}\n\t}\n}\n\n"}
{"level":"info","ts":1710989300.147757,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"# Empty caddyfile"}
{"level":"warn","ts":1710989300.1489003,"logger":"docker-proxy","msg":"Caddyfile to json warning","warn":"[Caddyfile:1: Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies]"}
{"level":"info","ts":1710989300.1489592,"logger":"docker-proxy","msg":"New Config JSON","json":"{}"}
{"level":"info","ts":1710989300.1490853,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
{"level":"info","ts":1710989300.151815,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"57688","headers":{"Accept-Encoding":["gzip"],"Content-Length":["41"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
{"level":"info","ts":1710989300.152052,"msg":"config is unchanged"}
{"level":"info","ts":1710989300.1520922,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1710989300.1525784,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"warn","ts":1710989308.779666,"logger":"docker-proxy","msg":"Container is not in same network as caddy","container":"97ba4be341e13b5bec04837bbd5a8722167b038131ec3ed4a8c08e5a965e6640","container id":"97ba4be341e13b5bec04837bbd5a8722167b038131ec3ed4a8c08e5a965e6640"}
{"level":"info","ts":1710989308.7865355,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR]  Removing invalid block: Caddyfile:6: unrecognized directive: email\n*.mine.my.id {\n\t@gitea host git.mine.my.id\n\thandle @gitea {\n\t\treverse_proxy\n\t}\n\temail myemail@gmail.com\n\ttls {\n\t\tdns cloudflare {PnBBlbpSjqUPuWTjM6A6KaCt}\n\t}\n}\n\n"}
{"level":"info","ts":1710989309.7061207,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR]  Removing invalid block: Caddyfile:6: unrecognized directive: email\n*.mine.my.id {\n\t@gitea host git.mine.my.id\n\thandle @gitea {\n\t\treverse_proxy 172.20.0.3:3000\n\t}\n\temail myemail@gmail.com\n\ttls {\n\t\tdns cloudflare {PnBBlbp6ujqUPuWTjM6A6KaCt}\n\t}\n}\n\n"}
{"level":"info","ts":1710989339.6972227,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR]  Removing invalid block: Caddyfile:6: unrecognized directive: email\n*.mine.my.id {\n\t@gitea host git.mine.my.id\n\thandle @gitea {\n\t\treverse_proxy 172.20.0.3:3000\n\t}\n\temail myemail@gmail.com\n\ttls {\n\t\tdns cloudflare {PnBBlbSjqUPuWTjM6A6KaCt}\n\t}\n}\n\n"}
{"level":"info","ts":1710989369.736351,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR]  Removing invalid block: Caddyfile:6: unrecognized directive: email\n*.mine.my.id {\n\t@gitea host git.mine.my.id\n\thandle @gitea {\n\t\treverse_proxy 172.20.0.3:3000\n\t}\n\temail myemail@gmail.com\n\ttls {\n\t\tdns cloudflare {PnBBlbjqUPuWTjM6A6KaCt}\n\t}\n}\n\n"}
{"level":"info","ts":1710989399.715038,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR]  Removing invalid block: Caddyfile:6: unrecognized directive: email\n*.mine.my.id {\n\t@gitea host git.mine.my.id\n\thandle @gitea {\n\t\treverse_proxy 172.20.0.3:3000\n\t}\n\temail myemail@gmail.com\n\ttls {\n\t\tdns cloudflare {PnBBlbp8ESjqUPuWTjM6A6KaCt}\n\t}\n}\n\n"}
{"level":"info","ts":1710989429.7188601,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR]  Removing invalid block: Caddyfile:6: unrecognized directive: email\n*.mine.my.id {\n\t@gitea host git.mine.my.id\n\thandle @gitea {\n\t\treverse_proxy 172.20.0.3:3000\n\t}\n\temail myemail@gmail.com\n\ttls {\n\t\tdns cloudflare {PnBBlbpSjqUPuWTjM6A6KaCt}\n\t}\n}\n\n"}

Any idea what is wrong?
Many thanks for your help

That’s weird.

The email is optional, so you could just remove that label for now. I’m not sure why it’s not being made into global options though.
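For reference, `email` is a global option in Caddy, not a site directive, which is why the block is rejected. A minimal sketch of the equivalent plain Caddyfile (assuming the token is exported as `CF_API_TOKEN`) would be:

```
# email belongs in the global options block, not inside a site
{
	email myemail@gmail.com
}

*.mine.my.id {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
}
```

With caddy-docker-proxy labels, the same fix is to drop the `caddy.email` label entirely, since it ends up inside the generated site block.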

I have not opened ports 80/443 or changed anything on my router.
Using the dns.providers.cloudflare method, I don’t need to do that, right?

What I did was set up an A record in Cloudflare, generate an API token, and build the Caddy container with the Cloudflare DNS provider. That should work without any setup outside of it, right?

Correct, but your A record needs to point to an IP that’s routable, so if you used your WAN IP it won’t work because your router is blocking connections. You could use your LAN IP if you don’t plan to have any connections from outside.
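As an illustration (hypothetical TTLs and documentation IPs, not your actual addresses), the two options look like:

```
; LAN-only: wildcard resolves to the Caddy host's private address
*.mine.my.id.  300  IN  A  192.168.1.50

; Public: wildcard resolves to the WAN IP, but then 80/443
; must also be forwarded to the Caddy host on the router
*.mine.my.id.  300  IN  A  203.0.113.7
```

Either way the DNS challenge itself still works, because Let’s Encrypt validates via the Cloudflare TXT record, not by connecting to the IP.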

Ok, somehow I got it working, but I don’t know exactly what the problem was.
What I did:

  1. Changed the A record’s IP address to point at my local Caddy machine

  2. Changed caddy.tls.dns: “cloudflare {$CF_API_TOKEN}” to caddy.tls.dns: “cloudflare $CF_API_TOKEN” (without the braces) and removed the email, as follows:

services:
  docker-proxy:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: caddy
    restart: unless-stopped
    env_file: .env
    ports:
      - 80:80
      - 443:443
    networks:
      - caddy-network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data/caddy
      - ./config:/config/caddy
    labels:
      caddy: "*.mine.my.id"
      caddy.tls.dns: "cloudflare $CF_API_TOKEN"


volumes:
  data: {}

networks:
  caddy-network:
    external: true

If using the label method (not a Caddyfile), do you know how to set up a reverse proxy for services that can’t be labelled (because they run on another machine or host, for example Proxmox)?

Can I combine them? A Caddyfile for other machines or hosts, and labels for the containers?
In that case, should I add a Caddyfile to the Caddy container’s volumes?

That’s weird. Maybe Docker Compose is replacing the env var before it reaches Caddy. You could also use {env.CF_API_TOKEN} syntax. See Caddyfile Concepts — Caddy Documentation
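As a sketch (same label names as the compose file above), the `{env.CF_API_TOKEN}` form sidesteps the problem because it contains no `$` for Compose to interpolate:

```yaml
labels:
  caddy: "*.mine.my.id"
  # {env.CF_API_TOKEN} is resolved by Caddy at runtime, not by Docker Compose,
  # so the token only needs to exist in the container environment (via env_file)
  caddy.tls.dns: "cloudflare {env.CF_API_TOKEN}"
```

By contrast, `{$CF_API_TOKEN}` contains `$CF_API_TOKEN`, which Compose substitutes before Caddy ever sees the label.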

Add labels to your Caddy container, or use a base Caddyfile. You can tell CDP to use a Caddyfile and append/merge config into it with the CADDY_DOCKER_CADDYFILE_PATH env var.
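As a sketch, a base Caddyfile for an off-host upstream might look like this (the hostname, LAN IP, and port 8006 are assumptions; Proxmox serves its web UI over HTTPS with a self-signed certificate, hence the skipped verification):

```
proxmox.mine.my.id {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy https://192.168.1.10:8006 {
		transport http {
			# Proxmox's default certificate is self-signed
			tls_insecure_skip_verify
		}
	}
}
```

Mount this file into the container and point `CADDY_DOCKER_CADDYFILE_PATH` at it (e.g. `/etc/caddy/Caddyfile`); CDP then merges the label-generated sites into it.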

I have switched back to the original approach, with a plain Caddyfile and without caddy-docker-proxy.
Many thanks for your support !

