mTLS between two Caddy reverse proxies

1. The problem I’m having:

I’m running a homelab consisting of several hosts that run services as Docker containers and which can - so far - only be reached from outside via a VPN. I decided to move from Nginx Proxy Manager to Caddy because I wanted a more capable reverse proxy setup, in particular to minimise plain HTTP inside my homelab.

I think the best topology to achieve my goal is this:

  1. a frontend Caddy instance in its own VM, which manages LE certificates for my domain dragofer.com and is targeted by my split DNS and my client devices
  2. one backend Caddy instance on each host, which can give access to services that are exposed on localhost only, and
  3. mutual TLS between the frontend and backend Caddy instances to pass on client requests
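The wiring I have in mind, sketched as a pair of Caddyfile fragments (the certificate paths here are purely illustrative placeholders, not my actual configs):

```caddyfile
# Frontend (sketch): terminate the client's TLS, then re-encrypt to the
# backend, presenting a client certificate for mTLS.
uptimekuma.dragofer.com {
	reverse_proxy https://192.168.22.106 {
		transport http {
			tls
			# client cert + key issued by the internal CA (paths are placeholders)
			tls_client_auth /path/to/frontend.crt /path/to/frontend.key
		}
	}
}

# Backend (sketch): serve the internal certificate and require the
# frontend's client certificate, verified against the internal CA root.
uptimekuma.dragofer.com {
	tls {
		client_auth {
			mode require_and_verify
			trust_pool file /path/to/internal-root.crt
		}
	}
	reverse_proxy localhost:3001
}
```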

I couldn’t find an official example of mTLS between two reverse proxies, so here is what I’ve managed to piece together so far from the documentation and previous forum posts:

  • obtain LE certificates via the custom Cloudflare DNS module in Caddy
  • get my topology working with plain HTTP between the reverse proxies
  • set up an ACME server on the frontend with a CA that is trusted by both the frontend and backend Caddy instances
  • set the frontend as a trusted proxy of the backends
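The ACME-server part of that list, roughly as I understand it should look (directive layout is a sketch, though the `central` CA id and directory URL match what appears in my logs below):

```caddyfile
# Frontend (sketch): serve an internal ACME endpoint backed by the
# "central" CA, so backends can request certificates from it.
dragofer.com {
	acme_server {
		ca central
	}
}

# Backend (sketch): use that internal endpoint as the issuer instead
# of a public CA.
uptimekuma.dragofer.com {
	tls {
		issuer acme {
			dir https://dragofer.com/acme/central/directory
		}
	}
	reverse_proxy localhost:3001
}
```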

However, after lengthy experimentation with my configs I’m still running into a mixed bag of errors whenever I try to access a service, shown below for uptimekuma.dragofer.com. Thanks in advance for any advice or feedback!

2. Error messages and/or full log output:

  • With the current config, I get a “502 Bad Gateway” error on the frontend and “no certificate found for 192.168.22.106” on the backend proxy:
    Frontend:
Feb 09 19:37:20 debian13reverseproxy systemd[1]: Starting caddy.service - Caddy...
░░ Subject: A start job for unit caddy.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit caddy.service has begun execution.
░░
░░ The job identifier is 3465.
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.1302211,"msg":"maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.1303673,"msg":"GOMEMLIMIT is updated","package":"github.com/KimMachineGun/automemlimit/memlimit","GOMEMLIMIT":1861993267,"previous":9223372036854775807}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: caddy.HomeDir=/var/lib/caddy
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: caddy.AppDataDir=/var/lib/caddy/.local/share/caddy
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: caddy.AppConfigDir=/var/lib/caddy/.config/caddy
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: caddy.ConfigAutosavePath=/var/lib/caddy/.config/caddy/autosave.json
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: caddy.Version=v2.10.2 h1:g/gTYjGMD0dec+UgMw8SnfmJ3I9+M2TdvoRL/Ovu6U8=
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: runtime.GOOS=linux
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: runtime.GOARCH=amd64
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: runtime.Compiler=gc
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: runtime.NumCPU=4
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: runtime.GOMAXPROCS=4
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: runtime.Version=go1.25.0
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: os.Getwd=/
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: LANG=en_GB.UTF-8
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: LANGUAGE=en_GB:en
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: NOTIFY_SOCKET=/run/systemd/notify
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: USER=caddy
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: LOGNAME=caddy
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: HOME=/var/lib/caddy
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: INVOCATION_ID=9a0f4db6d4394ad9b8a412322c638c49
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: JOURNAL_STREAM=9:17295
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: SYSTEMD_EXEC_PID=4059
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: MEMORY_PRESSURE_WATCH=/sys/fs/cgroup/system.slice/caddy.service/memory.pressure
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: MEMORY_PRESSURE_WRITE=c29tZSAyMDAwMDAgMjAwMDAwMAA=
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: CF_API_TOKEN=...
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.130463,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.13164,"msg":"adapted config to JSON","adapter":"caddyfile"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"warn","ts":1770662240.1316504,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.1326525,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.1330097,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.1330369,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662240.1330612,"logger":"http.auto_https","msg":"adjusted config","tls":{"automation":{"policies":[{"subjects":["homeassistant.dragofer.com","filebrowser.dragofer.com","zigbee2mqtt.dragofer.com","uptimekuma.dragofer.com","ntfy-elite.dragofer.com","syncthing.dragofer.com","mqtt.dragofer.com","plex.dragofer.com","omv.dragofer.com"]},{"subjects":["*.dragofer.com"]},{}]}},"http":{"servers":{"remaining_auto_https_redirects":{"listen":[":80"],"routes":[{},{}]},"srv0":{"listen":[":443"],"routes":[{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.28.101:8123"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.22.110:8035"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.28.101:8485"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","transport":{"protocol":"http","tls":{}},"upstreams":[{"dial":"192.168.22.106:443"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.22.106:8500"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.22.110:8384"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.28.101:8883"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.22.110:32400"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.22.110:443"}]}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"ca":"central","handler":"acme_server"}]}]}],"terminal":true}],"tls_connection_policies":[{}],"automatic_https":{}}}}}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.133016,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0000b9b80"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.2732933,"logger":"pki.ca.central","msg":"root certificate is already trusted by system","path":"storage:pki/authorities/central/root.crt"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662240.273383,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"warn","ts":1770662240.2734005,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"warn","ts":1770662240.273404,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.2734065,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662240.2734275,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":false}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.2734308,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.2735574,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.2735674,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["filebrowser.dragofer.com","ntfy-elite.dragofer.com","syncthing.dragofer.com","*.dragofer.com","mqtt.dragofer.com","omv.dragofer.com","homeassistant.dragofer.com","zigbee2mqtt.dragofer.com","uptimekuma.dragofer.com","plex.dragofer.com"]}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662240.27383,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [*.dragofer.com]: no OCSP server specified in certificate","identifiers":["*.dragofer.com"]}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662240.273875,"logger":"tls.cache","msg":"added certificate to cache","subjects":["*.dragofer.com"],"expiration":1778349088,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"3337b1da36a6c5f55dfd13834efbf5747498f33206a2a1f0ce16b80ce5ab3c6d","cache_size":1,"cache_capacity":10000}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662240.2738996,"logger":"events","msg":"event","name":"cached_managed_cert","id":"0fb1d8e4-0d1f-4245-abe6-a78f102ba33c","origin":"tls","data":{"sans":["*.dragofer.com"]}}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662240.2739265,"logger":"events","msg":"event","name":"started","id":"3e77ea7a-5c88-4054-9fb6-7096d6486c76","origin":"","data":null}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.2739966,"msg":"autosaved config (load with --resume flag)","file":"/var/lib/caddy/.config/caddy/autosave.json"}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.2740471,"msg":"serving initial configuration"}
Feb 09 19:37:20 debian13reverseproxy systemd[1]: Started caddy.service - Caddy.
░░ Subject: A start job for unit caddy.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit caddy.service has finished successfully.
░░
░░ The job identifier is 3465.
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.278914,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/var/lib/caddy/.local/share/caddy","instance":"bd296d5b-7c5a-4284-86d3-764e0f1de672","try_again":1770748640.2789133,"try_again_in":86399.999999674}
Feb 09 19:37:20 debian13reverseproxy caddy[4059]: {"level":"info","ts":1770662240.2789626,"logger":"tls","msg":"finished cleaning storage units"}
Feb 09 19:37:33 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662253.3610554,"logger":"events","msg":"event","name":"tls_get_certificate","id":"1da2358d-249e-4664-84a2-86ed3152eadf","origin":"tls","data":{"client_hello":{"CipherSuites":[4865,4867,4866,49195,49199,52393,52392,49196,49200,49162,49161,49171,49172,156,157,47,53],"ServerName":"uptimekuma.dragofer.com","SupportedCurves":[4588,29,23,24,25,256,257],"SupportedPoints":"AA==","SignatureSchemes":[1027,1283,1539,2052,2053,2054,1025,1281,1537,515,513],"SupportedProtos":["h2","http/1.1"],"SupportedVersions":[772,771],"RemoteAddr":{"IP":"192.168.23.101","Port":64996,"Zone":""},"LocalAddr":{"IP":"192.168.22.100","Port":443,"Zone":""}}}}
Feb 09 19:37:33 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662253.3612237,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"uptimekuma.dragofer.com"}
Feb 09 19:37:33 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662253.3612354,"logger":"tls.handshake","msg":"choosing certificate","identifier":"*.dragofer.com","num_choices":1}
Feb 09 19:37:33 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662253.3612401,"logger":"tls.handshake","msg":"default certificate selection results","identifier":"*.dragofer.com","subjects":["*.dragofer.com"],"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"3337b1da36a6c5f55dfd13834efbf5747498f33206a2a1f0ce16b80ce5ab3c6d"}
Feb 09 19:37:33 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662253.3612459,"logger":"tls.handshake","msg":"matched certificate in cache","remote_ip":"192.168.23.101","remote_port":"64996","subjects":["*.dragofer.com"],"managed":true,"expiration":1778349088,"hash":"3337b1da36a6c5f55dfd13834efbf5747498f33206a2a1f0ce16b80ce5ab3c6d"}
Feb 09 19:37:33 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662253.3665516,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"192.168.22.106:443","total_upstreams":1}
Feb 09 19:37:33 debian13reverseproxy caddy[4059]: {"level":"debug","ts":1770662253.3677468,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"192.168.22.106:443","duration":0.001156172,"request":{"remote_ip":"192.168.23.101","remote_port":"64996","client_ip":"192.168.23.101","proto":"HTTP/2.0","method":"GET","host":"uptimekuma.dragofer.com","uri":"/","headers":{"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Sec-Fetch-User":["?1"],"Priority":["u=0, i"],"Sec-Fetch-Site":["none"],"Accept-Language":["en-US,en;q=0.9"],"X-Forwarded-Host":["uptimekuma.dragofer.com"],"X-Forwarded-For":["192.168.23.101"],"X-Forwarded-Proto":["https"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Upgrade-Insecure-Requests":["1"],"Te":["trailers"],"Sec-Fetch-Mode":["navigate"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:147.0) Gecko/20100101 Firefox/147.0"],"Via":["2.0 Caddy"],"Sec-Fetch-Dest":["document"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"uptimekuma.dragofer.com"}},"error":"remote error: tls: internal error"}
Feb 09 19:37:33 debian13reverseproxy caddy[4059]: {"level":"error","ts":1770662253.3678076,"logger":"http.log.error","msg":"remote error: tls: internal error","request":{"remote_ip":"192.168.23.101","remote_port":"64996","client_ip":"192.168.23.101","proto":"HTTP/2.0","method":"GET","host":"uptimekuma.dragofer.com","uri":"/","headers":{"Sec-Fetch-User":["?1"],"Te":["trailers"],"Accept-Language":["en-US,en;q=0.9"],"Sec-Fetch-Mode":["navigate"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:147.0) Gecko/20100101 Firefox/147.0"],"Sec-Fetch-Site":["none"],"Priority":["u=0, i"],"Sec-Fetch-Dest":["document"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Upgrade-Insecure-Requests":["1"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"uptimekuma.dragofer.com"}},"duration":0.001286413,"status":502,"err_id":"es6ziv6g6","err_trace":"reverseproxy.statusError (reverseproxy.go:1390)"}

Backend:

Feb 09 19:37:23 debian13docker systemd[1]: Starting caddy.service - Caddy...
░░ Subject: A start job for unit caddy.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit caddy.service has begun execution.
░░
░░ The job identifier is 3669.
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1085618,"msg":"maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1086798,"msg":"GOMEMLIMIT is updated","package":"github.com/KimMachineGun/automemlimit/memlimit","GOMEMLIMIT":1861989580,"previous":9223372036854775807}
Feb 09 19:37:23 debian13docker caddy[55335]: caddy.HomeDir=/var/lib/caddy
Feb 09 19:37:23 debian13docker caddy[55335]: caddy.AppDataDir=/var/lib/caddy/.local/share/caddy
Feb 09 19:37:23 debian13docker caddy[55335]: caddy.AppConfigDir=/var/lib/caddy/.config/caddy
Feb 09 19:37:23 debian13docker caddy[55335]: caddy.ConfigAutosavePath=/var/lib/caddy/.config/caddy/autosave.json
Feb 09 19:37:23 debian13docker caddy[55335]: caddy.Version=v2.10.2 h1:g/gTYjGMD0dec+UgMw8SnfmJ3I9+M2TdvoRL/Ovu6U8=
Feb 09 19:37:23 debian13docker caddy[55335]: runtime.GOOS=linux
Feb 09 19:37:23 debian13docker caddy[55335]: runtime.GOARCH=amd64
Feb 09 19:37:23 debian13docker caddy[55335]: runtime.Compiler=gc
Feb 09 19:37:23 debian13docker caddy[55335]: runtime.NumCPU=4
Feb 09 19:37:23 debian13docker caddy[55335]: runtime.GOMAXPROCS=4
Feb 09 19:37:23 debian13docker caddy[55335]: runtime.Version=go1.25.0
Feb 09 19:37:23 debian13docker caddy[55335]: os.Getwd=/
Feb 09 19:37:23 debian13docker caddy[55335]: LANG=en_GB.UTF-8
Feb 09 19:37:23 debian13docker caddy[55335]: LANGUAGE=en_GB:en
Feb 09 19:37:23 debian13docker caddy[55335]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Feb 09 19:37:23 debian13docker caddy[55335]: NOTIFY_SOCKET=/run/systemd/notify
Feb 09 19:37:23 debian13docker caddy[55335]: USER=caddy
Feb 09 19:37:23 debian13docker caddy[55335]: LOGNAME=caddy
Feb 09 19:37:23 debian13docker caddy[55335]: HOME=/var/lib/caddy
Feb 09 19:37:23 debian13docker caddy[55335]: INVOCATION_ID=ea1747fa12524dff8b3eb51cbc7db339
Feb 09 19:37:23 debian13docker caddy[55335]: JOURNAL_STREAM=9:307887
Feb 09 19:37:23 debian13docker caddy[55335]: SYSTEMD_EXEC_PID=55335
Feb 09 19:37:23 debian13docker caddy[55335]: MEMORY_PRESSURE_WATCH=/sys/fs/cgroup/system.slice/caddy.service/memory.pressure
Feb 09 19:37:23 debian13docker caddy[55335]: MEMORY_PRESSURE_WRITE=c29tZSAyMDAwMDAgMjAwMDAwMAA=
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.10876,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"warn","ts":1770662243.108993,"msg":"The 'trusted_ca_cert_file' field is deprecated. Use the 'trust_pool' field instead."}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1093364,"msg":"adapted config to JSON","adapter":"caddyfile"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"warn","ts":1770662243.109344,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.110298,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1104364,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"debug","ts":1770662243.1104615,"logger":"http.auto_https","msg":"adjusted config","tls":{"automation":{"policies":[{"subjects":["uptimekuma.dragofer.com"]},{}]}},"http":{"servers":{"remaining_auto_https_redirects":{"listen":[":80"],"routes":[{},{}]},"srv0":{"listen":[":443"],"routes":[{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"192.168.22.106:3001"}]}]}]}],"terminal":true}],"tls_connection_policies":[{"match":{"sni":["uptimekuma.dragofer.com"]},"client_authentication":{"ca":{"provider":"inline","trusted_ca_certs":["MIIBpTCCAUqgAwIBAgIRAO+FEqj1+mViKpjSpZtB06UwCgYIKoZIzj0EAwIwMDEuMCwGA1UEAxMlQ2FkZHkgTG9jYWwgQXV0aG9yaXR5IC0gMjAyNiBFQ0MgUm9vdDAeFw0yNjAyMDgyMjE2NDNaFw0zNTEyMTgyMjE2NDNaMDAxLjAsBgNVBAMTJUNhZGR5IExvY2FsIEF1dGhvcml0eSAtIDIwMjYgRUNDIFJvb3QwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATVxgcSpIys+7fOz68BZ4FBH0gYg7uFYGlX30jpxK3k1D5BVrFJI/JFlOQ1itubT6pALArcoeCT4RM9yLJTziwto0UwQzAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBATAdBgNVHQ4EFgQUmnmbfIXGfdJEWXWwYayNnX6Pv2kwCgYIKoZIzj0EAwIDSQAwRgIhAO4myujsSJ/IJ1k9WrL8BJkE7klIeblSpDMfF19zmB3hAiEApVZCJ4TvV18N65SITdV180xzmHGo1SAaRFXzajoYl6M="]}}},{}],"automatic_https":{},"trusted_proxies":{"ranges":["192.168.22.100"],"source":"static"}}}}}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"warn","ts":1770662243.1104913,"logger":"http","msg":"enabling strict SNI-Host enforcement because TLS client auth is configured","server_id":"srv0"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1106262,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0003ea300"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"debug","ts":1770662243.1106527,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":false}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.110666,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1107514,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"debug","ts":1770662243.1107757,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"warn","ts":1770662243.1107848,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"warn","ts":1770662243.1107876,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1107903,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1107936,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["uptimekuma.dragofer.com"]}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"warn","ts":1770662243.1109915,"logger":"tls","msg":"stapling OCSP","identifiers":["uptimekuma.dragofer.com"]}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"debug","ts":1770662243.1110656,"logger":"tls.cache","msg":"added certificate to cache","subjects":["uptimekuma.dragofer.com"],"expiration":1770697616,"managed":true,"issuer_key":"acme.dragofer.com-acme-central-directory","hash":"130768225451b7bc32b264fc2beefa9b5ab8cb5f54910c64ecd4b035a24b1f51","cache_size":1,"cache_capacity":10000}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"debug","ts":1770662243.1110826,"logger":"events","msg":"event","name":"cached_managed_cert","id":"2c468dc8-052d-4dbb-9b95-0eb5106f9aab","origin":"tls","data":{"sans":["uptimekuma.dragofer.com"]}}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"debug","ts":1770662243.1111047,"logger":"events","msg":"event","name":"started","id":"5cc996e7-69ae-429c-b144-1b653bcb5ec1","origin":"","data":null}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1114013,"msg":"autosaved config (load with --resume flag)","file":"/var/lib/caddy/.config/caddy/autosave.json"}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.111433,"msg":"serving initial configuration"}
Feb 09 19:37:23 debian13docker systemd[1]: Started caddy.service - Caddy.
░░ Subject: A start job for unit caddy.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit caddy.service has finished successfully.
░░
░░ The job identifier is 3669.
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1709433,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/var/lib/caddy/.local/share/caddy","instance":"70b4c091-ca6d-4e41-9bca-510ddc299280","try_again":1770748643.170942,"try_again_in":86399.999999727}
Feb 09 19:37:23 debian13docker caddy[55335]: {"level":"info","ts":1770662243.1709993,"logger":"tls","msg":"finished cleaning storage units"}
Feb 09 19:37:33 debian13docker caddy[55335]: {"level":"debug","ts":1770662253.36692,"logger":"events","msg":"event","name":"tls_get_certificate","id":"3b1248d9-1e10-423d-95ce-9c1ba7782300","origin":"tls","data":{"client_hello":{"CipherSuites":[52393,52392,49195,49199,49196,49200,49161,49171,49162,49172,4867,4865,4866],"ServerName":"","SupportedCurves":[4588,29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2052,1027,2055,2053,2054,1025,1281,1537,1283,1539],"SupportedProtos":["h2","http/1.1"],"SupportedVersions":[772,771],"RemoteAddr":{"IP":"192.168.22.100","Port":38276,"Zone":""},"LocalAddr":{"IP":"192.168.22.106","Port":443,"Zone":""}}}}
Feb 09 19:37:33 debian13docker caddy[55335]: {"level":"debug","ts":1770662253.367003,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"192.168.22.106"}
Feb 09 19:37:33 debian13docker caddy[55335]: {"level":"debug","ts":1770662253.3670115,"logger":"tls.handshake","msg":"no certificate matching TLS ClientHello","remote_ip":"192.168.22.100","remote_port":"38276","server_name":"","remote":"192.168.22.100:38276","identifier":"192.168.22.106","cipher_suites":[52393,52392,49195,49199,49196,49200,49161,49171,49162,49172,4867,4865,4866],"cert_cache_fill":0.0001,"load_or_obtain_if_necessary":true,"on_demand":false}
Feb 09 19:37:33 debian13docker caddy[55335]: {"level":"debug","ts":1770662253.3670642,"logger":"http.stdlib","msg":"http: TLS handshake error from 192.168.22.100:38276: no certificate available for '192.168.22.106'"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"info","ts":1770662520.006396,"logger":"tls.obtain","msg":"lock acquired","identifier":"uptimekuma.dragofer.com"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"info","ts":1770662520.006442,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"uptimekuma.dragofer.com"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"debug","ts":1770662520.0064561,"logger":"events","msg":"event","name":"cert_obtaining","id":"f86dc6f1-34c5-44f8-ace5-e02655de5261","origin":"tls","data":{"identifier":"uptimekuma.dragofer.com"}}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"debug","ts":1770662520.0065272,"logger":"tls","msg":"created CSR","identifiers":["uptimekuma.dragofer.com"],"san_dns_names":["uptimekuma.dragofer.com"],"san_emails":[],"common_name":"","extra_extensions":0}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"debug","ts":1770662520.006721,"logger":"tls.obtain","msg":"trying issuer 1/1","issuer":"dragofer.com-acme-central-directory"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"info","ts":1770662520.0067592,"logger":"tls.issuance.acme","msg":"creating new account because no account for configured email is known to us","email":"","ca":"https://dragofer.com/acme/central/directory","error":"open /var/lib/caddy/.local/share/caddy/acme/dragofer.com-acme-central-directory/users/default/default.json: no such file or directory"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"info","ts":1770662520.006778,"logger":"tls.issuance.acme","msg":"ACME account has empty status; registering account with ACME server","contact":[],"location":""}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"info","ts":1770662520.0114012,"logger":"tls.issuance.acme","msg":"creating new account because no account for configured email is known to us","email":"","ca":"https://dragofer.com/acme/central/directory","error":"open /var/lib/caddy/.local/share/caddy/acme/dragofer.com-acme-central-directory/users/default/default.json: no such file or directory"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"warn","ts":1770662520.015267,"msg":"HTTP request failed; retrying","url":"https://dragofer.com/acme/central/directory","error":"performing request: Get \"https://dragofer.com/acme/central/directory\": remote error: tls: internal error"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"warn","ts":1770662520.2704945,"msg":"HTTP request failed; retrying","url":"https://dragofer.com/acme/central/directory","error":"performing request: Get \"https://dragofer.com/acme/central/directory\": remote error: tls: internal error"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"warn","ts":1770662520.5264685,"msg":"HTTP request failed; retrying","url":"https://dragofer.com/acme/central/directory","error":"performing request: Get \"https://dragofer.com/acme/central/directory\": remote error: tls: internal error"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"error","ts":1770662520.526633,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"uptimekuma.dragofer.com","issuer":"dragofer.com-acme-central-directory","error":"registering account [] with server: provisioning client: performing request: Get \"https://dragofer.com/acme/central/directory\": remote error: tls: internal error"}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"debug","ts":1770662520.5266638,"logger":"events","msg":"event","name":"cert_failed","id":"dafa356e-200a-49bb-a144-61c213c37275","origin":"tls","data":{"error":{},"identifier":"uptimekuma.dragofer.com","issuers":["dragofer.com-acme-central-directory"],"renewal":false}}
Feb 09 19:42:00 debian13docker caddy[55601]: {"level":"error","ts":1770662520.5267348,"logger":"tls.obtain","msg":"will retry","error":"[uptimekuma.dragofer.com] Obtain: registering account [] with server: provisioning client: performing request: Get \"https://dragofer.com/acme/central/directory\": remote error: tls: internal error","attempt":1,"retrying_in":60,"elapsed":0.520329205,"max_duration":2592000}

3. Caddy version:

Both frontend and backend:
v2.10.2 h1:g/gTYjGMD0dec+UgMw8SnfmJ3I9+M2TdvoRL/Ovu6U8=

4. How I installed and ran Caddy:

Frontend:

# In Proxmox, create a new VM debian13reverseproxy
# In OpenWrt, assign a static IP 192.168.22.100 to its MAC, then launch the VM
# Install nftables and allow TCP ports 80 and 443 in the inbound chain
sudo nano /etc/nftables.conf
                tcp dport { 80,443 } accept
sudo nft flush ruleset
sudo nft -f /etc/nftables.conf
# Install Caddy and xcaddy from the Cloudsmith repos
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/xcaddy/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-xcaddy-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/xcaddy/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-xcaddy.list
sudo chmod o+r /usr/share/keyrings/caddy-stable-archive-keyring.gpg
sudo chmod o+r /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy xcaddy 
# Rebuild the binary
sudo xcaddy build \
    --with github.com/caddy-dns/cloudflare
# Divert Debian package
sudo dpkg-divert --divert /usr/bin/caddy.default --rename /usr/bin/caddy
sudo mv ./caddy /usr/bin/caddy.custom
sudo update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.default 10
sudo update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.custom 50
sudo systemctl restart caddy
# Set up the Cloudflare API token as a systemd environment variable. Enter the API
# token into a secrets file (the token has edit permission for the DNS zone, and
# the zone resource includes dragofer.com)
sudo nano /etc/caddy/.env
      CF_API_TOKEN=...
sudo chown caddy:caddy /etc/caddy/.env
# Point systemd to the keyfile
sudo systemctl edit caddy
> paste in:
[Service]
EnvironmentFile=/etc/caddy/.env

sudo systemctl restart caddy
# Configure caddyfile and restart
sudo nano /etc/caddy/Caddyfile
sudo systemctl restart caddy
# Add the internal CA's root certificate (root.crt) to the system trust store
sudo cp /var/lib/caddy/.local/share/caddy/pki/authorities/central/root.crt /usr/local/share/ca-certificates/central-ca.crt
sudo update-ca-certificates
sudo systemctl restart caddy

Backend:

# On an existing Debian13 VM, debian13docker, with static IP 192.168.22.106
# Add firewall rules to nftables for ports 80 and 443
sudo nano /etc/nftables.conf
        tcp dport { 80,443 } accept
sudo nft flush ruleset
sudo nft -f /etc/nftables.conf
# Install Caddy from Cloudsmith repo
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo chmod o+r /usr/share/keyrings/caddy-stable-archive-keyring.gpg
sudo chmod o+r /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
# Install the root certificate from the frontend proxy. Path on the frontend: /var/lib/caddy/.local/share/caddy/pki/authorities/central/root.crt
sudo mkdir -p /keys/caddy
sudo nano /keys/caddy/central.crt
> paste in contents of .crt
sudo chown -R caddy:caddy /keys/caddy
sudo chmod -R 700 /keys/caddy
# Configure the Caddyfile
sudo nano /etc/caddy/Caddyfile
sudo systemctl restart caddy

a. System environment:

OS: Debian 13
Architecture: x86
Firewall: nftables
Process manager: systemd
DNS provider: Cloudflare

b. Command:

On both frontend and backend:

sudo systemctl restart caddy
# Open uptimekuma.dragofer.com in Firefox
sudo journalctl -xeu caddy.service --no-pager

c. Service/unit/compose file:

Frontend: /etc/systemd/system/caddy.service.d/override.conf:

[Service]
EnvironmentFile=/etc/caddy/.env

Frontend: /etc/caddy/.env

CF_API_TOKEN=...

Frontend: /usr/lib/systemd/system/caddy.service

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target

[Service]
Type=notify
User=caddy
Group=caddy
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force
TimeoutStopSec=5s
LimitNOFILE=1048576
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

Backend: /usr/lib/systemd/system/caddy.service

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target

[Service]
Type=notify
User=caddy
Group=caddy
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force
TimeoutStopSec=5s
LimitNOFILE=1048576
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

d. My complete Caddy config:

Frontend (I’m only trying to configure uptimekuma.dragofer.com at the moment):

{
    debug
}

*.dragofer.com {
    tls {
        dns cloudflare {$CF_API_TOKEN}
        resolvers 1.1.1.1
    }

    acme_server {
        ca central
    }
}

uptimekuma.dragofer.com {
    reverse_proxy https://192.168.22.106
}

syncthing.dragofer.com {
    reverse_proxy 192.168.22.110:8384
}

filebrowser.dragofer.com {
    reverse_proxy 192.168.22.110:8035
}

homeassistant.dragofer.com {
    reverse_proxy 192.168.28.101:8123
}

mqtt.dragofer.com {
    reverse_proxy 192.168.28.101:8883
}

ntfy-elite.dragofer.com {
    reverse_proxy 192.168.22.106:8500
}

omv.dragofer.com {
    reverse_proxy 192.168.22.110:443
}

plex.dragofer.com {
    reverse_proxy 192.168.22.110:32400
}

zigbee2mqtt.dragofer.com {
    reverse_proxy 192.168.28.101:8485
}

Backend (uptimekuma is still reachable on the LAN interface, I will move it to localhost-only once mTLS works):

{
    debug
    servers {
        trusted_proxies static 192.168.22.100
    }
}

uptimekuma.dragofer.com {
    reverse_proxy 192.168.22.106:3001

    tls {
        ca https://acme.dragofer.com/acme/central/directory
        client_auth {
            trusted_ca_cert_file /keys/caddy/central.crt
        }
    }
}

5. Links to relevant resources:

On the frontend side, try changing this:

uptimekuma.dragofer.com {
    reverse_proxy https://192.168.22.106
}

to this:

uptimekuma.dragofer.com {
    reverse_proxy https://192.168.22.106 {
        transport http {
            tls_server_name uptimekuma.dragofer.com
        }
    }
}

and see if that helps.

Right now, the frontend is trying to talk to the backend over HTTPS, but it’s doing the TLS handshake using 192.168.22.106 as the server name, while the backend is expecting uptimekuma.dragofer.com.

Update: One thing that might be needed is a tls_trust_pool, since your backend is using a custom CA. This would let the frontend trust the backend’s certificate. tls_insecure_skip_verify would also work, but it is probably best to avoid that unless you are just testing.

That said, I do not have much hands-on experience with using Caddy’s acme_server in an mTLS setup, so consider my comment about tls_trust_pool more of a guess.
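
Putting the two suggestions together, the frontend block might look something like this (a sketch — the tls_trust_pool line and the central.crt path are assumptions based on the backend config, and would require copying the internal CA's root cert to the frontend as well):

```caddyfile
uptimekuma.dragofer.com {
    reverse_proxy https://192.168.22.106 {
        transport http {
            # handshake with the name the backend has a cert for
            tls_server_name uptimekuma.dragofer.com
            # assumed path: the internal CA's root cert, copied to the frontend
            tls_trust_pool file /keys/caddy/central.crt
        }
    }
}
```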

Thank you, setting a tls server name changed the warning from “no certificate for 192.168.22.106” to “bad certificate for uptimekuma.dragofer.com” on the backend and “no information found to solve challenge for identifier: uptimekuma.dragofer.com” on the frontend.

Backend logs when I try to connect to a service:

Feb 10 22:44:01 debian13docker caddy[112599]: {"level":"debug","ts":1770759841.8569586,"logger":"tls.handshake","msg":"choosing certificate","identifier":"uptimekuma.dragofer.com","num_choices":1}
Feb 10 22:44:01 debian13docker caddy[112599]: {"level":"debug","ts":1770759841.8569646,"logger":"tls.handshake","msg":"default certificate selection results","identifier":"uptimekuma.dragofer.com","subjects":["uptimekuma.dragofer.com"],"managed":true,"issuer_key":"acme.dragofer.com-acme-central-directory","hash":"130768225451b7bc32b264fc2beefa9b5ab8cb5f54910c64ecd4b035a24b1f51"}
Feb 10 22:44:01 debian13docker caddy[112599]: {"level":"debug","ts":1770759841.8569705,"logger":"tls.handshake","msg":"matched certificate in cache","remote_ip":"192.168.22.100","remote_port":"54628","subjects":["uptimekuma.dragofer.com"],"managed":true,"expiration":1770697616,"hash":"130768225451b7bc32b264fc2beefa9b5ab8cb5f54910c64ecd4b035a24b1f51"}
Feb 10 22:44:01 debian13docker caddy[112599]: {"level":"debug","ts":1770759841.857456,"logger":"http.stdlib","msg":"http: TLS handshake error from 192.168.22.100:54628: remote error: tls: bad certificate"}

And on the frontend when connecting to a service:

Feb 10 22:44:01 debian13reverseproxy caddy[8956]: {"level":"debug","ts":1770759841.8534236,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"uptimekuma.dragofer.com"}
Feb 10 22:44:01 debian13reverseproxy caddy[8956]: {"level":"debug","ts":1770759841.853434,"logger":"tls.handshake","msg":"choosing certificate","identifier":"*.dragofer.com","num_choices":1}
Feb 10 22:44:01 debian13reverseproxy caddy[8956]: {"level":"debug","ts":1770759841.8534408,"logger":"tls.handshake","msg":"default certificate selection results","identifier":"*.dragofer.com","subjects":["*.dragofer.com"],"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"3337b1da36a6c5f55dfd13834efbf5747498f33206a2a1f0ce16b80ce5ab3c6d"}
Feb 10 22:44:01 debian13reverseproxy caddy[8956]: {"level":"debug","ts":1770759841.85345,"logger":"tls.handshake","msg":"matched certificate in cache","remote_ip":"192.168.23.101","remote_port":"49308","subjects":["*.dragofer.com"],"managed":true,"expiration":1778349088,"hash":"3337b1da36a6c5f55dfd13834efbf5747498f33206a2a1f0ce16b80ce5ab3c6d"}
Feb 10 22:44:01 debian13reverseproxy caddy[8956]: {"level":"debug","ts":1770759841.8579714,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"192.168.22.106:443","total_upstreams":1}
Feb 10 22:44:01 debian13reverseproxy caddy[8956]: {"level":"debug","ts":1770759841.859287,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"192.168.22.106:443","duration":0.001273757,"request":{"remote_ip":"192.168.23.101","remote_port":"49308","client_ip":"192.168.23.101","proto":"HTTP/2.0","method":"GET","host":"uptimekuma.dragofer.com","uri":"/","headers":{"Priority":["u=0, i"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:147.0) Gecko/20100101 Firefox/147.0"],"Te":["trailers"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"X-Forwarded-For":["192.168.23.101"],"Via":["2.0 Caddy"],"Sec-Fetch-User":["?1"],"Sec-Fetch-Site":["none"],"X-Forwarded-Proto":["https"],"X-Forwarded-Host":["uptimekuma.dragofer.com"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Mode":["navigate"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Sec-Fetch-Dest":["document"],"Accept-Language":["en-US,en;q=0.9"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"uptimekuma.dragofer.com"}},"error":"tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-10T22:44:01+01:00 is after 2026-02-10T04:26:55Z"}
Feb 10 22:44:01 debian13reverseproxy caddy[8956]: {"level":"error","ts":1770759841.8593433,"logger":"http.log.error","msg":"tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-10T22:44:01+01:00 is after 2026-02-10T04:26:55Z","request":{"remote_ip":"192.168.23.101","remote_port":"49308","client_ip":"192.168.23.101","proto":"HTTP/2.0","method":"GET","host":"uptimekuma.dragofer.com","uri":"/","headers":{"Sec-Fetch-Dest":["document"],"Sec-Fetch-Site":["none"],"Sec-Fetch-Mode":["navigate"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Te":["trailers"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Accept-Language":["en-US,en;q=0.9"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-User":["?1"],"Priority":["u=0, i"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:147.0) Gecko/20100101 Firefox/147.0"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"uptimekuma.dragofer.com"}},"duration":0.00140599,"status":502,"err_id":"t4eue3hbn","err_trace":"reverseproxy.statusError (reverseproxy.go:1390)"}

Leaving the two on while I was at work, I also saw plenty of this relating to my mTLS domain, on the frontend:

Feb 10 19:51:11 debian13reverseproxy caddy[8956]: {"level":"warn","ts":1770749471.28048,"logger":"tls","msg":"looking up info for HTTP challenge","host":"uptimekuma.dragofer.com","remote_addr":"192.168.22.100:35186","user_agent":"Go-http-client/1.1","error":"no information found to solve challenge for identifier: uptimekuma.dragofer.com"}
Feb 10 19:51:11 debian13reverseproxy caddy[8956]: {"level":"debug","ts":1770749471.2804992,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"192.168.22.106:443","total_upstreams":1}
Feb 10 19:51:11 debian13reverseproxy caddy[8956]: {"level":"debug","ts":1770749471.2816978,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"192.168.22.106:443","duration":0.001181471,"request":{"remote_ip":"192.168.22.100","remote_port":"35186","client_ip":"192.168.22.100","proto":"HTTP/1.1","method":"GET","host":"uptimekuma.dragofer.com","uri":"/.well-known/acme-challenge/WrMqeoLkU0PLr9RtHDsOLTGzrQAp4QJR","headers":{"User-Agent":["Go-http-client/1.1"],"Referer":["http://uptimekuma.dragofer.com/.well-known/acme-challenge/WrMqeoLkU0PLr9RtHDsOLTGzrQAp4QJR"],"Accept-Encoding":["gzip"],"X-Forwarded-For":["192.168.22.100"],"X-Forwarded-Proto":["https"],"X-Forwarded-Host":["uptimekuma.dragofer.com"],"Via":["1.1 Caddy"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"","server_name":"uptimekuma.dragofer.com"}},"error":"tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-10T19:51:11+01:00 is after 2026-02-10T04:26:55Z"}
Feb 10 19:51:11 debian13reverseproxy caddy[8956]: {"level":"error","ts":1770749471.2817361,"logger":"http.log.error","msg":"tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-10T19:51:11+01:00 is after 2026-02-10T04:26:55Z","request":{"remote_ip":"192.168.22.100","remote_port":"35186","client_ip":"192.168.22.100","proto":"HTTP/1.1","method":"GET","host":"uptimekuma.dragofer.com","uri":"/.well-known/acme-challenge/WrMqeoLkU0PLr9RtHDsOLTGzrQAp4QJR","headers":{"User-Agent":["Go-http-client/1.1"],"Referer":["http://uptimekuma.dragofer.com/.well-known/acme-challenge/WrMqeoLkU0PLr9RtHDsOLTGzrQAp4QJR"],"Accept-Encoding":["gzip"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"","server_name":"uptimekuma.dragofer.com"}},"duration":0.001269581,"status":502,"err_id":"0qvndbfpp","err_trace":"reverseproxy.statusError (reverseproxy.go:1390)"}

And backend:

Feb 10 22:26:04 debian13docker caddy[112599]: {"level":"info","ts":1770758764.8858702,"logger":"tls.cache.maintenance","msg":"certificate expires soon; queuing for renewal","identifiers":["uptimekuma.dragofer.com"],"remaining":-61148.885870063}
Feb 10 22:36:04 debian13docker caddy[112599]: {"level":"info","ts":1770759364.8838918,"logger":"tls","msg":"certificate is in configured renewal window based on expiration date","subjects":["uptimekuma.dragofer.com"],"expiration":1770697616,"ari_cert_id":"","next_ari_update":null,"renew_check_interval":600,"window_start":-6795364578.8713455,"window_end":-6795364578.8713455,"remaining":-61748.883890628}
Feb 10 22:36:04 debian13docker caddy[112599]: {"level":"info","ts":1770759364.884181,"logger":"tls.cache.maintenance","msg":"certificate expires soon; queuing for renewal","identifiers":["uptimekuma.dragofer.com"],"remaining":-61748.884180556}

Regarding the trust pool: I saw a message that my current subdirective “trusted_ca_certs” is deprecated and that I should use trust_pool instead. I ran into some errors about the trust pool module not being registered, but further searching shows the right syntax is:

trust_pool file {
    pem_file /etc/caddy/ca.crt
}

So, once I get the trust_pool working, my backend should be good to go (it trusts the internal CA, has a cert, and requests its cert for uptimekuma.dragofer.com from the internal CA).
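
For reference, the complete backend site block with trust_pool might then look like this (a sketch reusing the same paths as before; mode require_and_verify is an assumption on my part, and is what enforces the client-certificate half of mTLS):

```caddyfile
uptimekuma.dragofer.com {
    reverse_proxy 192.168.22.106:3001

    tls {
        # obtain this site's cert from the internal ACME server
        ca https://acme.dragofer.com/acme/central/directory
        client_auth {
            # require a client cert and verify it against the internal CA
            mode require_and_verify
            trust_pool file {
                pem_file /keys/caddy/central.crt
            }
        }
    }
}
```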

The frontend is still a problem. I think it doesn’t know it needs a certificate from the internal ca to connect to uptimekuma.dragofer.com. I saw reverse_proxy has subdirectives to specify a .crt and a .key, but it’s not clear to me how the frontend would automatically obtain those from my acme server. It also needs to verify the backend’s certificate.
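
A hedged sketch of what that could look like, assuming the frontend already has a cert/key pair issued by the internal CA sitting on disk (the frontend.crt/frontend.key paths are hypothetical — Caddy will not fetch these from the acme_server by itself):

```caddyfile
uptimekuma.dragofer.com {
    reverse_proxy https://192.168.22.106 {
        transport http {
            tls_server_name uptimekuma.dragofer.com
            # hypothetical paths: a client cert/key issued by the internal CA
            tls_client_auth /keys/caddy/frontend.crt /keys/caddy/frontend.key
            # verify the backend's cert against the internal CA's root
            tls_trust_pool file /keys/caddy/central.crt
        }
    }
}
```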

The acme server is also a problem since it appears to provide certificates to anyone who can connect, which runs counter to my aims with mTLS. The documentation shows I can restrict by IP or domain, but I’d much prefer something stronger like a token or ssh key.
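
For what it's worth, newer Caddy releases let acme_server carry an allow/deny policy (a sketch, assuming this feature is present in your version; note it restricts which identifiers may be issued, not who can reach the endpoint, so it does not replace token-based enrollment):

```caddyfile
acme_server {
    ca central
    allow {
        domains uptimekuma.dragofer.com
    }
}
```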

So I think my steps from here are:

  1. Set up a separate policy-based CA (e.g. smallstep) that provides my Caddy instances with certificates via API + token or SSH + pipe
  2. Configure the backend and frontend with trust_pools, or add the CA to the system trust store.
  3. Configure the frontend’s reverse_proxy directive to present a client certificate and key issued by the CA.
  4. Configure the frontend to require and verify the right certificate from the backend.

I realised I needed a good deal more reading on PKIs and service meshes before I could configure the mTLS that I wanted (smallstep has an extremely helpful starting resource here: Everything you should know about certificates and PKI but are too afraid to ask).

A service mesh depends on strong identity attestation, for which there are very interesting tools like the SPIFFE framework, which can attest the identity of individual processes. From what I can see, Caddy’s built-in CA only offers ACME, which is better suited to a web PKI between webservers, or to delivering a certificate after something else has done the attestation.

I also had a hard time finding examples of service meshes in the Caddy documentation. The mTLS setups I found were geared towards identifying clients, not towards securing communication between server resources inside a LAN.

So I took a pretty deep plunge into configuring 1) my own CA, 2) SPIFFE/SPIRE and 3) a different proxy software that has a stronger focus on service meshes, and that has integrations with my CA and SPIFFE (Envoy). The result should be a zero trust network.

I didn’t manage to work out how to configure mTLS between Caddy instances before I reinstalled my proxy VM. But the key thing to understand is that it’s a very symmetrical setup, where both sides need a common root of trust (the root certificate) and each their own certificate/keypair. The hard part is representing that symmetry in the Caddy config, since the configuration on the sending and receiving sides is asymmetric.
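
To make that symmetry concrete, here is a minimal static-certificate sketch (all file paths are hypothetical, and it assumes both sides already hold a cert/key pair issued by a shared root):

```caddyfile
# Frontend: presents a client cert, verifies the backend against the shared root
uptimekuma.dragofer.com {
    reverse_proxy https://192.168.22.106 {
        transport http {
            tls_client_auth /keys/caddy/frontend.crt /keys/caddy/frontend.key
            tls_trust_pool file /keys/caddy/root.crt
        }
    }
}

# Backend: presents a server cert, requires and verifies the frontend's cert
uptimekuma.dragofer.com {
    tls /keys/caddy/backend.crt /keys/caddy/backend.key {
        client_auth {
            mode require_and_verify
            trust_pool file {
                pem_file /keys/caddy/root.crt
            }
        }
    }
    reverse_proxy 127.0.0.1:3001
}
```

Each side presents its own certificate and verifies the peer against the same root; the asymmetry is only in where the directives live (transport http on the client side, tls/client_auth on the server side).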

Did you read this post?

I think I came across that post when I was researching, but had already gotten that far by using the Caddy documentation. What it describes is a single Caddy instance that reverse proxies to services on the LAN, but what I’m actually wanting to do is figure out how to have multiple proxy instances that communicate with each other via mTLS.

This is to solve the following problems:

  • I want to avoid sending data within my network over http. Apps have diverse TLS implementations, and some are even http-only. I would like to attach a proxy instance to each app which handles the TLS termination in a consistent way.
  • Apps should not be able to reach all other resources simply because they are in the same subnet. As my number of services grows, so does the risk of a supply-chain attack or of one of them behaving maliciously. I want to bind my apps to localhost and let the proxy instances (central ingress and per-app sidecar) be the only way to access them. The proxies should control which app has access to which other resources.

A diagram of such a topology can be found here: