1. The problem I’m having:
We use Caddy and Caddy-docker-proxy internally in the following configuration:
Client app stack(s) in Docker → Caddy (caddy-docker-proxy) as the “ingress router” for the Docker host → Caddy (standard Caddy) as the “global router” at our network edge (this instance routes requests to many different customer environments) → Cloudflare.
In this configuration, our Caddy instance at the network edge (“Global Router”) can get TLS certificates without issue, and the connection between Cloudflare and our Network Edge can operate in “Full (Strict)” mode. As we are proxying various names for different customers and do not control their DNS, we can’t rely on DNS challenge for our “Global Router” Caddy instance and instead rely on HTTP challenge with the disable_tlsalpn_challenge setting.
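For context, the relevant per-site block on the global router looks roughly like this (hostnames are placeholders, and the exact config may differ slightly from what we run):

```
customer-app.example.com {
	tls {
		issuer acme {
			# We rely on the HTTP challenge only
			disable_tlsalpn_challenge
		}
	}
	reverse_proxy https://docker-host.internal
}
```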
However, the secondary Caddy instances (caddy-docker-proxy as the “ingress router” on the Docker hosts) are unable to get a valid cert through either the DNS or HTTP challenge, so currently we use tls internal on the “ingress router” hosts and tls_insecure_skip_verify on the upstream “Global Router”.
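The current (imperfect) workaround looks roughly like this, again with placeholder names:

```
# On the Docker host ("ingress router", e.g. generated via caddy-docker-proxy labels):
customer-app.example.com {
	tls internal
	reverse_proxy app-container:8080
}

# On the network edge ("Global Router"):
customer-app.example.com {
	reverse_proxy https://docker-host.internal {
		transport http {
			# Accept the ingress router's internal-CA cert without verification
			tls_insecure_skip_verify
		}
	}
}
```

This works, but the hop between the two Caddy instances is not actually verified.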
What I was wondering is: is there a way for the upstream “Global Router” to provide TLS certificates and keys to the downstream Caddy instances so that the end-to-end connection can work in a fully trusted manner? Or, in reverse, is there an easy, automated way for (a dynamic number of) downstream Caddy instances to provide their CA to the upstream Caddy instance, so that tls internal does not require the corresponding tls_insecure_skip_verify?
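One idea I had, but have not tested: if I could export each downstream instance's internal CA root (I believe the admin API exposes it, e.g. at /pki/ca/local/certificates), the global router could pin it with tls_trust_pool instead of skipping verification, something like (paths and hostnames are placeholders):

```
customer-app.example.com {
	reverse_proxy https://docker-host.internal {
		transport http {
			# Trust only this downstream Caddy's internal CA root
			# (tls_trust_pool with a file source requires Caddy v2.8+)
			tls_trust_pool file /etc/caddy/trust/docker-host-root.crt
		}
	}
}
```

But I don't see an obvious way to keep those root certs in sync automatically as ingress hosts come and go.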
If there are some docs or examples for this which I am missing please let me know!
2. Caddy version:
- caddy “global router” instance: v2.10.2
- caddy-docker-proxy “ingress router” instances: v2.11.x