Need help/pointers on obtaining TLS certificates in a Caddy-behind-Caddy deployment

1. The problem I’m having:

We use Caddy and Caddy-docker-proxy internally in the following configuration:

Client app stack(s) in Docker → Caddy (caddy-docker-proxy) as the “ingress router” for the Docker host → Caddy (standard Caddy) as the “global router” at our network edge (this instance routes requests to many different customer environments) → Cloudflare.

In this configuration, our Caddy instance at the network edge (“Global Router”) can get TLS certificates without issue, and the connection between Cloudflare and our network edge can operate in “Full (Strict)” mode. Because we proxy various names for different customers and do not control their DNS, we can’t rely on the DNS challenge for our “Global Router” Caddy instance and instead rely on the HTTP challenge with the disable_tlsalpn_challenge setting.
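For context, the edge configuration looks roughly like this (a sketch only; customer1.example.com and the upstream address are placeholders, not our real names):

customer1.example.com {
    tls {
        issuer acme {
            # The TLS-ALPN challenge can't complete through Cloudflare's
            # proxy (it terminates TLS in front of us), so we rely on
            # the HTTP challenge only.
            disable_tlsalpn_challenge
        }
    }
    reverse_proxy https://ingress1.internal:443
}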

However, the secondary Caddy instances (caddy-docker-proxy as the “ingress router” on the Docker hosts) are unable to get a valid cert through either the DNS or HTTP challenge, so currently we use tls internal on the “ingress router” hosts and tls_insecure_skip_verify on the upstream “Global Router”.
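Concretely, the current (insecure) hop looks roughly like this; hostnames and upstream names are again placeholders:

# Ingress side (what our caddy-docker-proxy labels effectively generate):
app1.customer.example.com {
    tls internal
    reverse_proxy app1:8080
}

# Edge ("Global Router") side, as it stands today:
app1.customer.example.com {
    reverse_proxy https://ingress1.internal:443 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}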

What I was wondering is: is there a way for the upstream “Global Router” to provide TLS certificates and keys to the downstream Caddy instances so that the end-to-end connection can work in a fully trusted manner? Or, in reverse, is there an easy, automated way for (a dynamic number of) downstream Caddy instances to provide their CA to the upstream Caddy instance, so that tls internal does not require the corresponding tls_insecure_skip_verify?

If there are some docs or examples for this which I am missing please let me know!

2. Caddy version:

  • caddy “global router” instance: v2.10.2
  • caddy-docker-proxy “ingress router” instances: v2.11.x

I would not try to have the edge Caddy distribute public ACME certificates and private keys to the downstream Caddy instances. That creates key-distribution and lifecycle problems, and it is not really what ACME is solving here.

The cleaner model is to treat the edge-to-ingress hop as a private trust boundary:

  • public/client side: Cloudflare → edge Caddy uses public ACME certs
  • internal side: edge Caddy → ingress Caddy uses either plain HTTP on a private network, or HTTPS with a private CA
  • ingress Caddy can use tls internal, but the edge Caddy should explicitly trust the ingress CA instead of using tls_insecure_skip_verify

For example, export/distribute the ingress CA root to the edge router (with Caddy’s internal issuer, the root typically lives at pki/authorities/local/root.crt under the data directory, e.g. /data/caddy/pki/authorities/local/root.crt in the official Docker image), then configure the edge proxy transport with a trust pool:

reverse_proxy https://ingress1.internal:443 {
    transport http {
        tls_trust_pool file /path/to/ingress-ca-root.pem
        tls_server_name ingress1.internal
    }
    # If the ingress Caddy routes by the original customer Host header,
    # make sure that is preserved.
    header_up Host {host}
}

The important part is that the certificate the ingress presents must be valid for whatever tls_server_name the edge Caddy uses; with tls internal, that means the ingress needs a site (or an explicitly issued cert) covering that internal name, since internal certs are issued for the site addresses Caddy serves.
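If the ingress only has sites for the customer domains (the usual result of caddy-docker-proxy labels), a variant is to send the original hostname as SNI instead, so the ingress’s internally issued cert for that site is the one presented. Placeholders are supported in tls_server_name, though the docs note this clones the TLS config per request; a sketch:

reverse_proxy https://ingress1.internal:443 {
    transport http {
        tls_trust_pool file /path/to/ingress-ca-root.pem
        # Use the original customer hostname as SNI so the ingress's
        # internally issued cert for that site matches.
        tls_server_name {host}
    }
    header_up Host {host}
}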

For a dynamic number of ingress routers, I would avoid each ingress instance generating its own unrelated CA unless you also have automation to distribute all of those roots to the edge. A shared internal CA or a proper internal PKI process is usually easier: issue each ingress a cert for its internal name, trust that CA once on the edge router, and keep the public ACME certs only on the public edge.
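Caddy can do this without extra tooling via the pki global option: point the internal issuer on every ingress at the same root, and tls internal then issues certs chaining to it. A sketch, assuming the shared root is mounted at /certs/ on each ingress host (the paths and the key-distribution mechanism are up to you):

{
    pki {
        # Override the default internal CA ("local") with a shared root,
        # so tls internal issues certs chaining to it on every host.
        ca local {
            root {
                format pem
                cert /certs/shared-internal-root.pem
                key  /certs/shared-internal-root.key
            }
        }
    }
}

The edge’s tls_trust_pool then only ever needs that one root file, no matter how many ingress instances you add.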

So the answer is probably not to get public ACME certs onto the downstream instances at all: use public ACME at the edge, and a deliberate private PKI (or plain private HTTP) between the two Caddy layers.