Caddy proxy-chain implementation

Due to the complexity of handling multiple platforms across several domains behind a single IP, the implementation is as follows:
The primary Caddy server faces the WAN, acting as the controller and TLS manager.
Secondary Caddy servers run on each machine, listening on port 443 and forwarding each request to the appropriate back-end according to a request/back-end matrix.

Both the primary and secondary servers only work when the proxy directive includes { transparent insecure_skip_verify }.
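As I understand the chain, a minimal sketch of the two tiers might look like this (all hostnames and IPs are hypothetical; Caddy v1 syntax as used above):

```
# Primary Caddy (WAN-facing): controller, manages Let's Encrypt certificates
example.com {
    proxy / https://10.0.0.2 {
        transparent
        insecure_skip_verify   # secondary serves a self-signed certificate
    }
}

# Secondary Caddy on 10.0.0.2: listens on 443, forwards to the local app
:443 {
    tls self_signed
    proxy / localhost:8080 {
        transparent
    }
}
```

The `transparent` preset passes the original Host and X-Forwarded-For headers upstream, which many back-end platforms need in order to generate correct links.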

Observations are as follows:
With the secondary stack absent, connections run smoothly, though a 403 error still appears from time to time when moving from one page to another, before the connection resumes. It doesn't happen consistently, and it isn't tied to any particular platform.

Please advise on the proper configuration for a proxy chain, and/or on how to split the Caddyfile into multiple files, with the goal of eventually eliminating the back-end web servers.

I’m not a fan of insecure_skip_verify; I don’t really see a good reason to use HTTPS without verification. I’d rather just use port 80 over an internal-only network.

Not sure how to tackle your proxy-chain issue, but I split my Caddyfile using the import directive (see the docs). Here’s what I use:

# /etc/caddy/Caddyfile
import /etc/caddy/vhosts/*.caddy
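For completeness, a vhost file under that directory might look like this (the filename, site, and upstream are hypothetical):

```
# /etc/caddy/vhosts/example.com.caddy
example.com {
    proxy / localhost:8080 {
        transparent
    }
}
```

Each site gets its own .caddy file, and the main Caddyfile stays a one-line import.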

Thanks for the hint about the import directive.
I fully agree with you regarding insecure_skip_verify, but I’m just starting out with Caddy, and it helps me say goodbye to Pound, HAProxy, and the like by relying on Let's Encrypt certificates.

I tried without that directive at the beginning, but it helps get things working without going too deep into TLS.
My thought is that it is enough, for now, to access everything through the front proxy using LE certificates while the back-ends work with self-signed ones.
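Concretely, the back-end side of that arrangement can be sketched like this (address and port hypothetical, Caddy v1 syntax); it pairs with insecure_skip_verify on the front proxy, since the self-signed certificate can't be verified:

```
# Secondary Caddy on a back-end machine: self-signed cert, internal only
:443 {
    tls self_signed
    proxy / localhost:8080 {
        transparent
    }
}
```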

I’ll keep going deeper into Caddy so I can soon say goodbye to Apache and nginx as well, and I hope the community can extend the Caddy examples to cover more platforms.

I understand, but HTTPS is designed to start throwing errors at the first sign of trouble so that you know there are problems with your setup. I don’t see any point in configuring it half-baked to start out with.

Why bother accessing the back-end with a self-signed certificate at all? Unless your back-end is all the way across the internet and you very specifically want an encrypted connection but don’t care about the possibility of a MITM attack, there is no benefit, just a slight overhead on the TLS handshake. Literally zero usefulness - correct me if I’m wrong… You might as well just configure it on :80 and move it to Automatic HTTPS via DNS later on if you feel the need.
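The plain-HTTP alternative suggested above could be sketched as follows (addresses hypothetical): the front proxy terminates TLS with its Let's Encrypt certificate and talks to the back-end over the internal network on port 80, so no certificate or verification setting is needed between the two:

```
# Front proxy: TLS terminated here with a Let's Encrypt certificate
example.com {
    proxy / 10.0.0.2:80 {
        transparent
    }
}

# Back-end: plain HTTP, reachable only on the internal network
:80 {
    proxy / localhost:8080 {
        transparent
    }
}
```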