Using Caddy with AWS or Google Cloud

Is the general pattern when using Caddy with AWS or Google Cloud to run it as a kind of "sidecar" Docker container?

Since AWS and GCP deployments sit behind load balancers, I guess you would configure the load balancer to pass the SSL connection through (rather than terminating it at the load balancer), and then when the request hits your instance, the instance runs two Docker containers: 1) Caddy and 2) the web server.

So Caddy would handle the SSL-related work and proxy requests to your backend web server.

Is this correct, and is it the best practice?

Yeah, essentially. It depends on what kind of app you're running, but a very common setup is to use Caddy's reverse_proxy in front of your app, so you get the benefits of Caddy's automatic HTTPS.
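For example, the Caddyfile for the Caddy container can be as minimal as this — assuming your web server container is reachable on the Docker network as `app` on port 8080 (both placeholder names):

```
example.com {
	# Caddy obtains and renews the certificate for example.com automatically,
	# terminates TLS, and forwards plain HTTP to the app container.
	reverse_proxy app:8080
}
```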

Like you said, your load balancer should be configured in TCP mode so it doesn't try to terminate TLS. Keep in mind that if you have more than one instance of Caddy running for the same domain, you should make sure that Caddy's storage is shared across all instances so that they share ACME state and certs/keys. See the docs for that.
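As a rough sketch, if every instance mounts the same shared volume (say an EFS or NFS share at /mnt/caddy-storage — a placeholder path), you can point Caddy's storage at it in the Caddyfile global options:

```
{
	# Global options: all Caddy instances that mount this same path will
	# share certificates, keys, and ACME account state.
	storage file_system {
		root /mnt/caddy-storage
	}
}
```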

Yeah, I was thinking of Redis or a shared file system…

Thanks!
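One note if you go the Redis route: Redis storage isn't built into Caddy, so you'd need to compile in a third-party storage plugin (e.g. with xcaddy). The global options would look something like the sketch below — the module and option names are assumptions that vary by plugin and version, so check the plugin's README:

```
{
	# Assumed syntax for a third-party Redis storage plugin;
	# treat the option names as placeholders.
	storage redis {
		host 127.0.0.1
		port 6379
	}
}
```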

