Using 2 caddy servers behind a load balancer

I’ve been running a single Caddy instance for a while now and it’s been working amazingly well. I want to add a bit more fault tolerance to my application by running 2 Caddy servers behind a load balancer. I have a couple of questions.

Is there any documentation on this anywhere?

Is there a way for Caddy to share its configuration?

I’m using automatic TLS for Custom Domains. Is there a way to share those certs between servers?


Hey Mitch –

You can give both Caddy instances the same configuration if that is what is right for your deployment. It’s really hard to say without more details.

Our docs explain this:

Any Caddy instances that are configured to use the same storage will automatically share those resources and coordinate certificate management as a cluster.

So, simply configure them to use the same storage (for example, same shared folder on a mounted file system, or use another storage backend):

Our wiki has a list of storage modules to choose from.
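For instance, with the default file-system storage module, each instance’s Caddyfile could point at the same shared mount in its global options block (the path here is just an illustration):

```
{
	storage file_system {
		root /mnt/shared/caddy
	}
}
```

As long as every instance resolves that path to the same underlying storage, they will coordinate certificate management automatically.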


I actually managed to do just this using the following on AWS:

  • EC2 instances running Amazon-Linux
  • An Elastic File System

Build out your configuration on the EC2 instance, then mount the EFS drive.
Be sure to set it to remount on reboot.
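For example, remounting on reboot is typically handled with an /etc/fstab entry along these lines, using the amazon-efs-utils mount helper (the file system ID and mount point below are placeholders for your own):

```
fs-0123456789abcdef0:/ /mnt/efs/fs1 efs _netdev,tls 0 0
```

The `_netdev` option tells the system to wait for the network before mounting, and `tls` encrypts traffic to EFS in transit.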

After I did that, I added this to my Caddyfile:

        {
                storage file_system {
                        root /mnt/efs/fs1
                }
                on_demand_tls {
                        # ... (on-demand TLS settings omitted here)
                }
        }

Above, root /mnt/efs/fs1 is where I mounted my file system. Technically it can be anywhere, like /etc/caddy/certificates or whatever.

The only load balancer that will work for you in this case is the TCP Network Load Balancer; the Application Load Balancer will not pass HTTPS traffic through unless the certificate is installed on the load balancer itself.

If you do go this route, getting the health checks right took me some time. Check out my other question and you’ll see the solution.


Thanks everyone for your replies.
That cleared things up.
I am hosting on AWS, so using EFS is going to be my best bet.


One more question related to this. If I update one of the servers in the “cluster” using the API, will it update the configuration of the other ones?

No, config is not shared (for now).

So I would have to send an API call for each server in the cluster?
If I set the XDG_CONFIG_HOME env variable to point to the same storage across all servers would that possibly work?

Yeah, you could possibly hack something together with this information:

Buuuuut I haven’t designed it for that, so, if you try it, let us know if it works!
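For the “one API call per server” approach, a sketch like this could work. The hostnames and config file name below are assumptions for illustration; Caddy’s admin API listens on port 2019 by default and accepts a full JSON config POSTed to /load:

```shell
# Sketch: push the same JSON config to every instance's admin API.
# Hostnames and the config file name are hypothetical placeholders.
servers="caddy-1.internal caddy-2.internal"

push_config() {
  for host in $servers; do
    # POST the full config to Caddy's /load admin endpoint (default port 2019)
    curl -sS -X POST "http://${host}:2019/load" \
         -H "Content-Type: application/json" \
         -d @caddy.json
  done
}
```

Note that by default the admin endpoint only listens on localhost, so you would need to configure it to listen on a reachable interface (and secure it) for this to work across hosts.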

So after a little experiment, it looks like it could work. But I would need to watch the autosave.json file for changes and run sudo systemctl restart caddy or something.

You might be able to use --watch to your advantage, too.
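For reference, --watch is a flag to caddy run that makes Caddy reload its config automatically whenever the config file changes on disk, e.g.:

```
caddy run --config /etc/caddy/Caddyfile --watch
```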

You should prefer sudo systemctl reload caddy over restart. Reloading is graceful and zero-downtime; a restart will have downtime.


So I tried to do a reload but I get this error. Any idea why?

    Failed to reload caddy.service: Job type reload is not applicable for unit caddy.service.
    See system logs and 'systemctl status caddy.service' for details.

You need an ExecReload line in your service file.

You can find the latest recommended service files here:
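The relevant part of the unit file looks roughly like this (paths may differ on your system; check the official service file for the current recommendation):

```
[Service]
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force
```

With an ExecReload line present, systemctl reload caddy will invoke caddy reload, which applies the new config gracefully instead of restarting the process.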