Using 2 Caddy servers behind a load balancer

I’ve been running a single Caddy instance for a while now and it’s been working amazingly. I want to add a bit more fault tolerance to my application and run 2 Caddy servers behind a load balancer. I have a couple of questions.

Is there any documentation on this anywhere?

Is there a way for Caddy to share its configuration?

I’m using automatic TLS for Custom Domains. Is there a way to share those certs between servers?


Hey Mitch –

You can give both Caddy instances the same configuration if that is what is right for your deployment. It’s really hard to say without more details.

Our docs explain this:

Any Caddy instances that are configured to use the same storage will automatically share those resources and coordinate certificate management as a cluster.

So, simply configure them to use the same storage (for example, the same shared folder on a mounted file system, or another storage backend): JSON Config Structure - Caddy Documentation

Our wiki has a list of storage modules to choose from.
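For example, a minimal JSON config pointing an instance at shared storage might look like this (the /mnt/shared/caddy path is just a placeholder for your own mount; give every instance the same storage block):

{
	"storage": {
		"module": "file_system",
		"root": "/mnt/shared/caddy"
	}
}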


I actually managed to do just this using the following on AWS:

  • EC2 instances running Amazon-Linux
  • An Elastic File System

Build out your configuration on the EC2 instance, then attach the EFS drive.

Be sure to set it to remount on reboot.
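For the remount, an /etc/fstab entry along these lines does the trick (the filesystem ID is a placeholder; this assumes the amazon-efs-utils mount helper is installed):

fs-0123456789abcdef0:/ /mnt/efs/fs1 efs defaults,_netdev 0 0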

After I did that, I added this to my Caddyfile:

{
	storage file_system {
		root /mnt/efs/fs1
	}
	on_demand_tls {
		ask https://api.nvssolutions.com/api/tada-domain-checker-base
	}
}

Above, root /mnt/efs/fs1 is where I mounted my file system. Technically it can be anywhere, like /etc/caddy/certificates or whatever.

(edit)
The only load balancer that will work for you in this case is the TCP Network Load Balancer; the Application Load Balancer will not pass HTTPS traffic through unless the certificate is added to the load balancer itself.

If you do go through with this, getting the health checks right took me some time. Check out my other question and you’ll see the solution.


Thanks everyone for your replies.
That cleared things up.
I am hosting on AWS, so using EFS is going to be my best bet.

Thanks

One more question related to this. If I update one of the servers in the “cluster” using the API, will it update the configuration of the other ones?

No, config is not shared (for now).

So I would have to send an API call for each server in the cluster?
If I set the XDG_CONFIG_HOME env variable to point to the same storage across all servers would that possibly work?

Yeah, you could possibly hack something together with this information: Conventions — Caddy Documentation

Buuuuut I haven’t designed it for that, so, if you try it, let us know if it works!

So, after a little experiment, it looks like it could work. But I would need to watch the autosave.json file for changes and run sudo systemctl restart caddy or something.
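Something like this little polling script is what I have in mind (a rough sketch; polling rather than inotify, since inotify generally won’t see writes made by other NFS/EFS clients):

#!/bin/sh
# Poll the shared autosave file and restart Caddy when it changes.
# Path assumes XDG_CONFIG_HOME=/mnt/efs/fs1.
FILE=/mnt/efs/fs1/caddy/autosave.json
LAST=$(md5sum "$FILE")
while sleep 5; do
	CUR=$(md5sum "$FILE")
	if [ "$CUR" != "$LAST" ]; then
		LAST=$CUR
		sudo systemctl restart caddy
	fi
done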

You might be able to use --watch to your advantage, too.

You should prefer sudo systemctl reload caddy over restart. Reloading is graceful and zero-downtime; restarts will have downtime.


So I tried to do a reload but I get this error. Any idea why?

Failed to reload caddy.service: Job type reload is not applicable for unit caddy.service.
See system logs and 'systemctl status caddy.service' for details.

You need an ExecReload line in your service file.
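It’ll look something like this (adjust the --config path to whatever your ExecStart uses):

ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile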

You can find the latest recommended service files here:


--watch does not seem to do anything.

#
# For using Caddy with its API.
#
# This unit is "durable" in that it will automatically resume
# the last active configuration if the service is restarted.

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target

[Service]
Environment="XDG_CONFIG_HOME=/mnt/efs/fs1"
User=ubuntu
Group=www-data
ExecStart=/usr/bin/caddy run --watch --environ --resume
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

Not sure if I’m using it correctly, though.

If you use --resume, the config that gets loaded is the one that was resumed from before (which only changes if you update the config through the API). Usually you won’t use --watch with --resume.

Ok, so I think I’ve found a solution.

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target

[Service]
Environment="XDG_CONFIG_HOME=/mnt/efs/fs1"
User=ubuntu
Group=www-data
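# --config points at the shared JSON on the EFS mount; --watch reloads it
# whenever the file changes (no --resume here, since that file is the source of truth)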
ExecStart=/usr/bin/caddy run --watch --environ --config /mnt/efs/fs1/caddy/caddy.json
ExecReload=/usr/bin/caddy reload --config /mnt/efs/fs1/caddy/caddy.json
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

This did update when I changed the caddy.json file. So I think if I have 2 Caddy servers share that config file, I can update the JSON file from my webapp when I add a custom domain.
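On the webapp side, I’m thinking something like this for the write itself (a rough sketch; $NEW_CONFIG_JSON is a placeholder for whatever my app generates, and the mv makes the update atomic so --watch never sees a half-written file):

# Write the new config to a temp file on the same filesystem,
# then rename it into place atomically.
tmp=$(mktemp /mnt/efs/fs1/caddy/caddy.json.XXXXXX)
printf '%s' "$NEW_CONFIG_JSON" > "$tmp"
mv "$tmp" /mnt/efs/fs1/caddy/caddy.json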

I’ll let you know if this works. Can you see any issues with this?
