I’m thinking about using Caddy as a load balancer, but in a cluster of multiple load balancers, so the load balancer isn’t a single point of failure. That means that each Caddy instance must have the same configuration, share TLS certificates, etc. I was curious about how you would approach this problem, as I have thought about several ways of doing it.
Example setup: three VPS nodes in different availability zones, all running the same backend services, and Caddy. The options I've considered:
1. Set up Caddy on all three nodes, obtaining certificates via the DNS challenge for each domain, and use the `reverse_proxy` directive to proxy to the backend services on all three nodes. Then distribute traffic among the load balancers via round-robin DNS.
2. Similar to the above, but with one Caddy instance as the active load balancer, and quick (but manual) failover via DNS.
3. What am I thinking? This is way too complex. Just run one dedicated Caddy instance as the load balancer and hope it doesn't go down.
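For context, option 1 might look something like this as a Caddyfile (a sketch only: the domain, backend addresses, and DNS provider are placeholders, and the DNS challenge requires building Caddy with a DNS provider plugin such as the Cloudflare one):

```
example.com {
	# DNS challenge, so every node can obtain certificates without
	# needing to be the one that answers an HTTP/TLS-ALPN challenge.
	# Assumes the caddy-dns/cloudflare plugin is compiled in.
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	# Proxy to the backend service on all three nodes; Caddy
	# load-balances between the listed upstreams (the default
	# policy is random, so round_robin is set explicitly here).
	reverse_proxy 10.0.1.10:8080 10.0.2.10:8080 10.0.3.10:8080 {
		lb_policy round_robin
	}
}
```

The same Caddyfile can then be deployed to all three nodes, with round-robin DNS spreading clients across them.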
I would appreciate any input on this. I could also just use the Elastic Load Balancer provided by AWS, but I like Caddy.
Caddy can already work in a cluster if all instances share the same storage backend.
You just need to make sure they use a solution that keeps the storage in sync between all your instances. Caddy uses a file system storage driver by default, but it can be configured to use Redis or DynamoDB as well (via third-party storage plugins).
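For example, with a Redis storage plugin compiled in, the global options block of the Caddyfile might look like this (a sketch: the exact option names depend on which plugin you pick, and the address is a placeholder):

```
{
	# Hypothetical: requires a third-party Redis storage module,
	# e.g. installed with xcaddy. Check your plugin's docs for
	# the exact option names it supports.
	storage redis {
		address 10.0.0.5:6379
	}
}
```

All three nodes would point at the same Redis instance, so certificates obtained by one node are immediately available to the others.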
If you use the file system driver, you could use something like GlusterFS or a similar tool to keep a mounted path in sync across nodes, then configure Caddy to use that location as storage instead of its default.
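That variant stays on the built-in file system storage and only changes its root, something like this (the mount path is a placeholder for wherever your replicated filesystem lives):

```
{
	# Point Caddy's storage at a path replicated across all nodes,
	# e.g. a GlusterFS or NFS mount (path is hypothetical).
	storage file_system {
		root /mnt/gluster/caddy
	}
}
```

Be aware that certificate operations rely on locking within the storage, so the shared filesystem needs to handle that correctly.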
You’ll need to keep the configs in sync on your own, because Caddy doesn’t take care of that itself, though that might come as a feature in the future.
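One simple way to handle config sync yourself is to push the Caddyfile from wherever you edit it and reload each instance. A minimal sketch, assuming SSH access to each node (hostnames and paths are placeholders; it prints the commands so you can review them, and you can pipe to `sh` or drop the `echo`s to run them for real):

```shell
#!/bin/sh
# Hypothetical node list and config path -- adjust for your setup.
NODES="lb1.example.com lb2.example.com lb3.example.com"
CADDYFILE=/etc/caddy/Caddyfile

# Print the commands that would push the config to one node and
# gracefully reload Caddy there (caddy reload is zero-downtime).
sync_config() {
	echo "rsync -az $CADDYFILE root@$1:$CADDYFILE"
	echo "ssh root@$1 'caddy reload --config $CADDYFILE'"
}

for node in $NODES; do
	sync_config "$node"
done
```

A proper configuration management tool (Ansible and the like) does the same job more robustly, but the idea is the same: one source of truth, pushed to every node, followed by a reload.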