I am trying to determine a way to run Caddy with Docker Swarm in a highly available way. Right now I have 4 nodes (1 manager and 3 workers). It appears Caddy can only run on 1 manager at a time, so if the manager fails, everything goes down. This also routes all traffic through the one manager.
I want Caddy to run on managers and workers, so that if a manager or worker fails, the system continues to work.
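For reference, a minimal sketch of the kind of deployment I'm after, assuming a stock compose stack file (the image tag, ports, and volume name are placeholders; `deploy: mode: global` asks Swarm to run one task on every node, managers and workers alike):

```yaml
version: "3.8"

services:
  caddy:
    image: caddy:2
    deploy:
      mode: global   # one Caddy task per node, not pinned to the manager
    ports:
      # host-mode publishing so each node answers on its own IP,
      # instead of routing everything through the ingress mesh
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
    volumes:
      - caddy_data:/data

volumes:
  caddy_data:
```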
4. Error messages and/or full log output:
When the manager node goes down, applications go down.
5. What I already tried:
I tried setting it up like this, but couldn't get the example to work. It seems the Caddy controller didn't start.
Note that in the distributed.yaml you shared, there are “servers” and “controllers”.
A controller runs on a swarm manager and sends the Caddyfile to the actual servers serving traffic on a swarm worker.
If the swarm manager currently running the controller goes down, the actual servers will continue to serve traffic just fine.
You'll just miss out on config changes until a controller is redeployed somewhere. It doesn't matter whether it's the same swarm manager coming back up, or another one.
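A rough sketch of that controller/server split, written from memory of the caddy-docker-proxy distributed example — the image tag, network subnet, and the `CADDY_DOCKER_MODE` / `CADDY_CONTROLLER_NETWORK` variable names should all be checked against the repo before use:

```yaml
version: "3.8"

services:
  caddy_controller:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    environment:
      - CADDY_DOCKER_MODE=controller          # watches Docker, builds the Caddyfile
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    networks:
      - caddy_controller
    deploy:
      placement:
        constraints:
          - node.role == manager              # needs the Swarm API

  caddy_server:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    environment:
      - CADDY_DOCKER_MODE=server              # receives the Caddyfile, serves traffic
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    networks:
      - caddy_controller
    deploy:
      mode: global                            # a server on every node

networks:
  caddy_controller:
    driver: overlay
    ipam:
      config:
        - subnet: 10.200.200.0/24
```

With this shape, losing the manager only pauses config updates; the globally deployed servers keep serving with their last-known config.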
Caddy will store public certificates, private keys, and other assets in its configured storage facility (or the default one, if not configured – see link for details).
[…]
Any Caddy instances that are configured to use the same storage will automatically share those resources and coordinate certificate management as a cluster.
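In Caddyfile terms, that's the global `storage` option. A minimal sketch using the built-in `file_system` module pointed at a directory every instance can reach (the path is a placeholder; third-party storage modules such as redis or consul serve the same purpose if you don't have a shared filesystem):

```
{
	storage file_system {
		root /shared/caddy-data
	}
}
```

As long as every instance points at the same storage, they coordinate certificate issuance and renewal automatically, with no extra clustering configuration.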
Honestly, I don't really feel like explaining Docker Swarm in great detail right now, especially considering there are lots of resources out there that do just that already.
This also skirts the line of being slightly off-topic, and with the current number of volunteers, it's only really feasible to focus on Caddy questions.