Caddy Cluster Storage

Hello Community,

We run a Docker Swarm cluster with Caddy as the ingress service. It is deployed in global mode, so one instance runs on each node.
The TLS data is shared via the caddy-tlsconsul storage module.
This works great. The configuration is loaded from a Caddyfile mounted on each node; all nodes use the same Caddyfile.
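For reference, clustered storage like this is enabled with a global option in the Caddyfile; a minimal sketch (the address and prefix values are placeholders, not our real settings):

```
{
	storage consul {
		address "consul:8500"
		prefix  "caddytls"
	}
}
```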

Now we are trying to move to the API approach. We also added the --resume flag to the start command.
One route was created by a curl request to the API of a single container (not the service).
The route works when we request it. But when I open /config/caddy/autosave.json, the new config is stored only inside that container, not in the others.
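The request we sent looks roughly like this (it needs a running Caddy instance; the server name "srv0" and the upstream are assumptions, not our real values):

```shell
# Add a route via one container's admin API (not the swarm service).
curl -X POST "http://localhost:2019/config/apps/http/servers/srv0/routes" \
  -H "Content-Type: application/json" \
  -d '{"match":[{"host":["example.com"]}],"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"backend:8080"}]}]}'
```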

So my question: is there a way to also share the configuration through the storage modules?
Or am I misunderstanding some concepts?

With best regards :slight_smile:

I’m not very familiar with Docker, but isn’t there a way to configure it to mount that folder from outside the container?

Yes, this volume is mounted from outside the container. The issue I have is the following:
Two containers on different nodes run Caddy as a cluster. Do I have to configure anything for them to run as a cluster, or does it work out of the box?
Both containers have the configuration directory mounted, so changes made via the API are persistent.
Now my issue is:
I add a route through the Caddy API of container 1. Container 2 does not receive information about the new route. I assume this because its autosave.json does not include the information that is available on container 1.
The behavior should be the same when I send the request to the service, which acts as a load balancer.

I hope I did not misunderstand any of the Caddy concepts.

If you share the /config volume across your cluster, then they will all have the same autosave.json. At that point, after making a config change, all you need to do is trigger a config reload on all the other instances in your cluster (i.e. run the caddy reload command in the containers, which under the hood makes an HTTP request to Caddy's API to load the new config).
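A sketch of that reload step, assuming a shared /config volume and a container named caddy_container (both names are illustrative; this needs a running container):

```shell
# Run this in each container after the shared autosave.json has changed,
# so the instance re-reads the persisted config:
docker exec caddy_container caddy reload --config /config/caddy/autosave.json
```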

Caddy doesn't have clustered config support yet (only clustered storage), but that may come later.

Thank you for your response, that helps a lot :).
OK, when I do this with, for example, a shared S3 volume, there might be race conditions if container 1 and container 2 each receive a route update request at nearly the same time.
Container 1 writes its new route 1 to autosave.json, then triggers a reload.
Container 2 receives new route 2, also saves it to autosave.json, and triggers a reload.
Will both routes end up in the new config, or might one of them be lost?
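The lost-update scenario above can be sketched with plain files: each instance persists its own complete snapshot, so whichever write lands last simply replaces the other (file name and contents here are illustrative, not Caddy's actual format):

```shell
# Each instance writes its own full config snapshot to the shared file.
echo '{"routes": ["route-1"]}' > /tmp/autosave-demo.json   # container 1 saves
echo '{"routes": ["route-2"]}' > /tmp/autosave-demo.json   # container 2 saves
# The second write replaced the whole file, so route-1 is lost:
cat /tmp/autosave-demo.json
```

Since autosave.json holds the full current config of the instance that wrote it, the last writer wins in a setup like this.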

Does Caddy have some logic to ensure there is no data loss here?

This topic was automatically closed after 30 days. New replies are no longer allowed.