Website Migration Best Practice with Caddy

Hi everybody,

I would like to know how you manage website migrations with Caddy. I am not sure if I am doing this wrong, but the automatic HTTPS is not very helpful during development or migration.

Here is my current workflow:

I had to deploy a PHP app. For this I created a Docker image with a Caddyfile in it. The image gets deployed as a container in Kubernetes behind a LoadBalancer Service (this way Caddy can manage HTTPS itself). I see a few problems and questions here:

  1. Letting Caddy manage HTTPS itself in Kubernetes is not the Kubernetes way. The better solution would be to just listen on port 80 and let the ingress do its work.
  2. If Caddy manages HTTPS itself, what happens when I spin up multiple replicas? I guess in that case the Let’s Encrypt API would get flooded with requests and my cluster would get banned, right? 2-3 replicas might not be an issue, but what about 200-300?
  3. If I want to switch from development to production, I need to redeploy the whole image, because right now I am copying the Caddyfile manually into the image. The better solution would be to mount the Caddyfile via a ConfigMap in Kubernetes (roughly as sketched below), but how would I trigger a reload? As far as I know, Caddy doesn’t react to file changes (like Traefik does), so it would need a cronjob or something that triggers a reload every x minutes? (Sounds stupid.)
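
To illustrate point 3, this is roughly what I have in mind. The names (`caddy-config`, `my-registry/my-php-caddy`) are just placeholders and I haven’t tested this:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: caddy-config            # placeholder name
data:
  Caddyfile: |
    :80 {
        root * /srv/public
        php_fastcgi 127.0.0.1:9000
        file_server
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caddy
  template:
    metadata:
      labels:
        app: caddy
    spec:
      containers:
        - name: caddy
          image: my-registry/my-php-caddy:latest   # placeholder image
          volumeMounts:
            - name: caddyfile
              mountPath: /etc/caddy                # Caddyfile comes from the ConfigMap
      volumes:
        - name: caddyfile
          configMap:
            name: caddy-config
```

Switching environments would then only mean changing the ConfigMap, which brings me back to the reload question.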

The next question is not really Kubernetes-related:

How do you properly work with Caddy in terms of development and migration? My solution was to set the site address to :80 in the Caddyfile and work with that, and later for production I would just remove the :80, use the real hostnames, and start serving HTTPS. I don’t know if this is the recommended way to work with it. If you know a better one, feel free to share it.
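
To make that concrete, my Caddyfile looks roughly like this (root and upstream are just examples):

```
# Development: plain HTTP on port 80, no certificates involved
:80 {
    root * /srv/public
    php_fastcgi 127.0.0.1:9000
    file_server
}

# Production: the same block, but with the real hostname instead of :80,
# so Caddy serves HTTPS automatically:
# example.com {
#     ...same directives...
# }
```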

Check out the Caddy ingress project. I don’t use k8s so I don’t have much else to say there.

If they all share the same storage backend, they can work in a cluster. They will use the storage as a distributed lock for certificate maintenance. You shouldn’t have multiple instances working independently, for sure. Just configure them with the same volume for /data and you should be fine.
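
For example, with Docker Compose the idea is roughly this (image tag and volume name are just examples); in Kubernetes the equivalent would be a ReadWriteMany volume mounted at /data by every replica:

```
services:
  caddy1:
    image: caddy:2
    volumes:
      - caddy_data:/data   # Caddy's default certificate/ACME storage location
  caddy2:
    image: caddy:2
    volumes:
      - caddy_data:/data   # same volume -> shared storage and shared lock

volumes:
  caddy_data:
```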

You can run the caddy reload command. The Caddy page on Docker Hub shows how to do this.
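
For example (container and deployment names are placeholders):

```
# From outside the container; -w /etc/caddy makes caddy find the Caddyfile there
docker exec -w /etc/caddy my-caddy caddy reload

# Or point at the config explicitly
docker exec my-caddy caddy reload --config /etc/caddy/Caddyfile

# Same idea in Kubernetes (with multiple replicas you would need to do this per pod)
kubectl exec deploy/caddy -- caddy reload --config /etc/caddy/Caddyfile
```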

You could also use the --watch argument to caddy run but this is not recommended in production because it will cause Caddy to check for file changes every few seconds. Better to just trigger reloads when you need to.

I use an env var like {$CADDY_HOST_LABEL} instead of :80, then configure my .env beside my docker-compose.yml to define that. In prod, I then set the env to the real domain, and in dev it’s just :80.
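
Roughly like this (paths, upstream, and domain are just examples):

```
# Caddyfile — the site address comes from an environment variable
{$CADDY_HOST_LABEL} {
    root * /srv/public
    php_fastcgi app:9000
    file_server
}
```

```
# .env next to docker-compose.yml
# dev:
CADDY_HOST_LABEL=:80
# prod would instead be:
# CADDY_HOST_LABEL=example.com
```

```
# docker-compose.yml (excerpt): pass the variable into the container
services:
  caddy:
    image: caddy:2
    env_file: .env
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    ports:
      - "80:80"
      - "443:443"
```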
