Caddyfile, adding more sites and Let's Encrypt

Hi there,

I’m new to Caddy and currently using Caddy v2 beta12.

I’m planning to migrate some personal sites from Apache/nginx to Caddy, site by site.

So far I’ve been able to use a Caddyfile to configure my different sites. Some use the file server, some the reverse proxy, etc. So far so good, trial and error… till I got bitten by the Let’s Encrypt rate limiter. I’ve got some sites working now, but I’m afraid that when I change my Caddyfile to add more, if they are not properly configured and I need to reconfigure and restart Caddy many times, I’ll run into the LE rate limiter again.

How do you usually work in order to avoid this?



Hi, welcome! And thanks for trying Caddy 2 while it’s still in beta!

You can use Let’s Encrypt’s staging endpoint while you test your setup. Ad-hoc instructions (temporarily, while I finish the new docs site) are here:
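As a quick sketch (assuming the `ca` suboption of the Caddyfile `tls` directive, as it exists in the current v2 betas), a site can be pointed at the staging CA like this; `example.com` is a placeholder:

```
example.com {
	# Use Let's Encrypt's staging endpoint while testing, so failed
	# attempts don't count against the production rate limits.
	tls {
		ca https://acme-staging-v02.api.letsencrypt.org/directory
	}
	file_server
}
```

Once the site validates successfully, remove the `tls` block (or just the `ca` line) so Caddy goes back to the production endpoint and gets a trusted certificate.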


Hey Matt,

Thank you. I did that, as you suggested on Twitter, so I have a couple of sites working with real certificates. Now, if I add more sites, I’ll use the staging LE endpoint just for those new ones, but I’m afraid that for the others that use the real endpoint, if I restart Caddy too many times I’ll hit the rate limit again.

Sorry for my poor explanation and thanks again.

What is causing validations to fail? Fix that, then once it’s fixed, don’t use the staging endpoint anymore. Much easier.

Usually it’s small things, like a missing subdomain or a wrong URL path on my side. I can use the staging endpoint on those sites while I test/refactor, but since that involves restarting Caddy (because I’m using a Caddyfile), I wonder whether that could affect the other domains I already have working (which use the production LE endpoint).

Anyway, thank you again. If I run into any issues I’ll try to report.

Can you clarify some more? Missing a subdomain in the DNS records, or in your config? What do you mean by “wrong URL path”?

Also, this is the wrong way to use Caddy: don’t stop the server (that will take your sites offline). Instead, do a graceful config reload with signal USR1; if there are any errors, it will roll back to the previous config without downtime. Although, in recent v2 betas, cert management is done in the background, so the config won’t roll back if a cert issuance fails; it will just keep retrying, but it’s slower to get it wrong than to get it right.

Yeah, it’s me forgetting to set things up properly when migrating from Apache.

Understood, thanks. Unfortunately, when doing a `pkill -USR1 caddy`, what I see in the logs is `2020/01/10 13:02:18.057 INFO not implemented {"signal": "SIGUSR1"}`.

Any hints?

Ah, sorry, I got this thread mixed up with another and forgot you were using Caddy 2.

Caddy 2 doesn’t use signals for config reloads; instead, use `caddy reload` or the admin API. Docs here:
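In short, something like this (the Caddyfile path below is an example; adjust it to your setup):

```
# Graceful reload from the shell: Caddy keeps serving traffic while
# the new config is applied, and reports errors if the config is bad.
caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile

# Or POST a JSON config to the admin API (default port 2019):
curl localhost:2019/load \
	-X POST \
	-H "Content-Type: application/json" \
	-d @caddy.json
```

Either way, the running sites stay up during the reload, so re-testing a new site won’t interrupt the domains that already have production certificates.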