If I launch a site with many domains to be served in the Caddyfile for the first time, am I at a risk of overrunning the acme api limits?
To test or experiment with your Caddy configuration, make sure you change the ACME endpoint to a staging or development URL; otherwise you are likely to hit rate limits, which can block your access to HTTPS for up to a week, depending on which limit you hit. So always use the staging endpoint when testing.
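For example, a minimal Caddyfile sketch pointing at Let's Encrypt's published staging CA via the `acme_ca` global option might look like this (the site address and response are just placeholders):

```caddyfile
{
	# Use Let's Encrypt's staging CA while testing.
	# Note: staging certificates are not trusted by browsers.
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

example.com {
	respond "Hello from a staging-certified site"
}
```

Remove the `acme_ca` line (or point it at the production directory) once you've verified the configuration works.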
Caddy does have a number of safeguards to help avoid this. For example, it will switch challenge types and retry, then back off and retry over the course of a few weeks. During retries, it will switch to Let’s Encrypt’s staging endpoint automatically so that retries won’t count against your production rate limits. However, this logic isn’t a perfect guard: once it succeeds in staging, it will try again in production. So if the state of the two CAs differs (for example, internal rate limit errors as opposed to validation errors, which are global), this protection won’t help you much.
Always make sure your domains are properly configured before attempting to use (production) HTTPS.
I should have been more specific with my question. I always put acme_ca [staging url] in the main block of my Caddyfile. I’m a longtime greenlock.js user, so some of those processes come from my experience with that.
What I was wondering about was what would happen if I turn on a fresh server after testing it, one that needs certificates for, say, 20 or 30 domains. Could that present a problem?
I’m fairly ignorant of how the process looks as far as certmagic interacting with ACME.
The first answer was still helpful though.
It depends: if they are subdomains of the same registered domain, yes, since Let’s Encrypt enforces a rate limit on certificates per registered (eTLD+1) domain. In that case, use a single wildcard cert instead.
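A wildcard setup might look like the sketch below. Wildcard certificates require the ACME DNS-01 challenge, so this assumes a DNS provider plugin is compiled into your Caddy build and a credential is available; the `cloudflare` module and the subdomain names here are just illustrative:

```caddyfile
*.example.com {
	tls {
		# DNS-01 challenge: requires a DNS provider plugin
		# (cloudflare is only an example) and an API token.
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}

	# Route individual subdomains under the one wildcard cert.
	@app host app.example.com
	handle @app {
		respond "app"
	}

	handle {
		respond "default"
	}
}
```

This way all the subdomains are covered by a single certificate, so only one issuance counts against the per-domain rate limit.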
I’m sure there are other things that could go wrong.
But Caddy has built-in throttling, so in theory you can give it a million domains and it will get certificates for them gradually. Caddy’s job isn’t to enforce Let’s Encrypt’s rate limits (that’s impossible, since we can’t access their internal state); it just plays nice: if there are any errors, including rate limiting, it will back off and retry later.
Still, always test in staging first if you’re not sure.