Regenerate ~1000 certs without hitting rate limit

1. The problem I’m having:

I run a domain hosting platform with thousands of hosted domains. When we initially started the server with Caddy, we did not enable a persistence layer for the certificates. Now, as the number of certs grows, we occasionally run into trouble with cert renewal / authorization, etc.

I am planning to add a persistence layer and run a new container. There is no straightforward way to attach persistent storage to an already-running container, since we are using a managed service (AWS ECS on Fargate). After we reconfigure, all our previous certs will sink with the old container/task. I have tried several ways to get the previous certs out of the existing ECS container, but it's simply not possible.
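For reference, the usual way to give a new Fargate task persistent storage is an EFS volume in the task definition. A sketch of the relevant fragment (the file system ID and names here are placeholders, not from the original post), mounting the volume at /data, where the official Caddy image keeps its certificates:

```json
{
  "volumes": [
    {
      "name": "caddy-data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-0123456789abcdef0"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "caddy",
      "mountPoints": [
        {
          "sourceVolume": "caddy-data",
          "containerPath": "/data"
        }
      ]
    }
  ]
}
```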

Now I am thinking of running the Caddy server with the new configuration and letting Caddy regenerate all the certs. The problem is that I will hit the Let's Encrypt certificate issuance rate limit (300 certs / account / 3 hrs) almost instantly, because Caddy will try to obtain a massive number of certificates at once. After 300 successful certs, the rest of my domains will not get a cert for the next 3 hours, leaving a large number of domains inaccessible for a couple of hours!

Is there any way I can run the Caddy server and regenerate the certificates without downtime for any domain?

2. Error messages and/or full log output:

    {
      "level": "error",
      "ts": 1690368347.5156703,
      "logger": "tls.obtain",
      "msg": "could not get certificate from issuer",
      "identifier": "www.#####.###",
      "issuer": "",
      "error": "HTTP 429 urn:ietf:params:acme:error:rateLimited - Error creating new order :: too many failed authorizations recently: see"
    }

3. Caddy version:

v2.4.5 (from the caddy:2.4.5 Docker image)

4. How I installed and ran Caddy:

a. System environment:

AWS Fargate, Linux, ECS

b. Command:

It's a Fargate container, so I don't need to run any command; I just provide my Docker image built from caddy:2.4.5.

c. Service/unit/compose file:

FROM caddy:2.4.5

COPY Caddyfile /etc/caddy/Caddyfile

ENV SslValidation ${SslValidation}
ENV ViewerEndpoint ${ViewerEndpoint}
ENV SitemapEndpoint ${SitemapEndpoint}
ENV DashboardEndpoint ${DashboardEndpoint}
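As an aside, `ENV SslValidation ${SslValidation}` only expands during `docker build` if a matching build argument is declared; without `ARG`, those ENV values are empty in the image. A minimal sketch of the build-time variant (ECS can also inject these as runtime environment variables in the task definition, which overrides ENV and makes these lines unnecessary):

```Dockerfile
FROM caddy:2.4.5

COPY Caddyfile /etc/caddy/Caddyfile

# Declare build arguments so ${...} expands at build time
ARG SslValidation
ARG ViewerEndpoint
ENV SslValidation=${SslValidation}
ENV ViewerEndpoint=${ViewerEndpoint}
```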


d. My complete Caddy config:

{
  on_demand_tls {
    ask {env.SslValidation}
  }
}

:443 {
  header Server "Server_name"
  header -x-powered-by

  @trailing_slash {
    path_regexp no_slash (.+)\/$
  }
  @domain {
    header_regexp domain host ^www\.(.+)$
  }

  redir @domain https://{http.regexp.domain.1}{uri}
  redir @trailing_slash {re.no_slash.1} 308

  tls {
    on_demand
  }

  handle_path /dashboard {
    reverse_proxy {env.DashboardEndpoint}
  }

  handle_path /dashboard/* {
    reverse_proxy {env.DashboardEndpoint}
  }

  reverse_proxy {env.ViewerEndpoint}
}

5. Links to relevant resources:


If it’s truly impossible to get the files out of your storage device, then you’ll need to make new ones.

The only ways to do that without hitting a Let's Encrypt rate limit are to use another CA (Caddy will fall back to ZeroSSL if it tries LE first and fails) or to apply for a rate limit exemption (they have a form for this; I don't have the link handy right now, but you could search for it).

Since you’re using on-demand TLS, this assumes that all the domains will be accessed right away. For busy sites, that’s certainly a reasonable assumption. For small sites that get occasional traffic, you may not see a request come in for several minutes or even hours. So that actually might be in your favor, even if only slightly.
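For context on how on-demand TLS spreads issuance out: before each on-demand issuance, Caddy sends `GET <ask-url>?domain=<name>` to the configured `ask` endpoint and only proceeds on an HTTP 200 response. A minimal sketch of such an endpoint in Python (the allow-list contents and port here are hypothetical):

```python
# Minimal on-demand TLS "ask" endpoint sketch. Caddy requests
# GET /?domain=<name>; HTTP 200 means "issue a cert", anything else denies.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

ALLOWED_DOMAINS = {"example.com", "www.example.com"}  # hypothetical allow-list

def is_allowed(path: str) -> bool:
    """Extract ?domain= from the request path and check the allow-list."""
    query = parse_qs(urlparse(path).query)
    domain = query.get("domain", [""])[0]
    return domain in ALLOWED_DOMAINS

class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 200 allows issuance; 404 tells Caddy not to request a cert
        self.send_response(200 if is_allowed(self.path) else 404)
        self.end_headers()

# To run: HTTPServer(("0.0.0.0", 5555), AskHandler).serve_forever()
```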

I’m working on solutions to make Caddy handle CA rate limits even better, but for now the best thing is to not lose the certificates you already have… (this is a concern I have brought up with them before in discussion forums, but they’re pretty rigid on those rate limits, though sometimes they have leniency for renewals which is good).

I’m not familiar with Fargate, but it may be worth a few more minutes to triple-check and verify that there’s no way to access its files…

You can also let Caddy use ZeroSSL, but their issuance backend may be a little slower overall (they’re constrained by upstream software sometimes).

Thank you so much @matt for your well-detailed reply.

I will investigate more on Fargate to retrieve existing certs. If I fail to get them, do you think the following assumptions of mine are correct:

  1. Run a new Caddy instance with an email in the Caddyfile global config
  2. When I start getting rate limit errors, switch the email so that Let’s Encrypt thinks I am requesting certs for a different account
  3. After the certs issued under the first email expire, they will be renewed successfully without any issues, even if the email in the global config is something else
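Step 1 would look roughly like this in the Caddyfile global options block (the email address here is a placeholder; the `ask` line mirrors the config shown earlier):

```Caddyfile
{
  email ops@example.com
  on_demand_tls {
    ask {env.SslValidation}
  }
}
```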

@matt We can bypass the Let’s Encrypt limits using ZeroSSL, but how can we bypass Caddy’s own limit when issuing thousands of certificates within a short time frame?

You could do this, but Caddy will also just switch to ZeroSSL automatically when it gets an error from Let’s Encrypt.

They’ll be renewed, but I can’t guarantee by which CA; if you get more than 300 certs in 3 hours, Caddy will try to renew them all within a 3 hour window, and you may get errors from Let’s Encrypt again since I don’t think that particular RL exempts renewals. In that case, it will switch to ZeroSSL again.

I’m not sure what you’re asking? (Caddy’s internal throttle lets you get 1 cert per second. That’s a lot.)

See this Make internal rate limiting configurable · Issue #143 · caddyserver/certmagic · GitHub …if there is a limit of 20/minute, it will take approximately 4 hrs to process 5k certificates. Can we do something to override it using configuration in the Caddyfile?

You can get 60/min, not 20; and I don’t think you’d want to go faster because that’d add a lot of pressure to CAs and you’d hit relevant rate limits even faster.
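To put rough numbers on that: at 60 certs/minute, the client-side throttle is not the bottleneck; the CA's 300-per-3-hour budget is. A quick back-of-the-envelope in Python (the 5000-cert figure comes from the thread; the formulas are a simplification that ignores failures and retries):

```python
import math

def hours_at_throttle(n_certs: int, per_minute: int = 60) -> float:
    """Hours to push n_certs through the client-side throttle alone."""
    return n_certs / per_minute / 60

def hours_at_ca_budget(n_certs: int, budget: int = 300, window_h: float = 3.0) -> float:
    """Hours of waiting if each 3-hour window admits at most `budget` certs
    (the final batch starts at the beginning of its window)."""
    return (math.ceil(n_certs / budget) - 1) * window_h

print(hours_at_throttle(5000))   # ~1.4 h: throttle is not the bottleneck
print(hours_at_ca_budget(5000))  # 48 h: the CA budget is
```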

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.