Disable auto-reload of config via API


Following up on this discussion https://twitter.com/gorghoa/status/1576945788936015872 :wink:

My use case:

We run a site farm with up to a thousand hosted websites on different domains. Sites are frequently published (serving static files specific to each site), unpublished (serving static files from a global default folder), etc. In both cases, SSL is handled by Let's Encrypt.

When publishing/unpublishing only one site, it all goes well.

But when doing bulk actions on sites, things may become a bit more complex.

Consider doing things sequentially: bulk publishing 20 sites would result in 20 calls to Caddy's API endpoint.

So 20 API calls lead to 20 config reloads, hence:

  • 20 × websocket connection losses (websockets auto-reconnect, but still, loading spinners everywhere :D)
  • 20 × calls to Let's Encrypt
  • Eventually, Caddy will time out if too many API calls are made
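
To make the failure mode concrete, here is a minimal sketch of that sequential publishing loop against Caddy's admin API (Python stdlib only; the server name srv0, the domain list, and the route shape are invented for illustration, not taken from the actual setup):

```python
import json
from urllib import request

CADDY_ADMIN = "http://localhost:2019"  # Caddy's default admin address
sites = [f"site{i}.example.com" for i in range(20)]  # hypothetical domains

for domain in sites:
    # A minimal file_server route for one published site.
    route = {
        "match": [{"host": [domain]}],
        "handle": [{"handler": "file_server", "root": f"/srv/sites/{domain}"}],
    }
    req = request.Request(
        CADDY_ADMIN + "/config/apps/http/servers/srv0/routes",
        data=json.dumps(route).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",  # POST to an array appends an element
    )
    # Each request is one config change, i.e. one full reload
    # (and one websocket drop), 20 times in a row:
    # request.urlopen(req)
```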

What I would have loved is:

1st. Call a Caddy API endpoint to disable auto-reload (maybe with a config lock to prevent concurrent updates?) :new:
2nd. Do all my config changes sequentially as before
3rd. Call a Caddy API endpoint to commit the changes and re-enable config auto-reload :new:

(Yes, think like a DB transaction :wink: )
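
Sketched as pseudocode (every endpoint name below is invented for illustration; none of them exist in Caddy today):

```
POST  /config-transaction/begin     # lock the config, suspend reloads
PATCH /config/...                   # any number of scoped changes, queued
PATCH /config/...
POST  /config-transaction/commit    # apply everything in a single reload
```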

I know I can just pull the whole JSON config, do my changes, and then repush the entire config, but it has a “too much power” and error-prone feeling :sweat_smile:

Anyhow, thanks for Caddy: its admin API, automatic Let's Encrypt, etc. It's a very handy web server!!!


Note that @francislavoie kindly answered for the current state of things by pointing me to this part of the documentation: API — Caddy Documentation

Thanks @francislavoie :slight_smile:


Config reloads by themselves are quite lightweight. They do tidy up to avoid leaking resources, but this can often be minimized with the right config.

I don’t really know a good way to solve this yet. The only alternative I know of is to leave the WS dangling but that of course leaks resources. What if the new config doesn’t have that WS endpoint, for example? What if the way the connection is established has changed in the new config? A config change signals that you don’t want clients to continue using the old config, so having them reconnect is the only way I know of to enforce that.

Outstanding/in-progress ACME transactions will be canceled to avoid leaking resources. If you’re seeing a lot of ACME transactions every time you load config, you might consider on-demand TLS instead.
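
For reference, on-demand TLS is enabled in the JSON config roughly like this (a minimal sketch; the ask URL is a placeholder for your own authorization endpoint, which Caddy queries before issuing a certificate for an unknown domain):

```json
{
  "apps": {
    "tls": {
      "automation": {
        "on_demand": {
          "ask": "http://localhost:5555/check-domain"
        },
        "policies": [
          { "on_demand": true }
        ]
      }
    }
  }
}
```

With this in place, certificates are obtained lazily at handshake time instead of on every config change, so publishing a site no longer implies an immediate ACME transaction.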

This should not be the case. How can we reproduce this?

If the changes only happen within a specific scope (/config/foo/bar/... path), you can scope your changes to only that part of the config to limit inadvertent harm done. I don’t really understand how doing a batch change on your end is “too much power” given that the whole point of the API is to give you the control/power over your server.
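
For example, a change scoped to a single route only replaces that one object (a sketch; the path segments and the route index are assumptions about the config layout, not a prescription):

```python
import json
from urllib import request

CADDY_ADMIN = "http://localhost:2019"  # Caddy's default admin address

def config_url(*path):
    """Build a URL scoped to one subtree of the running config."""
    return CADDY_ADMIN + "/config/" + "/".join(path)

def patch(path_parts, payload):
    """PATCH replaces only the object at the given path; the rest of
    the config is untouched (though Caddy still reloads to apply it)."""
    return request.Request(
        config_url(*path_parts),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )

# Replace just route 3 of server "srv0" -- nothing outside that
# subtree can be harmed by a typo in the payload.
req = patch(
    ["apps", "http", "servers", "srv0", "routes", "3"],
    {"match": [{"host": ["site3.example.com"]}],
     "handle": [{"handler": "file_server", "root": "/srv/default"}]},
)
# request.urlopen(req)  # send it when a Caddy instance is running
```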


Well, it seems that everything for my use case is actually already possible!

On-demand TLS seems a perfect fit (and may even remove my need for frequent config changes :face_with_peeking_eye:). The only thing is that my admin site is now a critical dependency for Caddy to run consistently over time (since it has to answer the ask requests of the TLS automation).

Regarding websocket connection losses, I have read other similar topics on it. I understand how it's a complex thing, and it's absolutely not a blocker for me, just a small annoyance for my service's admin users.

Regarding timeouts, I tried publishing a few hundred sites simultaneously. But all in all, the limiting factor quickly became Let's Encrypt's rate limiting, of course. It was a few weeks ago, so maybe the timeouts weren't on Caddy's side but from my HTTP client, now that you mention it. I can't remember exactly :confused:


This topic was automatically closed after 60 days. New replies are no longer allowed.