I am currently using Caddy 2 and its default behavior of storing the in-memory config at the
/home/user/.config/caddy/autosave.json file location.
I am also using
rsync to back up copies of that file as it changes in response to config-changing requests sent to the Admin API. Upon each write event on the file (detected via inotify), this process copies a timestamped snapshot of the autosave file to a shared NFS directory accessible to the standby Caddy machine (which does not receive live changes through its Admin API).
My question is theoretical, as I am not having a problem: is the
autosave.json file ever read back by the Caddy process, so that it can update its in-memory config to match the file? In the event of a failover, I need to make the backup Caddy's in-memory config match the original's.
I would like to take the most recent timestamped copy of the autosave file on my shared NFS drive and load it directly into the in-memory state of the backup Caddy process, without folding those changes into my static Caddyfile (which is intended not to change).
I’ve read quite a bit of the docs and forum threads, and this does not seem to be the intended usage by any means. However, I have a rather complex project I’m trying to fit Caddy into: a high-availability setup that uses both a static Caddyfile and dynamic domain matchers injected from a pool of workers.
Thank you for your time and assistance. Caddy is a great tool, and I’ve enjoyed working with it.
I have a hunch that I might be overthinking this, and that I can simply POST the backup file to the Admin API’s /load endpoint to do the trick…
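If that hunch is right, the failover step would be something like the sketch below. The NFS path and admin address are assumptions from my setup; the `POST /load` endpoint itself is the standard Caddy Admin API call for replacing the active config:

```shell
#!/bin/sh
# Hypothetical paths -- adjust to your own setup.
NFS_DIR="${NFS_DIR:-/mnt/nfs/caddy-backups}"
CADDY_ADMIN="${CADDY_ADMIN:-http://localhost:2019}"

# Newest timestamped snapshot; the names sort chronologically,
# so a lexical sort is enough.
latest_snapshot() {
    ls -1 "$NFS_DIR"/autosave-*.json 2>/dev/null | sort | tail -n 1
}

# Run with "load" to push the newest snapshot into the standby
# Caddy's Admin API, replacing its in-memory config.
if [ "${1:-}" = "load" ]; then
    curl -s -X POST "$CADDY_ADMIN/load" \
         -H "Content-Type: application/json" \
         --data-binary @"$(latest_snapshot)"
fi
```

Since the snapshot is the standby’s own autosave format (plain JSON config), this would leave the static Caddyfile untouched, which is the behavior I’m after.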