I’ve got Caddy in a clustered environment and could use some advice on offloading storage.
My Environment:
- The latest Caddy build
- Inside Docker
- On an Ubuntu virtual machine or three
- Behind a load balancer
- On Digital Ocean (though this can change)
Requirements/constraints:
- I need to accommodate a few thousand domains initially, and more later, if all goes well.
- I’m mostly a front-end dev, so keeping things simple and offloading as much as possible would be nice.
I’ve done some searching, but am a little fuzzy on which method might be considered the best practice. I’d hate to settle on something that works for now, only to have avoidable headaches later.
As for the options I’ve looked at:
The Redis plugin seems well-liked, but its documentation mentions it doesn’t work with clustered DB instances, which I believe is how Digital Ocean’s managed DB service (the one I’d be using) is set up.
I’m also nervous about storing important things like certs in memory. Wouldn’t a file system be more appropriate, especially for a large number of certs?
The Consul plugin is popular, but I’m completely unfamiliar with Consul and think it may be more practical to stick with methods more in line with my existing architecture/knowledge.
Network drives are also a possibility, but I’ve read on this forum that they can be problematic.
An S3 plugin is the option most familiar to me, since I’ve used S3 a lot, and I’ve already got it working pretty well for my 2 or 3 test domains.
I’d love for S3 to be the ticket, but I’m worried that I might run into issues as the app scales up to managing thousands of certificates. Will it slow things down?
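For context, my current test setup looks roughly like this. Treat it as a sketch rather than a reference: the sub-directives inside `storage s3 { ... }` vary between the community S3 plugins (check the README of the one you install), and the bucket, prefix, and credential names here are placeholders.

```
{
	# Global options block: tell Caddy to keep certificates in S3
	# instead of the local file system. Sub-directive names below are
	# illustrative and depend on the S3 storage plugin in use.
	storage s3 {
		host s3.amazonaws.com
		bucket my-caddy-certs
		prefix caddy
		access_id {env.S3_ACCESS_ID}
		secret_key {env.S3_SECRET_KEY}
	}
}
```

The `{env.…}` placeholders keep the credentials out of the Caddyfile itself, which is handy when the same config is baked into a Docker image across multiple VMs.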
Would another option be better, like DynamoDB? If I must introduce a new system for storage, I’d prefer it to be something managed.
Any help or direction would be appreciated, thanks so much!