1. Caddy version (`caddy version`):
Latest stable
2. How I run Caddy:
Would run on Ubuntu servers via systemd.
a. System environment:
systemd, Ubuntu 20.04.
b. Command:
No command set up yet
c. Service/unit/compose file:
No config yet
d. My complete Caddyfile or JSON config:
No config yet
3. The problem I’m having:
I’m not having a specific problem; I’m trying to find out whether something is possible. I’m looking at rolling out Caddy to replace Nginx servers and a custom Let’s Encrypt wildcard setup, where a control server manages the certificate and then pushes it out to the other servers. I use the DNS challenge via Cloudflare to obtain the wildcard certificate.
I’d like to replace this setup with Caddy (still using a wildcard cert!) and am looking at using Consul or Redis to store the certificates. I’m open to other solutions as well if that matters, but I cannot use a shared filesystem. I’m aware that Caddy is cluster-aware in this situation, so it will only renew the certificate once, which is great.
What I’m wondering is whether it’s possible to decrease my exposure footprint by giving Cloudflare credentials to only a single Caddy instance and having it handle renewal of the certificate, whilst still letting the others in the “fleet” access and use the certificate. My use case is that I don’t trust some of the servers as much and would really prefer not to have to distribute DNS API keys to every server that runs Caddy.
Basically: one server renews and manages the certificate, and the others just use it to serve traffic. Does that make sense?
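To make the idea concrete, here’s a rough sketch of the Caddyfile I have in mind for the control node (assuming a third-party Redis storage plugin and the Cloudflare DNS plugin are compiled in; `example.com` and the env var name are placeholders, and the storage plugin’s sub-options would depend on whichever module I pick):

```
# Control node: the ONLY instance holding Cloudflare credentials.
{
	# Shared certificate storage (requires a storage plugin,
	# e.g. a Redis or Consul module; options vary by plugin).
	storage redis
}

*.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
}
```

The serving nodes would point at the same `storage` backend but, ideally, have no `dns` credentials at all and simply read the certificate the control node maintains. Whether Caddy supports that split cleanly is exactly my question.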
4. Error messages and/or full log output:
N/A
5. What I already tried:
Nothing yet!