No, the wildcard subdirective (that is, the one that tells Caddy to treat a site address that is a fully qualified domain name as a wildcard, for cert acquisition purposes) is not present in v2.
Currently you’d need to configure a site address that is actually a wildcard, e.g. *.example.com.
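In a v2 Caddyfile that would look something like this (example.com is a placeholder; note that public CAs like Let's Encrypt only issue wildcard certs via the DNS challenge, so a DNS provider would also need to be configured):

```
*.example.com {
	# One cert for *.example.com covers every first-level subdomain.
	respond "Hello from a wildcard site"
}
```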
@Whitestrake Actually, I'd prefer not to use the wildcard; the subdirective would allow me to have sub/sub-domains.
One of the reasons I've used a wildcard is that I have FQDN subdomains I use only within my private LANs, with corresponding DNS records served only locally within those LANs. I need Caddy to serve HTTPS for those without depending on an internet connection (I have IoT stuff that must always work, and for security I don't want public DNS records for LAN-only subdomains). But I also have publicly accessible subdomains serviced via Route53. Using a wildcard (for at least the local subdomains) allowed me to do that.
I have a cron script that checks/renews the wildcard cert and distributes it to my Caddy servers, which means I can go something like 90 days without an internet connection, which is of course fine.
So either I need an alternative to using a wildcard that meets my needs, or I'll just have to wait until v2 supports the wildcard subdirective :(.
Even if I am forced to abandon the wildcard subdirective, I would need to get the ACME Route53 DNS challenge working, and that is currently just a PR that I am trying out: V2 modules and cross compile
Maybe I don't need to use a wildcard if:
1. The cert Caddy gets for each subdomain is good for some time. Is it about 90 days, like the wildcard? (i.e. Caddy still works if it can't access the ACME server while the cert is still valid.)
2. A public DNS record (i.e. at Route53) is not required for Caddy/ACME to successfully get a cert for a subdomain, and that subdomain does not have to be reachable from the public internet for the cert to be issued.
I see this post: V2 Get it working with internal CA in systemd?
Maybe I need to go the way of an internal CA? In that post @matt talks about an internal ACME CA server for 2.1. Maybe that's the solution? But can that approach live together with the ACME certs for public subdomains on the same FQDN?
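As a rough sketch of how the two could coexist in one Caddyfile (the host names and backend address are hypothetical; tls internal uses Caddy's locally-trusted internal CA, so LAN clients would have to trust its root certificate):

```
iot.example.com {
	# LAN-only site: cert issued by Caddy's internal CA,
	# no public DNS record or internet access required.
	tls internal
	reverse_proxy 192.168.1.50:8080
}

www.example.com {
	# Public site: ordinary publicly-trusted ACME certificate.
	respond "public site"
}
```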
If you use the HTTP or TLS-ALPN challenges (on by default), then your server needs to be reachable to get the cert. If using the DNS challenge, then your server doesn't need to be reachable (pretty sure…), but it does need to be able to set DNS records via the Route53 API.
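Configured per site, that could look roughly like this (assuming a Caddy build that includes the Route53 DNS provider plugin, with AWS credentials supplied via the usual environment variables; the host name and upstream are placeholders):

```
internal.example.com {
	tls {
		# DNS challenge: Caddy publishes a TXT record via the
		# Route53 API, so this host never needs to be reachable
		# from the public internet.
		dns route53
	}
	reverse_proxy 10.0.0.5:8123
}
```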
@francislavoie Thanks for the info. I guess I'll make the effort to confirm your "pretty sure".
One other issue I see with using the Route53 API: I have two LANs that each run an instance of Caddy, and both LANs internally use the same subdomains (each has its own Caddy stanza for a given subdomain). So if they both request a cert for that same subdomain, will that be an issue? It seems it would be OK if your "pretty sure" is a definite sure. ACME wouldn't know which one asked for the cert and, I assume, will hand out a copy of the already-valid cert if the "second" Caddy instance asks for it?
Alright, so I think there are a couple of misconceptions you might have that I can clear up:
Let’s Encrypt can’t ever give the same certificate to different servers, because a certificate is in essence an assertion that the private key owned by the server is trusted (the server’s public key is essentially what’s signed by the CA).
Your two Caddy servers won't share private keys, UNLESS they share the same storage (see Automatic HTTPS — Caddy Documentation). If you can make your Caddy servers in separate LANs share the same storage (with something like GlusterFS or Ceph, etc.), then they'll use the storage to set up mutexes to avoid stepping on each other's toes.
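For example, pointing both instances at the same network-mounted path with the global storage option (/mnt/shared/caddy is a hypothetical GlusterFS/Ceph mount visible to both servers):

```
{
	# Both instances must see the same files here; Caddy uses
	# this shared storage for certs, keys, and coordination locks.
	storage file_system /mnt/shared/caddy
}

sub.example.com {
	respond "served by whichever instance you reach"
}
```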
Let's Encrypt does have pretty generous rate limits, so regardless, I'm pretty sure it wouldn't be an issue if only two different servers try to get certificates issued for the same domain. I don't remember exactly how the DNS records used during the DNS challenge look, but I don't think they should conflict, because they contain hashes or nonces.
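For what it's worth, the DNS challenge (DNS-01 in RFC 8555) publishes a TXT record at _acme-challenge.<domain> whose value is a digest of a per-order key authorization, so two concurrent validations just mean two TXT records side by side (the values below are made-up placeholders):

```
_acme-challenge.sub.example.com.  120  IN  TXT  "placeholder-digest-for-server-A"
_acme-challenge.sub.example.com.  120  IN  TXT  "placeholder-digest-for-server-B"
```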