In a docker container launched by docker-compose with some 60 other services.
a. System environment:
Linux (on AWS)
b. Command:
n/a
c. Service/unit/compose file:
very large and not relevant
d. My complete Caddyfile or JSON config:
3. The problem I’m having:
I’ve been working to get Caddy running in our “sandbox” environment. This is all launched via docker-compose and has about 60 containers in it. We currently use Nginx with self-signed certs. (We expose about 35 or so endpoints that need certs.) Each developer gets their own sandbox.
The main reason I want to switch to Caddy is to stop using self-signed certs and use real certs.
I got everything working with Caddy using the existing self-signed certs. I have now tried to switch to real certs, but I quickly run into this issue:
{"level":"error","ts":1610611569.4617696,"logger":"tls","msg":"job failed","error":"fake-integrator-web.rayj2.dev.tilia-inc.com: obtaining certificate: [fake-integrator-web.rayj2.dev.tilia-inc.com] Obtain: [fake-integrator-web.rayj2.dev.tilia-inc.com] finalizing order https://acme-v02.api.letsencrypt.org/acme/order/97676709/7304942037: request to https://acme-v02.api.letsencrypt.org/acme/finalize/97676709/7304942037 failed after 1 attempts: HTTP 429 urn:ietf:params:acme:error:rateLimited - Error finalizing order :: too many certificates already issued for: tilia-inc.com: see https://letsencrypt.org/docs/rate-limits/ (ca=https://acme-v02.api.letsencrypt.org/directory)"}
Obviously, requesting so many certs at once for all these subdomains is a problem. My question is: what can I do about this?
All the domains are of the form <service_name>.<developer_username>.company.com
So is there a way to instead ask for a wildcard cert for *.<developer_username>.company.com instead? That way each developer is only asking for one cert?
Or is there another approach I should be taking here?
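One thing that helps while iterating (not mentioned in the thread, just a common precaution): point Caddy at Let’s Encrypt’s staging CA until issuance is working, so failed attempts don’t burn production quota. A minimal sketch using the `acme_ca` global option:

```
{
	# Staging endpoint has much higher rate limits than production,
	# but issues certs from an untrusted root -- testing only.
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
```

Remove the option once the config is proven out, delete the staging certs from storage, and Caddy will re-issue against the production CA.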
Adding to this: For now, ZeroSSL also issues wildcards via the HTTP challenge, but there’s some discussion in CABF about disallowing that and restricting it to DNS challenge only. So DNS challenge is your safest, long-term bet.
ZeroSSL doesn’t rate limit for subdomains though, either – so that’s another possibility. But I recommend the wildcard cert.
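To make the DNS-challenge route concrete, here is a rough sketch of a wildcard site block. The Cloudflare provider, the token env var, and the upstream name are all illustrative assumptions; any supported DNS provider module works, but it must be compiled into your Caddy build:

```
*.rayj2.dev.tilia-inc.com {
	tls {
		# Assumes a custom Caddy build that includes the
		# cloudflare DNS provider plugin (caddy-dns/cloudflare).
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	reverse_proxy fake-integrator-web:8080
}
```

With the DNS challenge, Caddy proves control of the domain via a TXT record, so one wildcard cert covers every `<service_name>` subdomain under that developer’s zone.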
https://*.myco.com, d1.myco.com {
stuff related to d1
}
I actually have > 60 sites all with different configs, so:
https://*.myco.com, https://d1.myco.com {
stuff related to d1
}
https://*.myco.com, https://d2.myco.com {
stuff related to d2
}
https://*.myco.com, https://d3.myco.com {
stuff related to d3
}
...
https://*.myco.com, https://d60.myco.com {
stuff related to d60
}
Is this right as far as the syntax goes? Will this result in only one attempt to get a wildcard cert for *.myco.com that all of those sites share? Can someone confirm this? What about this form instead:
https://*.myco.com, https://d1.myco.com {
stuff related to d1
}
https://d2.myco.com {
stuff related to d2
}
https://d3.myco.com {
stuff related to d3
}
...
https://d60.myco.com {
stuff related to d60
}
It seems like it would try to get individual certs for all the other sites. Normally, that standard syntax would result in another cert being obtained per site, in which case I’ll run into rate limits again…
Does the one with the wildcard have to be the first one? Or anything like that?
In fact you can probably wind up with a more elegant JSON config this way rather than using the Caddyfile. But you can use your Caddyfile as a starting point by converting it to JSON.
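For reference, a rough sketch of what the relevant JSON might look like. The `subjects` value and the DNS provider name are illustrative assumptions, not taken from the thread; running `caddy adapt --config Caddyfile --pretty` on your existing Caddyfile is the easier way to get a real starting point:

```json
{
  "apps": {
    "tls": {
      "automation": {
        "policies": [
          {
            "subjects": ["*.myco.com"],
            "issuers": [
              {
                "module": "acme",
                "challenges": {
                  "dns": {
                    "provider": { "name": "cloudflare" }
                  }
                }
              }
            ]
          }
        ]
      }
    }
  }
}
```

The advantage of the JSON form here is that the one automation policy manages the single wildcard cert, while your 60 per-service routes live separately under the HTTP app, so the cert logic isn’t repeated in every site block.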