1. Caddy version (`caddy version`):
2.4.1
2. How I run Caddy:
a. System environment:
Ubuntu 20.04
b. Command:
systemctl start caddy --config=/etc/caddy/config.json
c. Service/unit/compose file:
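I'm using the distribution's packaged unit. Since `systemctl start` doesn't forward a `--config` flag to Caddy (the config path lives in the unit's ExecStart), pointing the service at the JSON config would need a drop-in override created with `systemctl edit caddy`; the ExecStart path and flags below are assumptions based on the stock unit:

```ini
# /etc/systemd/system/caddy.service.d/override.conf (hypothetical drop-in)
[Service]
# Clear the packaged ExecStart, then set one pointing at the JSON config
ExecStart=
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/config.json
```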
d. My complete Caddyfile or JSON config: I had to cut it short because the post size limit was being exceeded; my config is huge.
{
  "storage": {
    "module": "s3",
    "host": "REMOVED",
    "bucket": "REMOVED",
    "access_key": "REMOVED",
    "secret_key": "REMOVED",
    "prefix": "",
    "redis_address": "REMOVED",
    "redis_password": "REMOVED",
    "redis_db": REMOVED
  },
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":443"],
          "routes": [
            **the below repeated 112 times for each subdomain**
            {
              "match": [
                { "host": ["demo.staffconnect-app.com"] }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "handler": "vars",
                          "root": "/var/www/staffconnect/app/fe/demo"
                        }
                      ]
                    },
                    {
                      "handle": [
                        {
                          "handler": "rewrite",
                          "uri": "{http.matchers.file.relative}"
                        }
                      ],
                      "match": [
                        {
                          "file": {
                            "try_files": [
                              "{http.request.uri.path}",
                              "{http.request.uri.path}/",
                              "/index.html"
                            ]
                          }
                        }
                      ]
                    },
                    {
                      "handle": [
                        {
                          "encodings": { "gzip": {} },
                          "handler": "encode",
                          "prefer": ["gzip"]
                        }
                      ]
                    },
                    {
                      "handle": [
                        {
                          "handler": "subroute",
                          "routes": [
                            {
                              "handle": [
                                {
                                  "handler": "static_response",
                                  "headers": {
                                    "Location": ["{http.request.uri.path}/"]
                                  },
                                  "status_code": 308
                                }
                              ],
                              "match": [
                                {
                                  "file": {
                                    "try_files": ["{http.request.uri.path}/index.php"]
                                  },
                                  "not": [
                                    { "path": ["*/"] }
                                  ]
                                }
                              ]
                            },
                            {
                              "handle": [
                                {
                                  "handler": "rewrite",
                                  "uri": "{http.matchers.file.relative}"
                                }
                              ],
                              "match": [
                                {
                                  "file": {
                                    "split_path": [".php"],
                                    "try_files": [
                                      "{http.request.uri.path}",
                                      "{http.request.uri.path}/index.php",
                                      "index.php"
                                    ]
                                  }
                                }
                              ]
                            },
                            {
                              "handle": [
                                {
                                  "handler": "reverse_proxy",
                                  "transport": {
                                    "protocol": "fastcgi",
                                    "split_path": [".php"]
                                  },
                                  "upstreams": [
                                    { "dial": "unix//run/php/php7.1-fpm.sock" }
                                  ]
                                }
                              ],
                              "match": [
                                { "path": ["*.php"] }
                              ]
                            }
                          ]
                        }
                      ],
                      "match": [
                        {
                          "path": ["/showcase.php", "/presentation.php"]
                        }
                      ]
                    },
                    {
                      "handle": [
                        {
                          "handler": "file_server",
                          "hide": ["/etc/caddy/Caddyfile"]
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ],
          "tls_connection_policies": [
            **the below repeated 112 times for each subdomain**
            {
              "match": {
                "sni": ["demo.staffconnect-app.com"]
              },
              "certificate_selection": {
                "any_tag": ["cert97"]
              }
            },
            {}
          ]
        }
      }
    },
    "tls": {
      "certificates": {
        "load_files": [
          {
            "certificate": "/etc/letsencrypt/live/staffconnect-app.com/fullchain.pem",
            "key": "/etc/letsencrypt/live/staffconnect-app.com/privkey.pem",
            "tags": ["cert97"]
          }
        ]
      }
    }
  }
}
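For what it's worth, since the marked blocks repeat 112 times, they could be generated from a subdomain list instead of maintained by hand. A minimal sketch (the subdomain names are placeholders and the per-site subroute is left empty; the real config would fill it in per site):

```python
import json

# Hypothetical list standing in for the real 112 subdomains.
subdomains = ["demo", "brownformannewjersey"]

def route_for(sub):
    # Host-matched route skeleton; the full per-site subroute from the
    # config above would go inside "routes".
    return {
        "match": [{"host": [f"{sub}.staffconnect-app.com"]}],
        "handle": [{"handler": "subroute", "routes": []}],
        "terminal": True,
    }

def tls_policy_for(sub):
    # SNI policy pinning the manually loaded wildcard cert by its tag.
    return {
        "match": {"sni": [f"{sub}.staffconnect-app.com"]},
        "certificate_selection": {"any_tag": ["cert97"]},
    }

routes = [route_for(s) for s in subdomains]
# Trailing empty policy acts as the catch-all, as in the config above.
policies = [tls_policy_for(s) for s in subdomains] + [{}]

# Fragments to splice into the srv0 server object.
print(json.dumps({"routes": routes, "tls_connection_policies": policies}, indent=2))
```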
3. The problem I’m having:
There are currently 112 subdomains of the form *.staffconnect-app.com, all served by a wildcard SSL certificate that was installed manually (tag: cert97).
I’d like Caddy to gradually take over from the existing wildcard certificate and issue an individual certificate for each subdomain.
I want to do this gradually (fewer than 50 per week) so as not to hit Let’s Encrypt’s rate limits.
I assume that if I leave the current setup as is, the wildcard certificate will eventually expire and Caddy will try to issue certificates for all 112 subdomains simultaneously, hit the rate limit, and fail.
4. Error messages and/or full log output:
5. What I already tried:
I’ve tried deleting the corresponding tls_connection_policies object for a *.staffconnect-app.com subdomain, e.g. I deleted the following entry from "tls_connection_policies" in the JSON config above:
{
  "match": {
    "sni": ["brownformannewjersey.staffconnect-app.com"]
  },
  "certificate_selection": {
    "any_tag": ["cert97"]
  }
},
but after reloading and restarting Caddy, a new certificate for the subdomain is not generated: brownformannewjersey.staffconnect-app.com still uses the old wildcard SSL certificate.
I’ve read a number of similar posts to this, e.g.:
in the above, Matt says not to just delete the certificate folders for live sites, which is what I was going to try next…
… so I’m now at a bit of a loss; any advice appreciated. How can I force Caddy to ignore the cached wildcard SSL certificate and generate a certificate for an individual subdomain?
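My working theory is that deleting the SNI policy only changes which certificate is preferred at the handshake; the wildcard loaded via load_files still covers the name, and by default automatic HTTPS skips managing names that a manually loaded certificate already covers. One approach I’m considering, sketched below (the automatic_https field names should be double-checked against the JSON config docs for my Caddy version, and the subdomain names are placeholders), is to tell the server to ignore loaded certificates and hold back the not-yet-migrated names with skip_certificates:

```json
{
  "srv0": {
    "automatic_https": {
      "ignore_loaded_certificates": true,
      "skip_certificates": [
        "notyet1.staffconnect-app.com",
        "notyet2.staffconnect-app.com"
      ]
    }
  }
}
```

Each week, fewer than 50 names would be removed from skip_certificates, so Caddy would issue the individual certificates in batches without hitting the rate limit.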