1. The problem I’m having:
I am trying to use a manually loaded Cloudflare Origin wildcard certificate for my root domain (example.com), while using Caddy’s automatic HTTPS (via Let’s Encrypt and the DNS-01 challenge) for my subdomains (git.example.com and board.example.com).
Because the manually loaded certificate is a wildcard (*.example.com), Caddy's auto-HTTPS logic sees a valid certificate already in memory for those subdomains and skips automatic certificate management for them entirely, so no Let's Encrypt certificates are ever requested. I am looking for a way to force Caddy to obtain ACME certificates for the subdomains and ignore the loaded wildcard certificate for those specific site blocks.
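For completeness, the closest thing I have found in the documentation is the `ignore_loaded_certs` flag of the `auto_https` global option, which (if I understand it correctly) makes automatic HTTPS ignore manually loaded certificates and issue certificates for all names. It appears to be all-or-nothing, though, so it would presumably also replace the Origin certificate on example.com, which is not what I want:

```
{
	# Global option: automatic HTTPS ignores manually loaded
	# certificates and attempts to issue certificates for all names.
	# As far as I can tell this cannot be scoped to individual
	# site blocks, so example.com would be affected too.
	auto_https ignore_loaded_certs
}
```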
2. Error messages and/or full log output:
caddy | { level: "info", ts: 1771844438.8848627, msg: "maxprocs: Leaving GOMAXPROCS=1: CPU quota undefined" }
caddy | { level: "info", ts: 1771844438.886466, msg: "GOMEMLIMIT is updated", package: "github.com/KimMachineGun/automemlimit/memlimit", GOMEMLIMIT: 876817612, previous: 9223372036854775807 }
caddy | { level: "info", ts: 1771844438.887361, msg: "using config from file", file: "/etc/caddy/Caddyfile" }
caddy | { level: "info", ts: 1771844438.8901865, msg: "adapted config to JSON", adapter: "caddyfile" }
caddy | { level: "warn", ts: 1771844438.8924909, msg: "Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies", adapter: "caddyfile", file: "/etc/caddy/Caddyfile", line: 2 }
caddy | { level: "info", ts: 1771844438.897751, logger: "admin", msg: "admin endpoint started", address: "localhost:2019", enforce_origin: false, origins: ["//localhost:2019", "//[::1]:2019", "//127.0.0.1:2019"] }
caddy | { level: "warn", ts: 1771844438.9052832, logger: "tls", msg: "stapling OCSP", error: "no OCSP stapling for [cloudflare origin certificate *.example.com example.com]: no URL to issuing certificate" }
caddy | { level: "info", ts: 1771844438.9107153, logger: "http.auto_https", msg: "skipping automatic certificate management because one or more matching certificates are already loaded", domain: "example.com", server_name: "srv0" }
caddy | { level: "info", ts: 1771844438.9120688, logger: "http.auto_https", msg: "skipping automatic certificate management because one or more matching certificates are already loaded", domain: "*.example.com", server_name: "srv0" }
caddy | { level: "info", ts: 1771844438.9128053, logger: "http.auto_https", msg: "enabling automatic HTTP->HTTPS redirects", server_name: "srv0" }
caddy | { level: "info", ts: 1771844438.914895, logger: "http", msg: "enabling HTTP/3 listener", addr: ":443" }
caddy | { level: "info", ts: 1771844438.922761, msg: "failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details." }
caddy | { level: "info", ts: 1771844438.924708, logger: "http.log", msg: "server running", name: "srv0", protocols: ["h1", "h2", "h3"] }
caddy | { level: "warn", ts: 1771844438.9283235, logger: "http", msg: "HTTP/2 skipped because it requires TLS", network: "tcp", addr: ":80" }
caddy | { level: "warn", ts: 1771844438.928646, logger: "http", msg: "HTTP/3 skipped because it requires TLS", network: "tcp", addr: ":80" }
caddy | { level: "info", ts: 1771844438.9292579, logger: "http.log", msg: "server running", name: "remaining_auto_https_redirects", protocols: ["h1", "h2", "h3"] }
caddy | { level: "info", ts: 1771844438.9316483, msg: "autosaved config (load with --resume flag)", file: "/config/caddy/autosave.json" }
caddy | { level: "info", ts: 1771844438.933737, msg: "serving initial configuration" }
caddy | { level: "info", ts: 1771844438.9080808, logger: "tls.cache.maintenance", msg: "started background certificate maintenance", cache: "0x15c594025200" }
caddy | { level: "info", ts: 1771844438.9354918, logger: "tls", msg: "storage cleaning happened too recently; skipping for now", storage: "FileStorage:/data/caddy", instance: "e836ae62-4494-4910-bdb4-a60d6cfaae7d", try_again: 1771930838.9354901, try_again_in: 86399.999999499 }
caddy | { level: "info", ts: 1771844438.9355779, logger: "tls", msg: "finished cleaning storage units" }
3. Caddy version:
v2.10.2
4. How I installed and ran Caddy:
I built it myself with the Cloudflare DNS plugin, using this Dockerfile:
FROM caddy:builder AS builder
RUN xcaddy build --with github.com/caddy-dns/cloudflare
FROM caddy:alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]
a. System environment:
Debian 13, amd64, systemd, Docker
b. Command:
Docker service command: docker compose up -d
Inside container: caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
c. Service/unit/compose file:
services:
  caddy:
    image: caddy-cloudflare:latest
    user: "1006:1006"
    container_name: caddy
    restart: unless-stopped
    ports:
      - "443:443"
      - "443:443/udp"
    environment:
      CLOUDFLARE_API_TOKEN: ${CLOUDFLARE_API_TOKEN}
d. My complete Caddy config:
(security_headers) {
	header {
		Strict-Transport-Security "max-age=15552000; includeSubDomains; preload"
		X-Frame-Options "SAMEORIGIN"
		Content-Security-Policy "upgrade-insecure-requests; frame-ancestors 'self'"
		X-Content-Type-Options "nosniff"
		Referrer-Policy "same-origin"
		-Server
		-X-Powered-By
	}
}

(no_robots) {
	respond /robots.txt 200 {
		body "User-agent: *\nDisallow: /\n"
		close
	}
}

example.com {
	tls /keys/cloudflare/origin.pem /keys/cloudflare/origin.key
	import security_headers
	log {
		output file /var/log/caddy/landing_access.log
		level INFO
	}
	root * /static
	try_files {path} /index.html
	file_server
}

*.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
		resolvers 1.1.1.1
	}
	import security_headers
	import no_robots
	log {
		output file /var/log/caddy/access.log
		level INFO
	}
	@git host git.example.com
	handle @git {
		reverse_proxy forgejo:3000
	}
	@vikunja host board.example.com
	handle @vikunja {
		reverse_proxy vikunja:3456
	}
	handle {
		abort
	}
}