1. The problem I’m having:
How can I reuse the generated SSL certs on different servers?
I tried extracting the certs from `.local/share/caddy`
on one of the containers, saving them to a custom directory, and mounting that directory into the Caddy container, but after restarting the container Caddy deleted whatever was in that directory.
I restarted my containers a couple of times and now I’m being rate limited by Let’s Encrypt (with good reason).
I want to note that the first time I ran Caddy + the Cloudflare DNS plugin it worked fine, but after some restarts while trying to reuse the certs I got the errors below.
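While I debug this, I’m thinking of pointing Caddy at Let’s Encrypt’s staging CA so I stop burning through the production rate limits. A minimal sketch of the global option I believe does this (the staging directory URL is the one Let’s Encrypt documents):

```
{
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
```

Staging certs aren’t browser-trusted, but they should be enough to verify that issuance and my volume mounts work before switching back to the production CA.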
2. Error messages and/or full log output:
These are the two errors I get:

```
{"level":"error","ts":1725488847.1490495,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"*.darespider.family","issuer":"acme-v02.api.letsencrypt.org-directory","error":"HTTP 429 urn:ietf:params:acme:error:rateLimited - Error creating new account :: too many registrations for this IP: see https://letsencrypt.org/docs/too-many-registrations-for-this-ip/"}
{"level":"error","ts":1725490988.181133,"logger":"tls.obtain","msg":"will retry","error":"[*.darespider.family] Obtain: [*.darespider.family] creating new order: attempt 1: https://acme-v02.api.letsencrypt.org/acme/new-order: HTTP 429 urn:ietf:params:acme:error:rateLimited - Error creating new order :: too many certificates (5) already issued for this exact set of domains in the last 168 hours: *.darespider.family, retry after 2024-09-06T06:04:09Z: see https://letsencrypt.org/docs/duplicate-certificate-limit/ (ca=https://acme-v02.api.letsencrypt.org/directory)","attempt":8,"retrying_in":1200,"elapsed":2469.51802837,"max_duration":2592000}
```
I want to note that these errors only mention the `*.darespider.family` domain, but I got the same errors for `www.darespider.family` and `darespider.family`, which are also declared in my Caddyfiles.
3. Caddy version:
```
v2.8.4 h1:q3pe0wpBj1OcHFZ3n/1nl4V4bxBrYoSoab7rL9BMYNk=
```
4. How I installed and ran Caddy:
I have two Raspberry Pis running Caddy + the Cloudflare DNS plugin via a custom Docker Compose build; both machines reuse the same Dockerfile and Compose file.
a. System environment:
Docker on Armbian on two Raspberry Pi servers:
Armbian 23.11.1 Jammy with Linux 6.6.31-current-bcm2711
Docker version 27.1.1, build 6312585
b. Command:
Both servers share the same Caddy Compose file and only differ in the services file:

```
docker compose -f compose.caddy.yml -f compose.services.yml up
```
c. Service/unit/compose file:
Dockerfile

```Dockerfile
FROM caddy:builder-alpine AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare

FROM caddy:alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
Docker compose

```yaml
services:
  caddy:
    build: ./caddy
    container_name: caddy
    environment:
      - CF_API_TOKEN=${CLOUDFLARE_ZONE_TOKEN}
      - ACME_AGREE=true
    ports:
      - 443:443
      - 443:443/udp
    volumes:
      - ${SHARED_CERTS_DIR}:${HOME}/.local/share/caddy/certificates
      - ${CADDY_DIR}/Caddyfile:/etc/caddy/Caddyfile
      - ${CADDY_DIR}/site:/srv:ro
      - ${CADDY_DIR}/data:/data
      - ${CADDY_DIR}/config:/config
    networks:
      - caddy
    restart: unless-stopped

networks:
  caddy:
```
d. My complete Caddy config:
I use two Caddy instances because I don’t want to open many ports on the secondary server and use `http://192.168.1.123:8080`
all over the place; this way I only use the container name + the internal container port. If there is an easier or more common option, I’m open to hearing it.
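Concretely, with both containers on the shared `caddy` Docker network, I can write something like this (`jellyfin` is one of my actual services; the IP form is what I’m avoiding):

```
# Container name + internal port on the shared network:
reverse_proxy http://jellyfin:8096

# Instead of the host IP + a published port:
# reverse_proxy http://192.168.1.123:8096
```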
Server 1

```
https://darespider.family {
	tls {EMAIL} {
		dns cloudflare {env.CF_API_TOKEN}
	}
	redir https://www.darespider.family
}

https://www.darespider.family {
	tls {EMAIL} {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy http://home:3000
}

https://*.darespider.family {
	tls {EMAIL} {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@status {
		host glances-1.darespider.family
	}
	handle @status {
		reverse_proxy http://glances:61208
	}

	@pihole {
		host pihole.darespider.family
	}
	handle @pihole {
		reverse_proxy http://pihole:80
	}

	@dns {
		host dns.darespider.family
	}
	handle @dns {
		reverse_proxy http://pihole:53
	}

	@docker {
		host docker.darespider.family
	}
	handle @docker {
		reverse_proxy http://portainer:9000
	}

	@stock {
		host stock.darespider.family
	}
	handle @stock {
		reverse_proxy http://homebox:7745
	}

	@budget {
		host budget.darespider.family
	}
	handle @budget {
		reverse_proxy http://firefly:8080
	}

	# Default handler for all other subdomains
	handle {
		# Redirect to home dashboard to allow the use of deployed apps
		redir https://www.darespider.family
	}
}
```
Server 2

```
https://*.darespider.family {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@status {
		host glances-2.darespider.family
	}
	handle @status {
		reverse_proxy http://glances:61208
	}

	# Multimedia
	@media {
		host media.darespider.family
	}
	handle @media {
		reverse_proxy http://jellyfin:8096
	}

	@books {
		host books.darespider.family
	}
	handle @books {
		reverse_proxy http://kavita:5000
	}

	@comics {
		host comics.darespider.family
	}
	handle @comics {
		reverse_proxy http://komga:25600
	}

	@sync {
		host sync.darespider.family
	}
	handle @sync {
		reverse_proxy http://syncthing:8384
	}

	# Default handler for all other subdomains
	handle {
		# Redirect to home dashboard to allow the use of deployed apps
		redir https://www.darespider.family
	}
}
```
5. Links to relevant resources:
I followed these two posts to get my Caddy instances working: