How to reuse certs generated by DNS challenge?

1. The problem I’m having:

How can I reuse the generated SSL certs on different servers?

I tried extracting the certs from .local/share/caddy on one of the containers, saving them to a custom directory, and mounting that custom directory into the Caddy container, but after restarting the container Caddy deleted whatever was in the directory.

I restarted my containers a couple of times and now I’m being rate limited by Let’s Encrypt (with good reason).

I want to note that the first time I ran Caddy + Cloudflare DNS it worked fine, but after some restarts while trying to reuse the certs I got the errors below.

2. Error messages and/or full log output:

These are the two errors I get:

{"level":"error","ts":1725488847.1490495,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"*.darespider.family","issuer":"acme-v02.api.letsencrypt.org-directory","error":"HTTP 429 urn:ietf:params:acme:error:rateLimited - Error creating new account :: too many registrations for this IP: see https://letsencrypt.org/docs/too-many-registrations-for-this-ip/"}
{"level":"error","ts":1725490988.181133,"logger":"tls.obtain","msg":"will retry","error":"[*.darespider.family] Obtain: [*.darespider.family] creating new order: attempt 1: https://acme-v02.api.letsencrypt.org/acme/new-order: HTTP 429 urn:ietf:params:acme:error:rateLimited - Error creating new order :: too many certificates (5) already issued for this exact set of domains in the last 168 hours: *.darespider.family, retry after 2024-09-06T06:04:09Z: see https://letsencrypt.org/docs/duplicate-certificate-limit/ (ca=https://acme-v02.api.letsencrypt.org/directory)","attempt":8,"retrying_in":1200,"elapsed":2469.51802837,"max_duration":2592000}

I want to note that these errors only mention the *.darespider.family domain, but I got the same errors for the www.darespider.family and darespider.family domains that are declared in my Caddyfiles.

3. Caddy version:

v2.8.4 h1:q3pe0wpBj1OcHFZ3n/1nl4V4bxBrYoSoab7rL9BMYNk=

4. How I installed and ran Caddy:

I have two Raspberry Pis with Caddy + Cloudflare DNS through a custom Docker Compose build; both machines reuse the same Dockerfile and compose file.

a. System environment:

Docker on Armbian on two Raspberry Pi servers

Armbian 23.11.1 Jammy with Linux 6.6.31-current-bcm2711
Docker version 27.1.1, build 6312585

b. Command:

Both servers share the same compose.caddy.yml; they just have different services in each one’s compose.services.yml

docker compose -f compose.caddy.yml -f compose.services.yml up

c. Service/unit/compose file:

Dockerfile

FROM caddy:builder-alpine AS builder

RUN xcaddy build \
	--with github.com/caddy-dns/cloudflare

FROM caddy:alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
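
A quick sanity check that the Cloudflare module actually made it into the custom image (a hypothetical verification step, not part of the original setup) is to list the modules compiled into the binary:

# Build the image, then list Caddy's compiled-in modules; the DNS provider
# should show up as dns.providers.cloudflare.
docker compose -f compose.caddy.yml build caddy
docker compose -f compose.caddy.yml run --rm caddy caddy list-modules | grep cloudflare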

Docker compose

services:
  caddy:
    build: ./caddy
    container_name: caddy
    environment:
      - CF_API_TOKEN=${CLOUDFLARE_ZONE_TOKEN}
      - ACME_AGREE=true
    ports:
      - 443:443
      - 443:443/udp
    volumes:
      - ${SHARED_CERTS_DIR}:${HOME}/.local/share/caddy/certificates
      - ${CADDY_DIR}/Caddyfile:/etc/caddy/Caddyfile
      - ${CADDY_DIR}/site:/srv:ro
      - ${CADDY_DIR}/data:/data
      - ${CADDY_DIR}/config:/config
    networks:
      - caddy
    restart: unless-stopped

networks:
  caddy:
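
As a general Compose debugging tip (not something from the original post): you can render the compose files with every ${...} variable expanded, which makes it easy to see exactly which container path each host directory ends up mounted on.

# Print the fully-resolved compose config; inspect the caddy service's
# volumes to confirm ${HOME} and ${SHARED_CERTS_DIR} expand as expected.
docker compose -f compose.caddy.yml -f compose.services.yml config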

d. My complete Caddy config:

I use two Caddy instances because I don’t want to open many ports on the secondary server and use http://192.168.1.123:8080 all over the place; this way I only use the container name + the internal container port. If there is an easier or better/more common option, I’m open to hearing it.

Server 1

https://darespider.family {
        tls {EMAIL} {
                dns cloudflare {env.CF_API_TOKEN}
        }

        redir https://www.darespider.family
}

https://www.darespider.family {
        tls {EMAIL} {
                dns cloudflare {env.CF_API_TOKEN}
        }

        reverse_proxy http://home:3000
}

https://*.darespider.family {
        tls {EMAIL} {
                dns cloudflare {env.CF_API_TOKEN}
        }

        @status {
                host glances-1.darespider.family
        }
        handle @status {
                reverse_proxy http://glances:61208
        }

        @pihole {
                host pihole.darespider.family
        }
        handle @pihole {
                reverse_proxy http://pihole:80
        }

        @dns {
                host dns.darespider.family
        }
        handle @dns {
                reverse_proxy http://pihole:53
        }

        @docker {
                host docker.darespider.family
        }
        handle @docker {
                reverse_proxy http://portainer:9000
        }

        @stock {
                host stock.darespider.family
        }
        handle @stock {
                reverse_proxy http://homebox:7745
        }

        @budget {
                host budget.darespider.family
        }
        handle @budget {
                reverse_proxy http://firefly:8080
        }

        # Default handler for all other subdomains
        handle {
                # Redirect to home dashboard to allow the use of deployed apps
                redir https://www.darespider.family
        }
}

Server 2

https://*.darespider.family {
        tls {
                dns cloudflare {env.CF_API_TOKEN}
        }

        @status {
                host glances-2.darespider.family
        }
        handle @status {
                reverse_proxy http://glances:61208
        }

        # Multimedia
        @media {
                host media.darespider.family
        }
        handle @media {
                reverse_proxy http://jellyfin:8096
        }

        @books {
                host books.darespider.family
        }
        handle @books {
                reverse_proxy http://kavita:5000
        }

        @comics {
                host comics.darespider.family
        }
        handle @comics {
                reverse_proxy http://komga:25600
        }

        @sync {
                host sync.darespider.family
        }
        handle @sync {
                reverse_proxy http://syncthing:8384
        }

        # Default handler for all other subdomains
        handle {
                # Redirect to home dashboard to allow the use of deployed apps
                redir https://www.darespider.family
        }
}

5. Links to relevant resources:

I followed these two posts to get my Caddy instance working

Hi @RurickDev,

Possibly slow down on the request rate.

yeah, that’s the issue, but I don’t know how to configure Caddy to increase the time between regenerations of the certs, or to reuse the already generated ones instead of trying to generate new ones.

I got rate limited because I recreated the container too many times and Caddy deleted and re-generated new certs every time instead of reusing what was already created.

Testing and debugging are best done using the Staging Environment as the Rate Limits are much higher.
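
For example, the tls blocks above could point at the staging CA while testing; this is just a sketch based on the poster’s config (the ca subdirective is Caddy’s standard way to override the ACME endpoint):

https://*.darespider.family {
        tls {EMAIL} {
                dns cloudflare {env.CF_API_TOKEN}
                # Issue test certs from the staging CA so failed attempts
                # don't count against production rate limits.
                ca https://acme-staging-v02.api.letsencrypt.org/directory
        }
        # ... rest of the site config as before ...
}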


yeah, I didn’t know staging existed until reading these logs. :sweat_smile:

Right now I already have certs and just want to use them with Caddy. Do you know how I can reuse them without Caddy deleting them and trying to request new ones?
Is that even possible?

I’m pretty sure Caddy would never just outright delete these files.

It might try to renew them if they’re inside the renewal window (i.e. <30 days validity remaining) but it shouldn’t just yeet them.

I think it’s far more likely there’s something else going on deleting your files.
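
If you want to check whether a stored cert is inside that renewal window, here’s a minimal sketch (the path assumes Caddy’s usual storage layout, discussed below; adjust it to your volumes):

# Print the expiry date of the stored wildcard cert; Caddy attempts a
# renewal once fewer than ~30 days of validity remain.
openssl x509 -noout -enddate \
  -in /data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/wildcard_.darespider.family/wildcard_.darespider.family.crt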

We’ve got a problem here, too, I think.

Caddy’s data isn’t located in $HOME/.local/share/caddy in the official Docker container.

Per https://hub.docker.com/_/caddy, the certs are stored in /data inside the container. You can see that this is done simply by setting XDG_DATA_HOME in the Dockerfile as per the file location conventions.
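
The relevant lines in the upstream Dockerfile look roughly like this (paraphrased):

# Set in the official caddy image: Caddy resolves its storage locations
# through the XDG variables, so state lands under /data and /config.
ENV XDG_CONFIG_HOME /config
ENV XDG_DATA_HOME /data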

Your certs should be getting saved to:
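
/data/caddy/certificates

inside the container; with the ${CADDY_DIR}/data:/data volume from your compose file, that works out to ${CADDY_DIR}/data/caddy/certificates on the host.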

Oh, and one other minor tip if you’re interested:

You can make this structure a little more efficient like so:

        # Multimedia
        @media host media.darespider.family
        reverse_proxy @media http://jellyfin:8096

Maybe the deletion was caused by some bad configuration I had.

When I extracted the certs, my docker compose had these extra env variables:

    environment:
      - XDG_DATA_HOME=${HOME}/.local/share
      - XDG_CONFIG_HOME=${HOME}/.config

and I could extract them by following the path that reused my home dir name; that’s why the volume I tried to mount used ${HOME}. But at the same moment I updated the compose file with the new volume, I removed the XDG_DATA_HOME variable (I don’t remember why I put it there in the first place). That’s probably why the content was deleted: it was probably Docker that deleted them (due to my bad setup) and not Caddy.

Also, to add some more context: before removing that variable from the compose file, running a tree command where my Caddy volumes pointed showed only empty directories.

Now, using the staging URL (thanks @Bruce5051), after the staging certs were generated I ran the tree command again, and the /data dir contains the certs.
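
The layout looked roughly like this (an illustrative sketch of Caddy’s storage tree; exact directory names depend on the CA and the domains):

tree data
data
└── caddy
    ├── acme
    │   └── acme-staging-v02.api.letsencrypt.org-directory
    └── certificates
        └── acme-staging-v02.api.letsencrypt.org-directory
            └── wildcard_.darespider.family
                ├── wildcard_.darespider.family.crt
                ├── wildcard_.darespider.family.json
                └── wildcard_.darespider.family.key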

Knowing that now, do you think that if I create the following directory

/data/caddy/certificates/acme-v02.api.letsencrypt.org-directory

and put my certs there, will Caddy use them, or is it preferable to wait for the rate limit to end (168 hours, or 7 days) and generate new ones?

PS: thanks for the short syntax in the Caddyfile, it will help.

You can safely go ahead and copy the certs back in, although I’d probably just copy the whole caddy folder in (including the ACME account too).

Worst that can happen is they aren’t copied into exactly the right place and Caddy ignores them, but as long as the directory structure is right it should Just Work, and you’ll be up and running again well before the 7 days of rate limiting are up.
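
A minimal sketch of that copy between the two hosts, assuming the ${CADDY_DIR}/data:/data volume from the compose file above (the host name other-server is a placeholder):

# On the server that still has valid certs: pack up the whole caddy data
# folder, which includes the certificates and the ACME account keys.
tar -czf caddy-data.tar.gz -C "${CADDY_DIR}/data" caddy

# Ship it over, unpack it into the same volume location on the other
# server while the container is stopped, then start Caddy again.
scp caddy-data.tar.gz other-server:
ssh other-server 'tar -xzf ~/caddy-data.tar.gz -C "${CADDY_DIR}/data"'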


I copied the whole directory and it worked like a charm, thanks a lot! :tada:
