Faulty ACME certificates don't get cleaned up

1. The problem I’m having:

This may only be a corner case, but I think the certificate cleanup function could be improved for the case where old ACME certificates are faulty or unreadable.

First, this issue occurred in an embedded environment, where, for some reason, the TLS certificates Caddy had obtained via ACME were suddenly empty. I suspect a storage error or a device restart. This led to the following error messages:

2025/02/04 09:46:43.982 INFO    tls.on_demand   obtaining new certificate       {"remote_ip": "10.4.114.118", "remote_port": "59686", "server_name": "10.33.45.95"}
2025/02/04 09:46:43.983 ERROR   tls.on_demand   loading newly-obtained certificate from storage {"remote_ip": "10.4.114.118", "remote_port": "59686", "server_name": "10.33.45.95", "error": "no matching certificate to load for 10.33.45.95: decoding certificate metadata: unexpected end of JSON input"}

I could also reproduce this error in a local Docker environment. All I had to do was empty the contents of the ACME certificate files after they had been issued. After that, Caddy was no longer able to obtain further certificates from the ACME server.
Wouldn't it be better if the faulty certificate files were cleaned up and a new ACME certificate requested?
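For reference, this is roughly how I reproduce the corruption in the Docker setup below. The function name and the `./client-storage` path are just my own illustrations; the `certificates/<issuer>/<name>/...` layout matches what I see in the mounted data volume, but the exact issuer directory name depends on your ACME CA URL:

```shell
# Sketch (my own helper, not Caddy tooling): empty out every stored
# certificate and metadata file under a Caddy data volume, simulating
# the corruption from the embedded device.
truncate_certs() {
  storage=$1
  # Truncate each .crt and .json file in place (leaves zero-byte files behind)
  find "$storage/certificates" -type f \( -name '*.crt' -o -name '*.json' \) \
    -exec sh -c ': > "$1"' _ {} \;
}

# e.g., against the client's mounted data volume from the compose file below:
# truncate_certs ./client-storage
```

After running this and restarting the client container, the "unexpected end of JSON input" loop starts.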

2. Error messages and/or full log output:

Error loop in the local Docker environment:

acme-client  | 2025/02/11 00:18:17.333  INFO    tls.cache.maintenance   certificate expires soon; queuing for renewal   {"identifiers": ["acme-client"], "remaining": 11400.666742598}
acme-client  | 2025/02/11 00:28:17.329  WARN    tls.cache.maintenance   error while checking if stored certificate is also expiring soon        {"identifiers": ["acme-client"], "error": "decoding certificate metadata: unexpected end of JSON input"}
acme-client  | 2025/02/11 00:28:17.329  INFO    tls.cache.maintenance   certificate expires soon; queuing for renewal   {"identifiers": ["acme-client"], "remaining": 10800.670694755}
acme-client  | 2025/02/11 00:28:17.515  ERROR   tls.renew       will retry      {"error": "decoding certificate metadata: unexpected end of JSON input", "attempt": 8, "retrying_in": 1800, "elapsed": 3600.073811136, "max_duration": 2592000}

3. Caddy version:

2.9.1

4. How I installed and ran Caddy:

a. System environment:

Docker in WSL

b. Command:

docker-compose up

c. Service/unit/compose file:

services:
  acme-server:
    image: caddy:latest
    container_name: acme-server
    ports:
      - "9443:443"
    volumes:
      - ./Caddyfile.server:/etc/caddy/Caddyfile
      - ./server-storage:/data/caddy

  acme-client:
    image: caddy:latest
    container_name: acme-client
    ports:
      - "8443:443"
    volumes:
      - ./Caddyfile.client:/etc/caddy/Caddyfile
      - ./client-storage:/data/caddy
    depends_on:
      - acme-server

d. My complete Caddy config:

acme-server (Caddyfile.server):

{
    log {
        output stdout
        format console
        level debug
    }
    pki {
        ca test {
            name "Test CA"
        }
    }
}

acme-server:443 {
    tls {
        issuer internal {
            ca test
        }
    }
    acme_server {
        ca test
    }
}

acme-client (Caddyfile.client):

{
    acme_ca https://acme-server/acme/test/directory
    acme_ca_root /data/caddy/acme-root-ca/root.crt
    log {
        output stdout
        format console
        level debug
    }
}

acme-client:443 {
    respond "Client Server is running"
}

5. Links to relevant resources:

You will need to remove the certificate cache, and Caddy will re-create the certificates. It should be in $HOME/.local/share/caddy.

Yes, I could also solve the problem by deleting the old certificates manually. But isn't there a way to handle this case automatically?
I cannot imagine a scenario where the user would want to keep faulty certificates when they can be replaced automatically with new ones via ACME.
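In the meantime, I'm working around it externally with something like the sketch below: delete any certificate bundle whose metadata JSON is empty (the exact symptom in the logs above), so the next handshake triggers a fresh ACME issuance. The function name and storage path are my own; the assumed layout is certificates/&lt;issuer&gt;/&lt;name&gt;/&lt;name&gt;.{crt,key,json}:

```shell
# Sketch of an external cleanup, under the storage-layout assumption above.
prune_empty_certs() {
  storage=$1
  for meta in "$storage"/certificates/*/*/*.json; do
    [ -e "$meta" ] || continue       # glob matched nothing
    if [ ! -s "$meta" ]; then        # metadata file exists but is empty
      rm -rf "$(dirname "$meta")"    # drop the whole crt/key/json bundle
    fi
  done
}

# e.g.: prune_empty_certs ./client-storage
```

Something along these lines inside Caddy's own maintenance routine is what I'm asking for.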

I’m not quite sure. That would be a question for someone else way more knowledgeable.
