Error in Logs with "updating ARI" after Upgrading to Caddy v2.8.1

1. The problem I’m having:

I noticed the following error messages about "updating ARI" in my Caddy log after upgrading to v2.8.1. Everything else seems to work fine, and I haven't observed any issues with Caddy as an HTTPS server.

These logs appear to be related to the new ARI (ACME Renewal Information) feature, but I didn't find anything useful in the documentation and only found the relevant function in the source code.

Since some of the logs are shown as errors, what is the proper way to fix them (or should I be worried at all)?

2. Error messages and/or full log output:

May 31 11:45:27 caddy[1309]: {"level":"warn","ts":1717181127.78544,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["git.kernel-panic.me"],"cert_hash":"59e5ce1dc959edc38d25c8a97ae4a129646f8efa837a569cad6be17cf00bacf0","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.A2HvYw5MEQ96d2pkw77FphVq","cert_expiry":1724903620,"prev_start":1722226420.3333333,"next_start":-6795364578.8713455,"prev_end":1722399220.3333333,"next_end":-6795364578.8713455,"prev_selected_time":1722377934,"next_selected_time":-6795364578.8713455,"explanation_url":""}
May 31 11:45:27 caddy[1309]: {"level":"error","ts":1717181127.7854843,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
May 31 11:45:27 caddy[1309]: {"level":"warn","ts":1717181127.7855923,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["outline.kernel-panic.me"],"cert_hash":"631e401b49cf250a9131c228ac4e4eccf22a62201b4323e8b6e3c0e8e5be7fdb","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.A0OirYl6PAYO7cgWdu9zy_Qp","cert_expiry":1724869741,"prev_start":1722192541,"next_start":-6795364578.8713455,"prev_end":1722365341,"next_end":-6795364578.8713455,"prev_selected_time":1722245150,"next_selected_time":-6795364578.8713455,"explanation_url":""}
May 31 11:45:27 caddy[1309]: {"level":"error","ts":1717181127.7856023,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
May 31 11:45:27 caddy[1309]: {"level":"warn","ts":1717181127.7857049,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["vault.kernel-panic.me"],"cert_hash":"0d29de0a1be6fa4dbd9d1855d62ceb38c28890dd1dfab99597f245f94ec23e98","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.A9rQkXv-4OXxF-oY7HIaWLPr","cert_expiry":1724912990,"prev_start":1722235790.3333333,"next_start":-6795364578.8713455,"prev_end":1722408590.3333333,"next_end":-6795364578.8713455,"prev_selected_time":1722263411,"next_selected_time":-6795364578.8713455,"explanation_url":""}
May 31 11:45:27 caddy[1309]: {"level":"error","ts":1717181127.7857132,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
May 31 11:45:27 caddy[1309]: {"level":"warn","ts":1717181127.7857788,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["auth.kernel-panic.me"],"cert_hash":"6d7c6a6771c6954191fb20085dff3632c5a3abdeddf3528e521c948f63ede030","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.A-svpqVdQexfStjS7-RbDLn7","cert_expiry":1724866176,"prev_start":1722188976,"next_start":-6795364578.8713455,"prev_end":1722361776,"next_end":-6795364578.8713455,"prev_selected_time":1722214894,"next_selected_time":-6795364578.8713455,"explanation_url":""}
May 31 11:45:27 caddy[1309]: {"level":"error","ts":1717181127.7857864,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
May 31 11:55:27 caddy[1309]: {"level":"warn","ts":1717181727.785036,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["git.kernel-panic.me"],"cert_hash":"59e5ce1dc959edc38d25c8a97ae4a129646f8efa837a569cad6be17cf00bacf0","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.A2HvYw5MEQ96d2pkw77FphVq","cert_expiry":1724903620,"prev_start":1722226420.3333333,"next_start":-6795364578.8713455,"prev_end":1722399220.3333333,"next_end":-6795364578.8713455,"prev_selected_time":1722377934,"next_selected_time":-6795364578.8713455,"explanation_url":""}
May 31 11:55:27 caddy[1309]: {"level":"error","ts":1717181727.7850888,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
May 31 11:55:27 caddy[1309]: {"level":"warn","ts":1717181727.785189,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["outline.kernel-panic.me"],"cert_hash":"631e401b49cf250a9131c228ac4e4eccf22a62201b4323e8b6e3c0e8e5be7fdb","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.A0OirYl6PAYO7cgWdu9zy_Qp","cert_expiry":1724869741,"prev_start":1722192541,"next_start":-6795364578.8713455,"prev_end":1722365341,"next_end":-6795364578.8713455,"prev_selected_time":1722245150,"next_selected_time":-6795364578.8713455,"explanation_url":""}
May 31 11:55:27 caddy[1309]: {"level":"error","ts":1717181727.7851977,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
May 31 11:55:27 caddy[1309]: {"level":"warn","ts":1717181727.7853189,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["vault.kernel-panic.me"],"cert_hash":"0d29de0a1be6fa4dbd9d1855d62ceb38c28890dd1dfab99597f245f94ec23e98","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.A9rQkXv-4OXxF-oY7HIaWLPr","cert_expiry":1724912990,"prev_start":1722235790.3333333,"next_start":-6795364578.8713455,"prev_end":1722408590.3333333,"next_end":-6795364578.8713455,"prev_selected_time":1722263411,"next_selected_time":-6795364578.8713455,"explanation_url":""}
May 31 11:55:27 caddy[1309]: {"level":"error","ts":1717181727.785327,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
May 31 11:55:27 caddy[1309]: {"level":"warn","ts":1717181727.7853918,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["auth.kernel-panic.me"],"cert_hash":"6d7c6a6771c6954191fb20085dff3632c5a3abdeddf3528e521c948f63ede030","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.A-svpqVdQexfStjS7-RbDLn7","cert_expiry":1724866176,"prev_start":1722188976,"next_start":-6795364578.8713455,"prev_end":1722361776,"next_end":-6795364578.8713455,"prev_selected_time":1722214894,"next_selected_time":-6795364578.8713455,"explanation_url":""}
May 31 11:55:27 caddy[1309]: {"level":"error","ts":1717181727.7853982,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}

3. Caddy version:

Caddy v2.8.1, built with the Cloudflare DNS module.

v2.8.1 h1:UVWB6J5f/GwHPyvdTrm0uM7YhfaWb4Ztdrp/z6ROHsM=

4. How I installed and ran Caddy:

a. System environment:

OS: NixOS 23.11
Architecture: x86

b. Command:

Caddy is run as a systemd service. The command used to run it is:

/nix/store/x134girixx3xhbzm0h3dyvygr4bvdnbr-caddy-v2.8.1/bin/caddy run --config /etc/caddy/caddy_config --adapter caddyfile

c. Service/unit/compose file:

The full systemd service file (output of systemctl cat caddy.service):

# /etc/systemd/system/caddy.service
[Unit]
After=network.target network-online.target
StartLimitBurst=10
StartLimitIntervalSec=14400
Wants=network-online.target
X-Reload-Triggers=/nix/store/fkhc59lkhj0xhnwsvaln4xb4wn113zbd-X-Reload-Triggers-caddy

[Service]
Environment="LOCALE_ARCHIVE=/nix/store/zfx4c3pdn37l0li3h8rzgydvsd7jvgyh-glibc-locales-2.38-77/lib/locale/locale-archive"
Environment="PATH=/nix/store/pxf98n8dsxb6kmm0pdvc3dr3gzcqhizi-coreutils-9.3/bin:/nix/store/gx1pjlqmml6ifb5dpn24bnygy5xwa3za-findutils-4.9.0/bin:/nix/store/73nl94djwcgyn9vmv1qi7d8kh1cbbw04-gnugrep-3.11/bin:/nix/store/3kh010crcx64zzi5yb6dim2pq2y716nl-gnused-4.9/bin:/nix/store/cf4c2g42hg2zjyw6kmvgg1qanjbiqx4m-systemd-254.10/bin:/nix/store/pxf98n8dsxb6kmm0pdvc3dr3gzcqhizi-coreutils-9.3/sbin:/nix/store/gx1pjlqmml6ifb5dpn24bnygy5xwa3za-findutils-4.9.0/sbin:/nix/store/73nl94djwcgyn9vmv1qi7d8kh1cbbw04-gnugrep-3.11/sbin:/nix/store/3kh010crcx64zzi5yb6dim2pq2y716nl-gnused-4.9/sbin:/nix/store/cf4c2g42hg2zjyw6kmvgg1qanjbiqx4m-systemd-254.10/sbin"
Environment="TZDIR=/nix/store/b35w67i0p2jxw52bqpd0dvm2kbig7xw2-tzdata-2024a/share/zoneinfo"
AmbientCapabilities=cap_net_bind_service
CapabilityBoundingSet=cap_net_bind_service
EnvironmentFile=/run/secrets/caddy/cloudflare
ExecReload=
ExecReload=/nix/store/x134girixx3xhbzm0h3dyvygr4bvdnbr-caddy-v2.8.1/bin/caddy reload --config /etc/caddy/caddy_config --adapter caddyfile --force
ExecStart=
ExecStart=/nix/store/x134girixx3xhbzm0h3dyvygr4bvdnbr-caddy-v2.8.1/bin/caddy run --config /etc/caddy/caddy_config --adapter caddyfile
Group=caddy
LogsDirectory=caddy
NoNewPrivileges=true
PrivateDevices=true
ProtectHome=true
ReadWriteDirectories=/var/lib/caddy
Restart=on-failure
RestartPreventExitStatus=1
RestartSec=5s
StateDirectory=caddy
User=caddy

d. My complete Caddy config:

(only replaced my email address with a fake one)

{
	grace_period 10s
	acme_dns cloudflare {env.CLOUDFLARE_API_TOKEN}

	email user@example.com

	log {
		level INFO
	}
}

(header_security) {
	header {
		Permissions-Policy interest-cohort=()
		Strict-Transport-Security "max-age=31536000;"
		X-XSS-Protection "0"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "SAMEORIGIN"
		-X-Powered-By
		Referrer-Policy "no-referrer-when-downgrade"
	}
}

(common) {
	import header_security
	encode zstd gzip
}

auth.kernel-panic.me {
	log {
		output file /var/log/caddy/access-auth.kernel-panic.me.log
	}

	import common
	reverse_proxy localhost:9815 {
		header_up X-Real-IP {http.request.header.Cf-Connecting-Ip}
	}
}

git.kernel-panic.me {
	log {
		output file /var/log/caddy/access-git.kernel-panic.me.log
	}

	import common
	reverse_proxy localhost:9816 {
		header_up X-Real-IP {http.request.header.Cf-Connecting-Ip}
	}
}

outline.kernel-panic.me {
	log {
		output file /var/log/caddy/access-outline.kernel-panic.me.log
	}

	import common
	reverse_proxy localhost:9817 {
		header_up X-Real-IP {http.request.header.Cf-Connecting-Ip}
	}
}

vault.kernel-panic.me {
	log {
		output file /var/log/caddy/access-vault.kernel-panic.me.log
	}

	header {
		X-Robots-Tag "noindex, nofollow"
	}
	import common
	reverse_proxy localhost:9818 {
		header_up X-Real-IP {http.request.header.Cf-Connecting-Ip}
	}
}

5. Links to relevant resources:

I’m seeing similar behavior to the OP, running in Docker on Debian.
It occurs on both Caddy v2.8.0 (h1:7ZCvB9R7qBsEydqBkYCOHaMNrDEF/fj0ZouySV2D474=) and v2.8.1 (h1:UVWB6J5f/GwHPyvdTrm0uM7YhfaWb4Ztdrp/z6ROHsM=).

Relevant configuration (changed to my.domain to be generic/for privacy):

my.domain *.my.domain {
        tls {
                dns cloudflare {env.CLOUDFLARE_API_TOKEN}
        }
        route {
                file_server * {
                        root /images/uploads
                        pass_thru
                }
                reverse_proxy http://192.168.1.7:10120 {
                        header_up Host {http.reverse_proxy.upstream.hostport}
                        header_up X-Real-IP {http.request.header.X-Real-IP}
                }
        }
}

Output (repeats every 10 minutes):

caddy  | {"level":"warn","ts":1717264252.022336,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["my.domain"],"cert_hash":"1267cc8a113929581f318de8a7a129f1570caf10ecaf6fe1444ad27c602a1b83","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.BOi1I2QptKn0ETHCBQhIgVlC","cert_expiry":1724820527,"prev_start":1722143327.3333333,"next_start":-6795364578.8713455,"prev_end":1722316127.3333333,"next_end":-6795364578.8713455,"prev_selected_time":1722309922,"next_selected_time":-6795364578.8713455,"explanation_url":""}
caddy  | {"level":"error","ts":1717264252.0223885,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
caddy  | {"level":"warn","ts":1717264852.025362,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["my.domain"],"cert_hash":"1267cc8a113929581f318de8a7a129f1570caf10ecaf6fe1444ad27c602a1b83","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.BOi1I2QptKn0ETHCBQhIgVlC","cert_expiry":1724820527,"prev_start":1722143327.3333333,"next_start":-6795364578.8713455,"prev_end":1722316127.3333333,"next_end":-6795364578.8713455,"prev_selected_time":1722309922,"next_selected_time":-6795364578.8713455,"explanation_url":""}
caddy  | {"level":"error","ts":1717264852.0254233,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}

I see the same, too (using caddy-docker-proxy) with config:

(logging) {
    log {
        output file /var/log/caddy/{args[0]}.access.log {
            roll_size 10mb
            roll_keep 20
            roll_keep_for 3d
        }
    }
}

# Apply logging to all incoming requests
{
    log {
        output file /var/log/caddy/common.access.log {
            roll_size 10mb
            roll_keep 20
            roll_keep_for 3d
        }
    }
}

* {
    import logging wildcard
}

{$DOMAIN} {
    import logging {$DOMAIN}
    root * /var/www/public
    file_server
    rewrite * /index.html
}

{$DOMAIN_FRP} {
    import logging {$DOMAIN_FRP}
    reverse_proxy https://localhost:{$FRPS_DASHBOARD_PORT} {
        transport http {
            tls_insecure_skip_verify
        }
    }
}

sudo cat logs/common.access.log | tail -n 10 | jl:

[2024-06-01 19:12:04] WARNING: ARI window or selected renewal time changed [cert_expiry=1724944776 explanation_url= logger=tls.cache.maintenance]
[2024-06-01 19:12:04]   ERROR: updating ARI [logger=tls.cache.maintenance]
    could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)

Wow that’s weird. I have not encountered this in my testing at all.

Looking into it… I may have more questions.

Edit: Can you run with debug logs enabled?

I have debug enabled and can see the same ARI WARN and ERROR logs.
The following four DEBUG logs were issued shortly after a reload and a request. After that there was no further activity on that server, and no logs until the WARN and ERROR appeared:

2024-06-01T23:18:56.110+0200    DEBUG   events  event   {"name": "tls_get_certificate", "id": "52557240-145b-4191-98e9-1715afc79a92", "origin": "tls", "data": {"client_hello":{"CipherSuites":[4866,4867,4865,4868,49196,49200,52393,52392,49325,49195,49199,49324,49187,49191,49162,49172,49161,49171,157,49309,156,49308,61,60,53,47,159,52394,49311,158,49310,107,103,57,51,255],"ServerName":"playground.stbu.net","SupportedCurves":[29,23,30,25,24],"SupportedPoints":"AAEC","SignatureSchemes":[1027,1283,1539,2055,2056,2057,2052,2058,2053,2059,2054,1025,1281,1537,771,769,515,513],"SupportedProtos":["h2","http/1.1"],"SupportedVersions":[772,771],"RemoteAddr":{"IP":"77.21.103.183","Port":60084,"Zone":""},"LocalAddr":{"IP":"85.215.138.140","Port":8443,"Zone":""}}}}
2024-06-01T23:18:56.110+0200    DEBUG   tls.handshake   choosing certificate    {"identifier": "playground.stbu.net", "num_choices": 1}
2024-06-01T23:18:56.110+0200    DEBUG   tls.handshake   default certificate selection results   {"identifier": "playground.stbu.net", "subjects": ["playground.stbu.net"], "managed": true, "issuer_key": "acme-v02.api.letsencrypt.org-directory", "hash": "01b11cc576cdb4a08e7a2b6131aed6d6fb9f40ff1afe06e86dfe8d2f8d9a59eb"}
2024-06-01T23:18:56.110+0200    DEBUG   tls.handshake   matched certificate in cache    {"remote_ip": "77.21.103.183", "remote_port": "60084", "subjects": ["playground.stbu.net"], "managed": true, "expiration": "2024-08-30T14:01:07.000Z", "hash": "01b11cc576cdb4a08e7a2b6131aed6d6fb9f40ff1afe06e86dfe8d2f8d9a59eb"}
2024-06-01T23:28:16.212+0200    WARN    tls.cache.maintenance   ARI window or selected renewal time changed     {"identifiers": ["playground.stbu.net"], "cert_hash": "01b11cc576cdb4a08e7a2b6131aed6d6fb9f40ff1afe06e86dfe8d2f8d9a59eb", "ari_unique_id": "FC6zF7dYVsuuUAlA5h-vnYsUwsY.BDu1bNyo11F21yEVMkXoLxdI", "cert_expiry": "2024-08-30T14:01:06.000Z", "prev_start": "2024-07-30T14:21:06.000Z", "next_start": "0001-01-01T00:00:00.000Z", "prev_end": "2024-08-01T14:21:06.000Z", "next_end": "0001-01-01T00:00:00.000Z", "prev_selected_time": "2024-07-30T16:57:56.000Z", "next_selected_time": "0001-01-01T00:00:00.000Z", "explanation_url": ""}
2024-06-01T23:28:16.212+0200    ERROR   tls.cache.maintenance   updating ARI    {"error": "could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}

Especially the "next_start": "0001-01-01T00:00:00.000Z" looks strange.
If it helps: I run my server with a local timezone (/etc/localtime -> /usr/share/zoneinfo/Europe/Berlin).

Thanks! Yours is different from what the logs above report, which show a next_start of -6795364578.8713455 (about 1754-08-30 22:43:42 +0000 UTC when converted from Unix epoch seconds).

The zero time (yours) makes much more sense to me than the year-1754 date.

Although, time.Time{}.Unix() prints -62135596800 :thinking:

Sure thing! Here’s the output from startup to the error occurring 10 minutes later. The relevant Caddyfile is included in case I configured it incorrectly.

tamamo.cat *.tamamo.cat {
        tls {
                dns cloudflare {env.CLOUDFLARE_API_TOKEN_MASTER}
        }
        log {
                level DEBUG
        }

        @basedomain host tamamo.cat www.tamamo.cat
        handle @basedomain {
                route {
                        file_server * {
                                root /images/uploads
                                pass_thru
                        }
                        reverse_proxy http://10.0.30.7:10120 {
                                header_up Host {http.reverse_proxy.upstream.hostport}
                                header_up X-Real-IP {http.request.header.X-Real-IP}
                        }
                }
        }

}
~/compose/caddy$ docker compose up
[+] Running 1/0
 ✔ Container caddy  Created                                                                                                                                                                                                        0.0s
Attaching to caddy
caddy  | {"level":"info","ts":1717278237.930344,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
caddy  | {"level":"info","ts":1717278237.9314961,"msg":"adapted config to JSON","adapter":"caddyfile"}
caddy  | {"level":"info","ts":1717278237.9321272,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//[::1]:2019","//127.0.0.1:2019","//localhost:2019"]}
caddy  | {"level":"info","ts":1717278237.932318,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy  | {"level":"info","ts":1717278237.9323335,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy  | {"level":"info","ts":1717278237.9323554,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00059a180"}
caddy  | {"level":"info","ts":1717278237.9330623,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
caddy  | {"level":"info","ts":1717278237.933519,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy  | {"level":"info","ts":1717278237.9336069,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy  | {"level":"info","ts":1717278237.9336162,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["tamamo.cat","*.tamamo.cat"]}
caddy  | {"level":"warn","ts":1717278237.9343095,"logger":"tls","msg":"ARI window or selected renewal time changed","identifiers":["tamamo.cat"],"cert_hash":"1267cc8a113929581f318de8a7a129f1570caf10ecaf6fe1444ad27c602a1b83","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.BOi1I2QptKn0ETHCBQhIgVlC","cert_expiry":1724820527,"prev_start":1722143327.3333333,"next_start":-6795364578.8713455,"prev_end":1722316127.3333333,"next_end":-6795364578.8713455,"prev_selected_time":1722309922,"next_selected_time":-6795364578.8713455,"explanation_url":""}
caddy  | {"level":"error","ts":1717278237.9343202,"logger":"tls","msg":"updating ARI upon managing","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}
caddy  | {"level":"info","ts":1717278237.9346488,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy  | {"level":"info","ts":1717278237.9346645,"msg":"serving initial configuration"}
caddy  | {"level":"info","ts":1717278237.935297,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"e33ff390-1978-49c3-b4f9-1e27942e77ed","try_again":1717364637.9352953,"try_again_in":86399.999999636}
caddy  | {"level":"info","ts":1717278237.9353762,"logger":"tls","msg":"finished cleaning storage units"}
caddy  | {"level":"warn","ts":1717278837.9334195,"logger":"tls.cache.maintenance","msg":"ARI window or selected renewal time changed","identifiers":["tamamo.cat"],"cert_hash":"1267cc8a113929581f318de8a7a129f1570caf10ecaf6fe1444ad27c602a1b83","ari_unique_id":"FC6zF7dYVsuuUAlA5h-vnYsUwsY.BOi1I2QptKn0ETHCBQhIgVlC","cert_expiry":1724820527,"prev_start":1722143327.3333333,"next_start":-6795364578.8713455,"prev_end":1722316127.3333333,"next_end":-6795364578.8713455,"prev_selected_time":1722309922,"next_selected_time":-6795364578.8713455,"explanation_url":""}
caddy  | {"level":"error","ts":1717278837.9334705,"logger":"tls.cache.maintenance","msg":"updating ARI","error":"could not fully update ACME renewal info: either no ACME issuer configured for certificate, or all failed (make sure the ACME CA that issued the certificate is configured)"}

Thanks for the details everyone. I think I know the problem now. It was a later-stage small refactor that affected more than I thought it would :sweat_smile:

I’ve verified a patch I committed resolves the error for me. Can you all try the latest commit on master? I’ll probably tag a release if it works for you too. (A stupid bug, really)

Thanks for the quick fix! I built against that commit (version: v2.8.2-0.20240602000249-b7280e694928 h1:39g4KKbaIer074C8pP/BmVS9CP8u2326ImDiq/vMT/I=) and it’s been running for an hour or two now. I haven’t seen that error message (or any ARI-related log) since. Let me know if you need more information to confirm that it’s resolved.

Great! Releasing v2.8.2 now. Thank you for testing it out!

I just tried installing this (2.8.2) and am getting an error when attempting to restart Caddy through systemd.

{"level":"info","ts":1717305960.5983784,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
Jun 02 01:26:00 caddy caddy[20662]: Error: ambiguous config file format; please specify adapter (use --adapter)

I install Caddy on Debian through apt, along with the Namecheap module built with xcaddy. That part goes well, but after the latest change it seems unable to pick up the Caddyfile.
This is the same Caddyfile I’ve been using for a while, so I don’t think the file itself is the issue. Renaming it with a .caddyfile extension (and updating systemd to point to the renamed file) allowed it to load.

From what I understood from the release notes, it didn’t seem like I would need to change either the file extension or the run command (a file named just Caddyfile should still work), so this seems like a bug unless I missed something.
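In the meantime, the error message itself suggests a workaround: pass the adapter explicitly so Caddy doesn't have to guess the config format from the file name. For the Debian apt package, a hypothetical systemd drop-in (the unit path and the ExecStart flags are assumptions based on the packaged unit) could look like:

```ini
# /etc/systemd/system/caddy.service.d/adapter.conf (hypothetical drop-in)
[Service]
ExecStart=
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile --adapter caddyfile
```

followed by systemctl daemon-reload and systemctl restart caddy.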

That bug is my fault. I’m working on a fix.

Separately, I’ve also been getting other SSL certificate errors, mainly when adding additional (sub)domains that reverse_proxy to things on my network. Up until the latest upgrade, it generally worked fine.

Now I get 429s from ZeroSSL (I thought they didn’t have rate limits at this time), and what seems to be some sort of unexplained error with Let’s Encrypt:
challenge_type":"dns-01","error":"no memory of presenting a DNS record for \"_acme-challenge.mysubdomain.example.com\" (usually OK if presenting also failed)"}
: presenting for challenge: adding temporary record for zone \"example.com.\": expected element type <ApiResponse> but have <html>

As mentioned earlier, my domain is with Namecheap, and I use the namecheap Caddy module with the DNS challenge method because this is all on my internal network. I confirmed with a curl command that my Namecheap API key still appears to work. I’m not positive whether this is related to this thread or just coincidental, but I believe it started around the time of these updates.

The fix for this one is here:

That error is from the DNS provider plugin and its interaction with the service (Namecheap). Looks like a coincidence.

Any other details on this? Logs, etc. They do have rate limits, but they’re quite high. I’m also surprised to hear this.

The 429s, I think, happened when I explicitly set the acme_ca option, because I was trying to see whether my issues were specific to Let’s Encrypt vs. ZeroSSL.

I set acme_ca https://acme.zerossl.com/v2/DV90 (I found it in an old forum post, so I’m not positive it’s correct) along with my email in my Caddyfile global options.

Roughly piecing it together from the logs, this is what I think happens when it tries to get the certificate:
Obtain: [example.subdomain.com] creating new order: fetching new nonce from server: HTTP 429: (ca=https://acme.zerossl.com/v2/DV90)","attempt":1,"retrying_in":60,"elapsed":10.062732456,"max_duration":2592000}

I have a decent number of individual site entries in my Caddyfile at this point (at least a few dozen, I think), so maybe it’s hitting some sort of per-minute rate limit? I also don’t seem to get the 429 when I don’t specify the CA, so maybe in that case Caddy spreads things out better (I don’t know what it does internally).

For the caddy-dns plugin issue, should I move that to a separate forum post on this site, or post directly on that repository? It seemed like the caddy-dns organization is related to Caddy itself, based on forum posts linked to it, but I wasn’t sure.

I’m affected by this as well (it seems to happen every 30 minutes or so now). When does the official Docker image get rebuilt? The latest one is for 2.8.1 :confused:
Should I downgrade to 2.8.0 in the meantime (i.e., does it have the bug as well)?

Thanks!

Assuming you’re referring to the ARI warn/error: it exists in 2.8.0 as well, and Matt’s patch is in 2.8.2, pending testing. Based on my experience, it’s not service impacting, just log spam.

@Matt: I’ll test the patch here shortly and report back.

Edit: Seems to be fixed. There was one log entry about the cert on first start, but it’s been quiet since, after running for 2 hours.

I think I figured out my issue with Namecheap and I agree it’s likely not due to this change, it was just coincidental.

It seems the caddy-dns/namecheap module was not cleaning up the _acme-challenge DNS challenge records in Namecheap as I thought it was supposed to. I believe that eventually ran it up against Namecheap’s limit of 150 records per domain, at which point it failed with an API error that didn’t match what the caddy-dns module expected, producing the unclear response.
Deleting all those extra records manually allowed certificates to proceed as expected again.
