Caddy v2.7.0-beta.2 "tls internal" renewal

No, that’s a client rule, not for servers.

The client will just shoot themselves in the foot like this.

Hello Matt,

Okay. It was only a suggestion. :slight_smile:

For the main issue reported in this ticket, I have the feeling that the catch-all definition block with on_demand TLS makes Caddy attempt to renew all certificates through this path, even certificates that were originally not generated on-demand and that came from another issuer (local).

It also seems it already did this with 2.6.4, but I never noticed because with 2.6.4 the endpoint still answers while it is making the renewal attempts. With 2.7.x, however, the endpoint does not answer while it makes the attempts.

When a request for it is received, the logs say the certificate is found in the cache and that the issuer is “local”. But right after that, on_demand says the certificate is not found in storage and triggers a request for it (not through the local issuer):

{"level":"debug","ts":1688358135.954308,"logger":"events","msg":"event","name":"tls_get_certificate","id":"45910e5b-c1e0-4755-8175-42b6bc9ad10c","origin":"tls","data":{"client_hello":{"CipherSuites":[49199,49195,49169,49159,49171,49161,49172,49162,5,47,53,49170,10],"ServerName":"","SupportedCurves":[23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[1025,1027,513,515,1025,1281,1537],"SupportedProtos":null,"SupportedVersions":[771,770,769],"Conn":{}}}}
{"level":"debug","ts":1688358135.9545977,"logger":"tls.handshake","msg":"choosing certificate","identifier":"94.103.97.97","num_choices":1}
{"level":"debug","ts":1688358135.9546542,"logger":"tls.handshake","msg":"default certificate selection results","identifier":"94.103.97.97","subjects":["94.103.97.97"],"managed":true,"issuer_key":"local","hash":"021cbb03e4a27c8a603c40c30d7d8891f31648b4e6d0930af5b2bdde3c31afff"}
{"level":"debug","ts":1688358135.9547286,"logger":"tls.handshake","msg":"matched certificate in cache","remote_ip":"193.46.254.155","remote_port":"40394","subjects":["94.103.97.97"],"managed":true,"expiration":1688357934,"hash":"021cbb03e4a27c8a603c40c30d7d8891f31648b4e6d0930af5b2bdde3c31afff"}
{"level":"debug","ts":1688358135.9550312,"logger":"tls.on_demand","msg":"certificate not found on disk; obtaining new certificate","identifiers":["94.103.97.97"]}
{"level":"info","ts":1688358135.9550858,"logger":"tls.on_demand","msg":"obtaining new certificate","remote_ip":"193.46.254.155","remote_port":"40394","server_name":"94.103.97.97"}

{"level":"info","ts":1688358135.9556062,"logger":"tls.obtain","msg":"acquiring lock","identifier":"94.103.97.97"}
{"level":"info","ts":1688358135.9580948,"logger":"tls.obtain","msg":"lock acquired","identifier":"94.103.97.97"}
{"level":"info","ts":1688358135.958292,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"94.103.97.97"}
{"level":"debug","ts":1688358135.958326,"logger":"events","msg":"event","name":"cert_obtaining","id":"09b00d0c-f3bd-4fd7-9238-fdf6589cd41f","origin":"tls","data":{"identifier":"94.103.97.97"}}
{"level":"debug","ts":1688358135.9589188,"logger":"tls.obtain","msg":"trying issuer 1/2","issuer":"acme-v02.api.letsencrypt.org-directory"}
{"level":"debug","ts":1688358135.958969,"logger":"tls.obtain","msg":"trying issuer 2/2","issuer":"acme.zerossl.com-v2-DV90"}
{"level":"debug","ts":1688358135.9590015,"logger":"events","msg":"event","name":"cert_failed","id":"9d845701-3416-4f26-a803-e47d9fc5fb99","origin":"tls","data":{"error":{},"identifier":"94.103.97.97","issuers":["acme-v02.api.letsencrypt.org-directory","acme.zerossl.com-v2-DV90"],"renewal":false}}
{"level":"error","ts":1688358135.9590635,"logger":"tls.obtain","msg":"will retry","error":"[94.103.97.97] Obtain: subject does not qualify for a public certificate: 94.103.97.97","attempt":1,"retrying_in":60,"elapsed":0.000940839,"max_duration":2592000}
{"level":"debug","ts":1688358186.807662,"logger":"events","msg":"event","name":"tls_get_certificate","id":"43f0502e-daae-4cd5-b3f5-048a5e6b168e","origin":"tls","data":{"client_hello":{"CipherSuites":[49195,49199,49196,49200,52393,52392,49161,49171,49162,49172,156,157,47,53,49170,10,4865,4866,4867],"ServerName":"","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2052,1027,2055,2053,2054,1025,1281,1537,1283,1539,513,515],"SupportedProtos":null,"SupportedVersions":[772,771],"Conn":{}}}}
{"level":"debug","ts":1688358186.8078265,"logger":"tls.handshake","msg":"choosing certificate","identifier":"94.103.97.97","num_choices":1}
{"level":"debug","ts":1688358186.8078632,"logger":"tls.handshake","msg":"default certificate selection results","identifier":"94.103.97.97","subjects":["94.103.97.97"],"managed":true,"issuer_key":"local","hash":"021cbb03e4a27c8a603c40c30d7d8891f31648b4e6d0930af5b2bdde3c31afff"}
{"level":"debug","ts":1688358186.8078904,"logger":"tls.handshake","msg":"matched certificate in cache","remote_ip":"83.97.73.89","remote_port":"57380","subjects":["94.103.97.97"],"managed":true,"expiration":1688357934,"hash":"021cbb03e4a27c8a603c40c30d7d8891f31648b4e6d0930af5b2bdde3c31afff"}
{"level":"debug","ts":1688358186.808172,"logger":"tls.on_demand","msg":"certificate not found on disk; obtaining new certificate","identifiers":["94.103.97.97"]}
{"level":"info","ts":1688358195.9609685,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"94.103.97.97"}
{"level":"debug","ts":1688358195.9611003,"logger":"events","msg":"event","name":"cert_obtaining","id":"a86a3f30-f8ce-4cd5-a207-2ca588ecea67","origin":"tls","data":{"identifier":"94.103.97.97"}}
{"level":"debug","ts":1688358195.9615548,"logger":"tls.obtain","msg":"trying issuer 1/2","issuer":"acme-v02.api.letsencrypt.org-directory"}
{"level":"debug","ts":1688358195.9615967,"logger":"tls.obtain","msg":"trying issuer 2/2","issuer":"acme.zerossl.com-v2-DV90"}
{"level":"debug","ts":1688358195.9616263,"logger":"events","msg":"event","name":"cert_failed","id":"47465934-6b08-44c0-b397-15c4f316fb55","origin":"tls","data":{"error":{},"identifier":"94.103.97.97","issuers":["acme-v02.api.letsencrypt.org-directory","acme.zerossl.com-v2-DV90"],"renewal":false}}
{"level":"error","ts":1688358195.9616845,"logger":"tls.obtain","msg":"will retry","error":"[94.103.97.97] Obtain: subject does not qualify for a public certificate: 94.103.97.97","attempt":2,"retrying_in":120,"elapsed":60.003561421,"max_duration":2592000}

Thanks for explaining – you’ve been very detailed, I just still haven’t had a chance to catch up in full (just bits and pieces while mobile), since it’s still a holiday-ish weekend here (the actual holiday is on Tuesday, but I have family in town), so I’ll be a little spotty with forum support for a few days. Please just hang tight. I’ll look at this before we tag a new release. :+1:


Hello Matt,

No problem, thanks a lot!

Hello,

Just wanted to add a note on this issue. I think I found a workaround that resolves the on_demand HTTPS catch-all block interfering with “IP address” certificates.

Replacing this block

https:// {
        tls {
                on_demand
        }
        respond "Catch all HTTPS" {
                close
        }
}

With

# Catchall https - Match only host names and not ip-addresses:
https://*.*,
https://*.*.*,
https://*.*.*.*,
https://*.*.*.*.*,
https://*.*.*.*.*.* {
        tls {
                on_demand
        }
        respond "Catch all HTTPS" {
                close
        }
}

Now with this change, even default_sni works as it should :slight_smile:. This is a bit tricky though, and not easy to guess.

I found this unexpectedly while studying how the nextcloud-aio mastercontainer was built.

Kind regards
Sébastien

Damn, using the https://*.*, https://*.*.*, etc. patterns was almost a success.

The only remaining problem is that when a client gives an IP address as the SNI (and it looks like that happens more often than we would think…), Caddy routes it again through the on_demand certificate processing.

I guess this is specifically because of the https://*.*.*.* pattern, which then matches the IPv4 SNI sent by the client.
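To illustrate why that happens, here is a simplified, label-wise sketch of wildcard matching (a hypothetical helper, not Caddy’s actual matcher): a dotted-quad IPv4 address has exactly four dot-separated labels, so it satisfies *.*.*.* just like any four-label hostname would.

```go
package main

import (
	"fmt"
	"strings"
)

// matchesPattern does simplified label-wise wildcard matching:
// "*" matches any single label. Hypothetical helper, only to show
// why the *.*.*.* site address also catches IPv4 SNI values.
func matchesPattern(host, pattern string) bool {
	h := strings.Split(host, ".")
	p := strings.Split(pattern, ".")
	if len(h) != len(p) {
		return false
	}
	for i := range p {
		if p[i] != "*" && p[i] != h[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matchesPattern("94.103.96.188", "*.*.*.*"))      // true: four labels
	fmt.Println(matchesPattern("some.host.name.tld", "*.*.*.*")) // true: also four labels
	fmt.Println(matchesPattern("example.com", "*.*.*.*"))        // false: only two labels
}
```

So an IPv4 SNI and a four-label hostname are indistinguishable to a purely label-count-based pattern.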

I can trigger it easily with:

openssl s_client -servername 94.103.96.188 -connect 94.103.96.188:443

Result in log:

{"level":"error","ts":1689225755.3789954,"logger":"tls.obtain","msg":"will retry","error":"[94.103.96.188] Obtain: subject does not qualify for a public certificate: 94.103.96.188","attempt":1,"retrying_in":60,"elapsed":0.000691158,"max_duration":2592000}

It was so close to being perfect…

That’s partly why I suggested that, since IPs are forbidden in SNI per the RFC anyway, if Caddy detects an IP pattern in the SNI it should maybe just discard the SNI indication and treat the request as if it were a non-SNI request…
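The detection part of that suggestion is cheap. A minimal sketch (hypothetical helper name, not Caddy’s code) using the standard library:

```go
package main

import (
	"fmt"
	"net"
)

// isIPLiteral reports whether an SNI value is a bare IP address,
// which RFC 4366/6066 forbid in the host_name field. Hypothetical
// helper, not Caddy's actual code.
func isIPLiteral(serverName string) bool {
	return net.ParseIP(serverName) != nil
}

func main() {
	// An IP literal in the SNI could simply be discarded, treating
	// the handshake as if no SNI was sent (so default_sni applies).
	for _, sni := range []string{"94.103.96.188", "example.com", "::1"} {
		if isIPLiteral(sni) {
			fmt.Printf("%s: ignore SNI, fall back to default_sni\n", sni)
		} else {
			fmt.Printf("%s: normal hostname, keep SNI\n", sni)
		}
	}
}
```

net.ParseIP handles both IPv4 and IPv6 literals, so the same check covers both families.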

EDIT: Was able to confirm that it’s *.*.*.* matching the IPv4 SNI supplied by some clients:

Disabling that pattern in the catch-all block avoids this, but then hostnames like some.host.name.tld will not be caught.

# Workaround to match only host names and not ip-addresses:
https://*.*,
https://*.*.*,
#https://*.*.*.*,
https://*.*.*.*.*,
https://*.*.*.*.*.* {
        import common
        tls {
                on_demand
                load /etc/caddy/certs
        }
}

(Yes, I could probably avoid it by disallowing the endpoint from returning OK for an IPv4 pattern…, but then the client would be left without a certificate and get an SSL error.)


Ok, thanks for the follow-up @r00tsh3ll .

RFC 4366 explicitly forbids IP addresses in SNI:

Literal IPv4 and IPv6 addresses are not permitted in “HostName”.

So I’m thinking it makes the most sense to ignore literal IPs in SNI.

Of course, that won’t 100% solve your problem, since somebody could always put something like localhost or foo.local in the SNI, and Caddy won’t be able to get a cert for it.

I’m still looking into this…

@r00tsh3ll Have you actually confirmed whether the ServerName field of the client’s TLS handshake is set to an IP address? Or is it just by observing Caddy’s behavior?

Because Caddy will use the ServerName if present; otherwise it will fall back to the local address from the TCP connection.

(Which makes sense.)

I’m still trying to reproduce the issue where the wrong issuer is used on-demand for that IP address, but as for how it gets the IP address, I hope that helps clarify things…

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.