The IP address is the server itself. I’m pretty sure this started when I fired up Caddy while one of the sites’ DNS entries was wrong, so it failed to authenticate. Shortly afterwards the site was able to authenticate successfully and got its certificate. However, this error has persisted over several days and a couple of reboots. All eight sites defined on the server are working perfectly, and TLS is operational on them all. Neither the other Caddy logs nor the system logs report anything related.
Any ideas how I might get rid of it, as my log file is growing by a MB a day just for this?
(Note - no Caddyfile given, as this has persisted through several complete rewrites, so the Caddyfile clearly has no bearing on it; the server is running Windows.)
That’s weird. Sounds like someone is hitting your server every 7 seconds with a TLS client certificate in the handshake, and the server can’t verify it.
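If you want to confirm that cadence rather than eyeball it, you can compute the intervals straight from the log timestamps. A minimal sketch - the timestamp format and the sample times below are assumptions, not taken from your actual log:

```python
from datetime import datetime

# Hypothetical log timestamps in Go's default log format (yyyy/mm/dd hh:mm:ss);
# substitute lines extracted from the real log file.
timestamps = [
    "2017/06/01 10:00:00",
    "2017/06/01 10:00:07",
    "2017/06/01 10:00:14",
]

times = [datetime.strptime(t, "%Y/%m/%d %H:%M:%S") for t in timestamps]

# Gap in seconds between each consecutive pair of log entries.
gaps = [(later - earlier).seconds for earlier, later in zip(times, times[1:])]
print(gaps)  # → [7, 7]
```

A steady 7-second gap would point at a single looping client (or health-check/monitor process) rather than random external scanners.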
I guess I need to do some low-level logging, then. I’d wondered if something left over from the failed certificate request at Let’s Encrypt had got into a loop, still trying to complete, but that would be surprising.
It would help to understand where each part of that error message originates.
For background, this machine is behind a NAT router. The local address is 192.168.1.71 and the external address is 82.70.166.77. If I use the external address from within the network, my understanding is that the router loops the traffic back through its NAT processing (hairpin NAT).
Now the message seems to say that the error is caused by 82.70.166.77, which is the server’s own external address - but not one that it actually knows! That suggests some process on the server is sending a message to itself using the external address (which is why I thought it might be related to the checking done by Let’s Encrypt), and that message is then failing. I tried a quick check of whether the traffic was actually local by disconnecting the Internet from the router. The messages then stopped - which is not conclusive, because that could have caused the router to suspend its NAT processing as well.
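Before full tracing, one quick check is to ask the OS which local source address it would pick for a connection toward the external address - if a process on this machine opens a socket to 82.70.166.77, this is the address its traffic would appear to come from. A minimal sketch (the UDP `connect()` trick consults the routing table without sending any packets, so it’s safe to run any time):

```python
import socket

def local_outbound_ip(probe="82.70.166.77"):
    """Return the local address the OS would use to reach `probe`.
    connect() on a UDP socket sends nothing; it only binds a route."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((probe, 53))
        return s.getsockname()[0]
    except OSError:
        return None  # no route available (e.g. offline)
    finally:
        s.close()

print(local_outbound_ip())
```

If this prints 192.168.1.71, then any connection the server makes to its own external address goes out to the router and hairpins back, so the peer address Caddy logs would indeed be the NATed 82.70.166.77 rather than a local one.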
Time to start TCP/IP tracing…
Correction - the machine believes its own address to be 192.168.1.71, but if it looks up the addresses of the sites it is serving, they all show as 82.70.166.77.

However, at the time this behaviour started, one site - the one whose certificate was being requested - was recorded in the local DNS server as 82.70.166.77, but the public DNS apparently hadn’t had time to propagate sufficiently (I was trying to minimise downtime during the site handover between servers), and so was still showing 82.70.166.73, the address of my old web server, now retiring. Presumably Let’s Encrypt tried to reconcile the address it found for the site with the one Caddy claimed to be serving, failed, and so rejected the certificate request. But without knowing the details of how that check is done, I can’t say whether it really has any link to what’s happening now, or whether I’m chasing a phantom with that thought.
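The handover mismatch can be sketched in a few lines. This is only an illustration of the two DNS views described above, not Let’s Encrypt’s actual validation logic (which follows the ACME challenge process), and the site name is a placeholder:

```python
# The two views of the same record during the handover.
# Addresses are the ones from this thread; "example-site" is hypothetical.
local_dns  = {"example-site": "82.70.166.77"}  # already updated to the new server
public_dns = {"example-site": "82.70.166.73"}  # stale: still the old web server

site = "example-site"
if public_dns[site] != local_dns[site]:
    # Let's Encrypt resolves via public DNS, so its validation request
    # lands on the old server, which can't answer the challenge.
    print("challenge request reaches the old server; validation fails")
```

So the rejection at the time is explained by propagation lag alone; whether it has anything to do with the recurring log entries now is a separate question.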
By doing what? And why were you using SSLv3 (Caddy doesn’t support it)?
Do revoke it if the private key is compromised or, in more niche situations, if the certificate was mis-issued (typically a CA concern, or perhaps if you’re obtaining certificates on behalf of others and made a clerical/legal error or something). There’s not much of a disadvantage to revoking it, except that revoking unnecessarily adds needless pressure on the CRLs - not sure if that’s a bad thing, but in PKI we try to act in moderation.
Yes, this is how renewals work.
If it’s no longer being used, sure.
Caddy will manage them for you already.
I don’t know. Can you give me more information? All the participants in this thread have been pretty mum about their system configuration (hostnames, Caddyfiles, etc)… hence neither I nor anyone else has been much help.