I have caddy running with two IPs and two Domains. Domain A should be served by IP A only and Domain B should be served by IP B only. I can access Domain A via IP A and get the expected response, as well as Domain B via IP B.
However, when I try to access Domain A via IP B (or vice versa), Caddy still performs a TLS handshake and sends an empty response.
Is this the intended behavior? I'd expect that because Domain A is bound to IP A only, a handshake would only be performed when the request arrives via IP A.
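For reference, the setup looks roughly like this (the domain names and IPs are placeholders), using the `bind` directive to tie each site to one address:

```
domain-a.example.com {
	bind 10.0.0.1
	respond "served from IP A"
}

domain-b.example.com {
	bind 10.0.0.2
	respond "served from IP B"
}
```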
The TLS certs are managed globally, not per-server, so when a connection comes in with an SNI matching a cert that exists in the cache, Caddy uses it.
As you noticed though, the HTTP routes don’t get invoked since they only exist for that domain on the other server.
You can see how it gets organized with caddy adapt -p.
It is possible in the JSON config to configure tls_connection_policies to only allow specific SNI values per server, but that's not exposed in the Caddyfile currently.
We're talking about possibly having Automatic HTTPS add connection policies limiting by SNI per server, but we need to think about it to make sure it doesn't have unintended side effects.
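As a rough sketch of what that looks like in the JSON config (addresses and domain names here are made up), a connection policy with an `sni` matcher on one server would restrict which server names that listener will complete a handshake for:

```json
{
  "apps": {
    "http": {
      "servers": {
        "srv-a": {
          "listen": ["10.0.0.1:443"],
          "tls_connection_policies": [
            { "match": { "sni": ["domain-a.example.com"] } }
          ]
        }
      }
    }
  }
}
```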
I fully understand that such changes on Caddy's side must be made very carefully. I did not look into achieving it with the JSON-based config. Even if that is possible, the Caddyfile-based config is just so much simpler, so for our use case it is probably not worth it given the available alternatives.
I also thought about running two separate instances but was not able to get it up and running.
In our case, Caddy runs within k8s using hostNetwork: true. Because I also specified the ports (80, 443) in the spec, only the first instance was running and the second failed to schedule due to a port conflict. Then luckily I stumbled upon Overlapping port in hostNetwork: true - #15 by Jebin_J - General Discussions - Discuss Kubernetes. By omitting the ports from the k8s spec, we can run two Caddy instances, one for ip-a and one for ip-b, resulting in the desired behavior.
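In case it helps anyone else, a trimmed-down sketch of the pod spec we ended up with (names and image tag are illustrative) looks like this; the key point is that with hostNetwork the container binds host ports directly, so omitting the `ports:` declaration avoids the scheduler's host-port conflict check:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caddy-ip-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caddy-ip-a
  template:
    metadata:
      labels:
        app: caddy-ip-a
    spec:
      hostNetwork: true
      containers:
        - name: caddy
          image: caddy:2
          # no `ports:` list here on purpose; declaring hostPorts 80/443
          # on both instances would make the second one unschedulable
```

A second Deployment for ip-b is identical apart from the name and the Caddyfile it mounts.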