Caddy performs TLS handshake for domain even if different bind is used

1. The problem I’m having:

I have Caddy running with two IPs and two domains. Domain A should be served by IP A only, and Domain B should be served by IP B only. I can access Domain A via IP A and get the expected response, as well as Domain B via IP B.

However, when I try to access Domain A via IP B (or vice versa), Caddy still performs a TLS handshake and sends an empty response.

curl -I https://domain-a:443 --resolve 'domain-a:443:ip-b'
HTTP/2 200
alt-svc: h3=":443"; ma=2592000
server: Caddy
date: Wed, 25 Sep 2024 07:49:18 GMT

Is this the intended behavior? I'd expect that, because Domain A is bound to IP A only, a handshake would only be performed when the request is received via IP A.

3. Caddy version:

v2.8.4 h1:q3pe0wpBj1OcHFZ3n/1nl4V4bxBrYoSoab7rL9BMYNk=

4. How I installed and ran Caddy:

a. System environment:

Docker

d. My complete Caddy config:

domain-a {
    bind ip-a
    respond "Hello, Domain A!"
}

domain-b {
    bind ip-b
    respond "Hello, Domain B!"
}

Thanks,
Fabian

The TLS certs are managed globally, not per-server, so when a connection comes in with an SNI that matches a cert in the cache, Caddy uses it.

As you noticed, though, the HTTP routes don't get invoked, since the routes for that domain only exist on the other server.

You can see how it gets organized with caddy adapt -p.

It is possible in the JSON config to configure tls_connection_policies to only allow specific sni per server, but that’s not exposed in the Caddyfile currently.
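
For reference, a hand-written sketch of what that JSON could look like (the server names and layout are illustrative and may not match your adapted config exactly, so compare against your own caddy adapt -p output; the key part is the tls_connection_policies entry with an sni matcher on each server):

{
    "apps": {
        "http": {
            "servers": {
                "srv_a": {
                    "listen": ["ip-a:443"],
                    "routes": [
                        {
                            "match": [{"host": ["domain-a"]}],
                            "handle": [{"handler": "static_response", "body": "Hello, Domain A!"}]
                        }
                    ],
                    "tls_connection_policies": [
                        {"match": {"sni": ["domain-a"]}}
                    ]
                },
                "srv_b": {
                    "listen": ["ip-b:443"],
                    "routes": [
                        {
                            "match": [{"host": ["domain-b"]}],
                            "handle": [{"handler": "static_response", "body": "Hello, Domain B!"}]
                        }
                    ],
                    "tls_connection_policies": [
                        {"match": {"sni": ["domain-b"]}}
                    ]
                }
            }
        }
    }
}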

We’re talking about possibly having Automatic HTTPS add connection policies limiting by sni per server, but we need to think about it to make sure it doesn’t have unintended side effects.


I wonder if, as a cheap and quick workaround instead of using JSON, you could just run two Caddy instances, each binding to its own IP?

Probably incredibly easy with Docker, to be honest. Could even share cert storage still.
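
Something like this might do it (untested sketch; the file names, volume name, and image tag are just examples): each Caddyfile keeps its own bind directive, and both containers mount the same /data volume so they share cert storage.

# one instance per IP on the host network; Caddyfile.a binds ip-a, Caddyfile.b binds ip-b
docker run -d --name caddy-a --network host \
  -v $PWD/Caddyfile.a:/etc/caddy/Caddyfile \
  -v caddy-data:/data \
  caddy:2.8

docker run -d --name caddy-b --network host \
  -v $PWD/Caddyfile.b:/etc/caddy/Caddyfile \
  -v caddy-data:/data \
  caddy:2.8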


Thanks a lot for the replies!

I fully understand that such changes on Caddy's side must be made very carefully. I did not look into achieving this with the JSON-based config. Even though that is possible, the Caddyfile-based config is just so much simpler. For our use case, it is probably not worth it (given the available alternatives).

I also thought about running two separate instances, but at first I was not able to get them up and running.

In our case, Caddy runs within k8s using hostNetwork: true. Because I had also specified the ports (80, 443) in the spec, only the first instance was running and the second failed to schedule due to a port conflict. Then, luckily, I stumbled upon Overlapping port in hostNetwork: true - #15 by Jebin_J - General Discussions - Discuss Kubernetes. By omitting the ports from the k8s spec, we can run two Caddy instances, one for ip-a and one for ip-b, resulting in the desired behavior.
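
For anyone who runs into the same thing, the relevant bit of each pod spec ends up looking roughly like this (illustrative fragment, not our full manifest):

# hostNetwork is on, but no ports are declared on the container,
# so the scheduler doesn't see a host port conflict between the two pods;
# the per-IP binding happens in each instance's Caddyfile via "bind"
spec:
  hostNetwork: true
  containers:
    - name: caddy
      image: caddy:2.8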
