Migrating a TLS domain from Apache to Caddy (load-balanced domain)

Hi there,

I am trying to migrate a domain from Apache to Caddy.

The setup consists of two machines behind a load balancer, which distributes HTTPS traffic to the two boxes running Apache.

Everything works fine when I run Apache, but when I run Caddy it seems the load balancer cannot talk to it over HTTPS.

I do not have control of the load balancer. What they told me is that the load balancer performs a health check (a GET request) against the boxes on port 443 and does not get any answer.

Again, this all works fine with Apache, which means both ports (80 and 443) are reachable from the load balancer. I can confirm locally that Caddy is serving everything correctly:

$ curl -k --resolve domainhere:443:127.0.0.1 https://domainhere
Correct answer from server here

One thing that is different between how I run Apache and how I run Caddy is that Apache runs via Docker: I expose ports 443 and 80 from the container to the host machine. When testing Caddy I do not use Docker (I will in the future) and run the binary directly on the host machine(s).

I know this is very limited information but I thought I’d ask the community just in case someone has some idea what may be happening.

Thank you,
-drd

Some more info:

My Caddyfile looks like:

$ cat Caddyfile
https://domainhere https://domainhere2 {
        tls cert key
        root * /home/zuviz/caddy_test/public
        file_server
        log
}

http://domainhere http://domainhere2 {
        root * /home/zuviz/caddy_test/public
        file_server
        log
}

I am running:

$ ./bin/caddy version
v2.6.1 h1:EDqo59TyYWhXQnfde93Mmv4FJfYe00dO60zMiEt+pzo=

I think I may have found out what is going on.

When I look at the ports Caddy is binding to, I see it is binding to :::80 and :::443. But when I compare that to Apache, I see 0.0.0.0:80 and 0.0.0.0:443. I think Caddy is not listening on all the interfaces.

I have tried to use bind 0.0.0.0 but that doesn’t work. How can I tell Caddy to bind to all the interfaces?

Caddy binds on all the interfaces by default. Apache is just binding to IPv4, and Caddy is binding to “all”, which, depending on your kernel config, might be IPv6, as it seems to be in your case.
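
For reference, the kernel default that controls whether an IPv6 wildcard socket also accepts IPv4 can be checked on Linux like this (0 means a :::443 socket is dual-stack and accepts IPv4 too; 1 means IPv6-only; applications can also override this per socket):

$ sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0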

Thank you, Matt. How can I make Caddy bind to all the IPv4 interfaces?

Compare the netstat outputs for Apache and Caddy:

# CADDY
$ netstat -antl | grep LISTEN | grep -P "80|443"
tcp6       0      0 :::80                   :::*                    LISTEN
tcp6       0      0 :::443                  :::*                    LISTEN
# APACHE
$ netstat -antl | grep LISTEN | grep -P "80|443"
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp6       0      0 :::443                  :::*                    LISTEN
tcp6       0      0 :::80                   :::*                    LISTEN

You can either hard-code it with bind [::] 0.0.0.0, or read this thread for more info.
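
For example, a minimal sketch based on your existing Caddyfile (cert and key are still placeholders):

https://domainhere https://domainhere2 {
        bind [::] 0.0.0.0
        tls cert key
        root * /home/zuviz/caddy_test/public
        file_server
        log
}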

That was a long thread!

What I tried, based on reading that thread, was to run Caddy via Docker.
When I do that, I see Caddy binding to 80 and 443 on IPv4:

$ netstat -antl | grep LISTEN | grep -P "80|443"
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp6       0      0 :::443                  :::*                    LISTEN
tcp6       0      0 :::80                   :::*                    LISTEN
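
For reference, this is roughly the command I am using (a sketch - the image tag and mount paths match my test setup and may need adjusting):

$ docker run -d \
    -p 80:80 -p 443:443 \
    -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
    -v $PWD/public:/home/zuviz/caddy_test/public \
    caddy:2.6.1

Docker publishes -p ports on 0.0.0.0 by default, which is why the IPv4 listeners show up above.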

Unfortunately, the load balancer keeps dropping the connection. I will keep digging. Let me know if you have more ideas about how to troubleshoot.

You could try setting the default_sni global option (docs/caddyfile/options#default-sni) and see if it changes anything.

It could very well be that the mentioned health check doesn’t send an SNI.
Caddy won’t respond to HTTPS requests that omit their SNI or set it to something Caddy isn’t configured to serve - at least by default.

You can test that yourself by running the following on your Caddy machine:

❯ curl https://127.0.0.1 -k

curl won’t send any SNI when connecting to an IP literal (SNI values can’t be IP addresses), and your Caddy has no default certificate to fall back to - so the handshake fails and curl returns an error.
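
Another way to simulate a client that omits SNI entirely (assuming OpenSSL 1.1.1 or newer, where -noservername disables the extension):

❯ openssl s_client -connect 127.0.0.1:443 -noservername

With your current config the handshake should fail, because Caddy has no certificate it can fall back to.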

Or you could use --resolve, just like in your opening post, but with a hostname you aren’t serving, to get a similar error:

❯ curl -k https://example.com -v --resolve example.com:443:127.0.0.1

Also, :::443 and :::80 means Caddy is listening on both IPv4 and IPv6 - no need to worry.
You actually tested that earlier by running --resolve domainhere:443:127.0.0.1 - which connects over IPv4, and you said it worked :innocent:

Success!

That was it @emilylange. Thank you so much for taking the time to read the thread and respond.

The solution, as @emilylange suggested, was to add the default_sni global option to the Caddyfile.
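
For anyone finding this later, it looks like this at the top of my Caddyfile (domainhere is still a placeholder):

{
        default_sni domainhere
}

https://domainhere https://domainhere2 {
        tls cert key
        root * /home/zuviz/caddy_test/public
        file_server
        log
}

With default_sni set, Caddy serves that site’s certificate even when a client (like the load balancer’s health check) sends no SNI.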

-drd
