Caddy "leaks" non-HTTPS container data

1. The problem I’m having:

I am running Caddy natively on Debian 12. The goal is to run docker-compose services and route them through Caddy via a reverse-proxy configuration. This works, but it also "leaks" data from the non-HTTPS container:

Good (insecure due to LE Staging):

curl -vL --insecure https://whoami.lapre.com    
*   Trying 5.161.195.89:443...
* Connected to whoami.lapre.com (5.161.195.89) port 443 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=whoami.lapre.com
*  start date: Jun 24 13:37:14 2024 GMT
*  expire date: Sep 22 13:37:13 2024 GMT
*  issuer: C=US; O=(STAGING) Let's Encrypt; CN=(STAGING) False Fennel E6
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* using HTTP/2
* h2h3 [:method: GET]
* h2h3 [:path: /]
* h2h3 [:scheme: https]
* h2h3 [:authority: whoami.lapre.com]
* h2h3 [user-agent: curl/7.88.1]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x56210ed6f400)
> GET / HTTP/2
> Host: whoami.lapre.com
> user-agent: curl/7.88.1
> accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 200 
< alt-svc: h3=":443"; ma=2592000
< content-type: text/plain; charset=utf-8
< date: Thu, 27 Jun 2024 03:02:48 GMT
< server: Caddy
< content-length: 317
< 
Hostname: 2fa99d1e2c77
IP: 127.0.0.1
IP: ::1
IP: 192.168.80.2
IP: fe80::42:c0ff:fea8:5002
RemoteAddr: 192.168.80.1:33010
GET / HTTP/1.1
Host: whoami.lapre.com
User-Agent: curl/7.88.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 5.161.195.89
X-Forwarded-Host: whoami.lapre.com
X-Forwarded-Proto: https

* Connection #0 to host whoami.lapre.com left intact

Bad:

curl -vL http://whoami.lapre.com:2400       
*   Trying 5.161.195.89:2400...
* Connected to whoami.lapre.com (5.161.195.89) port 2400 (#0)
> GET / HTTP/1.1
> Host: whoami.lapre.com:2400
> User-Agent: curl/7.88.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Thu, 27 Jun 2024 03:06:21 GMT
< Content-Length: 206
< Content-Type: text/plain; charset=utf-8
< 
Hostname: 2fa99d1e2c77
IP: 127.0.0.1
IP: ::1
IP: 192.168.80.2
IP: fe80::42:c0ff:fea8:5002
RemoteAddr: 5.161.195.89:51126
GET / HTTP/1.1
Host: whoami.lapre.com:2400
User-Agent: curl/7.88.1
Accept: */*

* Connection #0 to host whoami.lapre.com left intact

I am forwarding port 80 from the whoami container to port 2400.

Why is this happening, and how can I work around it? Could it be handled via ufw or a redirect? What would be the best approach to prevent the "bad" scenario above? Am I running a non-standard configuration?

2. Error messages and/or full log output:

No real errors or misconfigurations - I'm just looking for some guidance.

3. Caddy version:

2.6.2

4. How I installed and ran Caddy:

sudo apt install caddy

Then I modified /etc/caddy/Caddyfile and ran sudo systemctl reload caddy.service.

a. System environment:

Debian 12 on x86_64, Docker 26.0.0, docker-compose v2.25.0.

b. Command:

sudo systemctl reload caddy.service

c. Service/unit/compose file:

services:
  whoami:
    # A container that exposes an API to show its IP address
    image: traefik/whoami
    restart: unless-stopped
    ports:
      - 2400:80

d. My complete Caddy config:

{
	debug
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

lapre.com,
www.lapre.com {
	# Set this path to your site's directory.
	root * /usr/share/caddy

	# Enable the static file server.
	file_server
}

whoami.lapre.com {
	reverse_proxy localhost:2400
}

paperless.lapre.com {
	reverse_proxy localhost:8000 {
		header_down Referrer-Policy "strict-origin-when-cross-origin"
	}
}

5. Links to relevant resources:

Sorry - it might just be me being dense, but I'm looking over the "Bad" section and it's not immediately apparent to me what undesirable data is being leaked.

It also doesn't look like Caddy is in the HTTP path at all. If it were, with that config, you'd get an HTTP-to-HTTPS redirect instead of a 200 response, and the response would carry a server: Caddy header. It seems like you're querying whoami directly. Caddy gives a good response, but direct access gives a bad one? Am I understanding that right?

Can you be a little more specific about the exact problem?


Sorry, that’s what I get for posting late at night.

The bad part is that I don’t want anything at all available on a non-secure connection.

Here’s the output from ufw:

sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere                  
80                         ALLOW       Anywhere                  
443                        ALLOW       Anywhere                  
22 (v6)                    ALLOW       Anywhere (v6)             
80 (v6)                    ALLOW       Anywhere (v6)             
443 (v6)                   ALLOW       Anywhere (v6)

I am only allowing 22, 80, and 443 on this server; everything else is locked down for security. But port 2400 is reachable beyond localhost, and I don't want that. I want traffic funneled through Caddy (which it is) and the container not to be reachable directly (which unfortunately it is).

FWIW, my paperless container exhibits the same behavior: https://paperless.lapre.com works, but http://paperless.lapre.com:8000 also works, even though I never opened either 2400 or 8000 via ufw.

How do I make ports 2400 and 8000 internal-only? Isn't this the standard way Docker operates?

With this configuration, you have instructed Docker to open port 2400 on your host and allow it through the firewall. I’m guessing you also have a ports configuration on paperless.

Docker configures iptables directly to allow published ports through, which means it bypasses ufw and similar tools. This nuance is easy to miss, especially for newer Docker users, and it's a valid criticism of Docker that the behaviour is surprising and unintuitive until you're explicitly made aware of it.

You'll need to Google how to tell Docker not to do this; I don't remember off the top of my head, but I'm pretty sure it's a daemon.json config change plus a restart of the Docker daemon.
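A simpler mitigation, for what it's worth (this is a standard Docker feature, not something specific to this thread), is to bind the published port to the loopback interface so only local processes on the host, like Caddy, can reach it. Applied to the whoami compose file above, that would look like:

```yaml
services:
  whoami:
    image: traefik/whoami
    restart: unless-stopped
    ports:
      # Binding to 127.0.0.1 keeps the port reachable from the host
      # (so "reverse_proxy localhost:2400" still works) while Docker
      # no longer opens it to outside traffic.
      - "127.0.0.1:2400:80"
```

With that change, http://whoami.lapre.com:2400 should stop connecting from the outside, while Caddy's proxied HTTPS path keeps working unchanged.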

Alternatively, and this is my preferred way to handle it: put Caddy in Docker too, put it on the same network as your other compose services, and refer to those containers by their service name instead of localhost in your Caddyfile.
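As a rough sketch of that approach (the image tag, volume name, and mounts here are illustrative assumptions, not taken from this thread), the compose file could look something like:

```yaml
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data   # persist certificates across restarts

  whoami:
    image: traefik/whoami
    restart: unless-stopped
    # No "ports" section: whoami is only reachable on the compose
    # network, so nothing is opened on the host or in iptables.

volumes:
  caddy_data:
```

The site block then becomes "reverse_proxy whoami:80" instead of "reverse_proxy localhost:2400", since Docker's internal DNS resolves service names on the shared network.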

So, ultimately, this isn't Caddy leaking any info (as evidenced by the lack of a server header on those requests). It's just a Docker-specific gotcha relating to the firewall.


Thanks for the help. I need to go back and review Docker networking apparently! And maybe rework my config to add Caddy to Docker.

Thanks again!

