Reverse Proxy not working with https/:443

1. Caddy version (caddy version):

v2.4.3

2. How I run Caddy:

a. System environment:

Proxmox VM > Debian (10.10) > Docker

b. Command:

Docker Compose

c. Service/unit/compose file:

  caddy:
    image: caddy
    container_name: caddy
    hostname: caddy
    restart: unless-stopped
    networks:
      - base
    ports:
      - "80:80"
      - "443:443"
    environment:
      - MY_DOMAIN
    volumes:
      - ${USERDIR}/docker/Caddyfile:/etc/caddy/Caddyfile
      - ${USERDIR}/docker/caddy/data:/data
      - ${USERDIR}/docker/caddy/config:/config

d. My complete Caddyfile or JSON config:

{
        admin :2020
        email ...
        #acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
ha.horsboll.dk {
        reverse_proxy whoami:80
}

3. The problem I’m having:

For the life of me, I can’t get reverse_proxy to work over HTTPS. If I force port :80, everything works as intended.

The intent is to have Caddy reverse_proxy my subdomain to a different VM on the Proxmox host running Home Assistant (which sits on another subnet than the Docker containers). This works if I force port 80, but for now I’ve set up some test containers in Docker to rule out that bridging as a source of my errors (or Home Assistant requiring its own setup to accept the proxy - I’ll cross that bridge when I get there, I suppose ;).

4. Error messages and/or full log output:

docker logs caddy

gives me the following (which looks OK to me):

{"level":"info","ts":1624947039.7097056,"logger":"admin","msg":"admin endpoint started","address":"tcp/:2020","enforce_origin":false,"origins":[":2020"]}
{"level":"warn","ts":1624947039.7097206,"logger":"admin","msg":"admin endpoint on open interface; host checking disabled","address":"tcp/:2020"}
{"level":"info","ts":1624947039.7098718,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1624947039.7098897,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1624947039.7100322,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["adguard.horsboll.dk","ha.horsboll.dk"]}
{"level":"info","ts":1624947039.7100985,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000545500"}
{"level":"info","ts":1624947039.7135723,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc000544af0"}
{"level":"info","ts":1624947039.7137063,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1624947039.7137303,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1624947039.718417,"logger":"admin","msg":"stopped previous server","address":"tcp/:2020"}

5. What I already tried:

I’m fairly new to this home-hosting setup (and Linux), so I feel like it’s something obvious I’m missing. It has been a breeze setting things up to this point, but now I’ve spent almost a week’s worth of entire evenings trying to get HTTPS to work - and I just don’t have the technical know-how to troubleshoot it. I feel like I’ve tried everything, from setting up firewall rules (and completely disabling the firewall), to host-networked Docker containers, to different Linux distributions (Debian and Alpine), etc. I honestly can’t remember everything I’ve been through, but I feel like I’ve read every Caddy tutorial and forum post out there, along with the Caddy documentation.

Online port checkers tell me that :80 is open, but :443 is not. So it seems nothing is actively answering on the port, even though everything looks mapped correctly from Docker, and Caddy seems to be listening on it, as far as I can see with commands like “lsof -i -P -n | grep LISTEN” (which tells me Docker is listening):

docker-pr 13047     root    4u  IPv4  57437      0t0  TCP *:443 (LISTEN)
docker-pr 13053     root    4u  IPv6  55378      0t0  TCP *:443 (LISTEN)

Previously I had a Home Assistant/DuckDNS setup running on a RPi, so I know that the port isn’t blocked by my ISP and that my router forwards it correctly. The Docker VM has its own IP on my home network, so unless there’s something buried within this software approach to defining a VM as its own device, everything up to this point has been verified to work on different hardware.

6. Links to relevant resources:

I apologize if there’s something missing in the above. I’m not strong in the Linux-kung fu, but I will do my best to supply further information as requested.

Thanks in advance for any pointers you might have.

I think there’s something missing in this post. What exactly are you trying to do? What’s the config that you would like to work but isn’t working? I’m not seeing an actual problem in what you posted; everything seems to be working as intended.

I apologize. The specific error I’m getting in my browser is a 404 (Chrome writes “No such file or directory”) whenever I try serving the reverse_proxy over HTTPS/port :443.

If I add :80 after my domain, to force http, everything works.

This might not be an issue with Caddy - but as written, I’m not able to troubleshoot any further. For all I can discern, it might be an issue with serving up the certificate or something like that, since HTTP works. It might also be a firewall issue, although I’ve tried disabling that through Proxmox, but I’m at my wits’ end and thought I might as well try asking someone smarter than myself.

Thank you for your time.

Do you mean whoami:443? You didn’t post your config, so I can only make assumptions about what you mean. Please be specific.

Your upstream app is what would be returning the 404 response. I can think of a couple of possible reasons for that, based on the above assumption.

See the note here: reverse_proxy (Caddyfile directive) — Caddy Documentation. The Host header of the original request will be retained. This means that if your upstream is expecting the hostname to be whoami to serve what you want, then the request will fail. To solve this, you can override the Host header with the header_up option of the reverse_proxy handler. For example, to use the upstream address as the Host header:

header_up Host {http.reverse_proxy.upstream.hostport}

Also, if your upstream serves a certificate that Caddy doesn’t trust (i.e. one that isn’t in the trust store of the Caddy container), then the request would fail as well. Either make sure that the certificate it serves is trusted, or set the tls_insecure_skip_verify option (but I very, very strongly recommend against doing this; it throws away all the security that TLS provides – if you’re going to turn on tls_insecure_skip_verify, then you might as well just use :80 instead).
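For what it’s worth, here’s roughly how those two points would fit together in a Caddyfile. This is only a sketch, assuming a hypothetical upstream that serves HTTPS on whoami:443 – you don’t need any of it if your upstream only speaks plain HTTP:

ha.horsboll.dk {
        reverse_proxy whoami:443 {
                # Send the upstream's own address as the Host header
                header_up Host {http.reverse_proxy.upstream.hostport}

                # Connect to the upstream over TLS; skipping certificate
                # verification is strongly discouraged, as noted above
                transport http {
                        tls
                        tls_insecure_skip_verify
                }
        }
}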

I mean when accessing “ha.horsboll.dk” in the browser from my main PC. The Proxmox > Docker > Caddy setup is running on a mini PC, and when I access ha.horsboll.dk from my main PC, I get the 404 (unless I force port 80).

What config do you want to see? The docker-compose?

I read about the Host header, but honestly I do not know enough about the inner workings of the flow to understand whether or not that is my issue (which is why I’m here, hat in hand).

Well, this sounds like something I’ve completely missed in all of what I’ve read. It was my understanding that Caddy handles all of the HTTPS/SSL workings - hence only Caddy needs access to the certificates it pulls from Let’s Encrypt? Which is also why I can (edit: or should be able to) reverse_proxy to port :80 on my apps, since Caddy serves as a middleman and handles the HTTPS/SSL part of the traffic.

At least that is what I get from reading guides like this one (written for v1 I assume): Using Caddy as a reverse proxy in a home network
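In other words, my mental model was that a minimal config like the one below (with whoami just standing in for the real app) should be all that’s needed:

ha.horsboll.dk {
        # Caddy obtains the certificate and terminates HTTPS here,
        # then forwards plain HTTP to the app on port 80
        reverse_proxy whoami:80
}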

I once again apologize for maybe missing a key part of the setup. When reading the guides and the documentation for what I wanted to achieve (reverse proxying to different VMs and Docker containers), everything seemed easy enough to understand and set up – and it was, up until now – but maybe there’s a layer of basic understanding that I’ve missed entirely.

My assumption is/was that it has to do with serving the certificates, since port 80 works, but that might just be the Host header not being correct, as you mentioned. I will try looking into that.

Thank you once again for your input.

Okay, in that case you can ignore the points I made above. Those were about reverse proxying to an upstream serving HTTPS, not about connecting to Caddy over HTTPS.

Getting a 404 is suspect. Caddy should not respond with a 404 in that case.

Browsers are often unreliable for debugging these sorts of things. Try running curl -v https://ha.horsboll.dk and share the output. The headers in the response might give us a hint at what’s going on.

I’m going to guess that port 443 is forwarded to a different VM/machine on your network instead of to Caddy, or maybe your ISP is intercepting these requests (some ISPs are known to hijack port 443 for dumb reasons). But there’s not enough evidence to know either way without more digging.

Okay, so by this you mean that if I instead had a running application serving its own certificate, that certificate would have to be installed/verified somehow by Caddy?

When running this command on the VM itself (through SSH) I get:


* Expire in 0 ms for 6 (transfer 0x5605bc97bc10)
...
...
...
* Expire in 200 ms for 1 (transfer 0x5605bc97bc10)
*   Trying 176.23.151.123...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5605bc97bc10)
* Connected to ha.horsboll.dk (176.23.151.123) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, bad certificate (554):
* SSL certificate problem: EE certificate key too weak
* Closing connection 0
curl: (60) SSL certificate problem: EE certificate key too weak
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

There are a bunch of files in the /etc/ssl/certs folder, but to my (limited) knowledge that’s not where Caddy saves the Let’s Encrypt certificates? As you can see in my docker-compose file, I set up 3 volumes: one for the Caddyfile, one for the data folder, and one for the config folder.

So my Caddy certs are stored here, as far as I can tell: ~/docker/caddy/data/caddy/certificates/, where I have the following two entries: acme-staging-v02.api.letsencrypt.org-directory (from my testing) and acme-v02.api.letsencrypt.org-directory.

Have I missed something here in my setup?

Try curl -kIL https://ha.horsboll.dk for us. We’re going to ignore the validation issue and confirm we’re actually talking to Caddy at all (I have a hunch we aren’t).

No, that’s where the VM gets its CA certificates (i.e. certificate authorities; the certs it uses to check that another cert is publicly valid).

This is why I think we might not be talking to Caddy. I don’t think it can requisition a certificate that is weak enough to trigger this.

With that I get the following:

HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json
Cache-Control: no-cache
Expires: 0

I don’t think whoami (assuming it’s one of the popular whoami containers on the Docker registry) should be sending Content-Type: application/json, Connection: keep-alive, or Expires headers.

Caddy would also append a Server header if that were in fact Caddy. I can see from the Caddyfile you provided that you haven’t explicitly stripped this header, so it should be present.
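(For reference, a Caddy site would only omit that header if the config removed it explicitly, e.g. with something like this hypothetical snippet – which your Caddyfile doesn’t contain:)

ha.horsboll.dk {
        # Hypothetical example: explicitly strip Caddy's Server header
        header -Server
        reverse_proxy whoami:80
}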

All of this leads me to believe that however your network environment is configured, you aren’t actually accessing Caddy when you browse to ha.horsboll.dk, so no amount of troubleshooting Caddy will solve the issue. You’ll need to figure out where those packets are going instead and how to fix that.

That makes sense. The only thing that confuses me is that if I force port 80, everything works. So somehow the port forwarding of port 443 must be the issue.

This is weird, since port 443 was previously forwarded to an RPi without issue (that port forward has been deleted, of course), but I understand that if we’re not talking to Caddy, you have no chance of helping me.

I will have to dive deeper into Proxmox. The issue must be somewhere with the VM.

Thank you for all the help so far.
