Docker Caddy Porkbun reverse-proxy SSL issues

1. The problem I’m having:

Direct local access to Caddy fails with a TLS handshake error, and access via my domain times out connecting to port 443.

$ curl -vL <local IP>
* Trying <local IP>:80...
* Connected to <local IP> (<local IP>) port 80
> GET / HTTP/1.1
> Host: <local IP>
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< Connection: close
< Location: https://<local IP>/
< Server: Caddy
< Date: Mon, 13 Jan 2025 02:59:19 GMT
< Content-Length: 0
<
* Closing connection
* Clear auth, redirects to port from 80 to 443
* Issue another request to this URL: 'https://<local IP>/'
* Trying <local IP>:443...
* Connected to <local IP> (<local IP>) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS alert, internal error (592):
* OpenSSL/3.0.13: error:0A000438:SSL routines::tlsv1 alert internal error
* Closing connection
curl: (35) OpenSSL/3.0.13: error:0A000438:SSL routines::tlsv1 alert internal error
$ curl -vL actual.<domain>
* Host actual.<domain>:80 was resolved.
* IPv6: (none)
* IPv4: <public IP>
* Trying <public IP>:80...
* Connected to actual.<domain> (<public IP>) port 80
> GET / HTTP/1.1
> Host: actual.<domain>
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< Connection: close
< Location: https://actual.<domain>/
< Server: Caddy
< Date: Mon, 13 Jan 2025 02:44:19 GMT
< Content-Length: 0
<
* Closing connection
* Clear auth, redirects to port from 80 to 443
* Issue another request to this URL: 'https://actual.<domain>/'
* Host actual.<domain>:443 was resolved.
* IPv6: (none)
* IPv4: <public IP>
* Trying <public IP>:443...
* connect to <public IP> port 443 from <local IP> port 60476 failed: Connection timed out
* Failed to connect to actual.<domain> port 443 after 135770 ms: Couldn't connect to server
* Closing connection
curl: (28) Failed to connect to actual.<domain> port 443 after 135770 ms: Couldn't connect to server

2. Error messages and/or full log output:

$ docker logs caddy
{"level":"info","ts":1736738805.3893967,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1736738805.3934267,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"info","ts":1736738805.3968825,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1736738805.3974032,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x4000497280"}
{"level":"info","ts":1736738805.3976498,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1736738805.3977003,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"debug","ts":1736738805.3977978,"logger":"http.auto_https","msg":"adjusted config","tls":{"automation":{"policies":[{"subjects":["actual.<domain>"]},{}]}},"http":{"servers":{"remaining_auto_https_redirects":{"listen":[":80"],"routes":[{},{}],"logs":{"logger_names":{"actual.<domain>":["log0"]}}},"srv0":{"listen":[":443"],"routes":[{"handle":[{"handler":"subroute","routes":[{"handle":[{"encodings":{"gzip":{},"zstd":{}},"handler":"encode","prefer":["gzip","zstd"]},{"handler":"reverse_proxy","upstreams":[{"dial":"actual_server:5006"}]}]}]}],"terminal":true}],"tls_connection_policies":[{}],"automatic_https":{},"logs":{"logger_names":{"actual.<domain>":["log0"]}}}}}}
{"level":"debug","ts":1736738805.3999841,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
{"level":"warn","ts":1736738805.4001048,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"warn","ts":1736738805.4001248,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"info","ts":1736738805.4001389,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"debug","ts":1736738805.40033,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":false}
{"level":"info","ts":1736738805.400494,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1736738805.4016218,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1736738805.401682,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["actual.<domain>"]}
{"level":"debug","ts":1736738805.4035254,"logger":"tls.cache","msg":"added certificate to cache","subjects":["actual.<domain>"],"expiration":1744486011,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"12b777d83d26aec6f3879816073c8bcb6e5b6dade4ee3a0ff754de89004be144","cache_size":1,"cache_capacity":10000}
{"level":"debug","ts":1736738805.4036276,"logger":"events","msg":"event","name":"cached_managed_cert","id":"f315f818-daae-4848-86de-c0265b8f81cf","origin":"tls","data":{"sans":["actual.<domain>"]}}
{"level":"info","ts":1736738805.4042442,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1736738805.404335,"msg":"serving initial configuration"}
{"level":"info","ts":1736738805.406132,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"cd1c4966-0822-4bee-9b19-c5f76fe6cd36","try_again":1736825205.4061272,"try_again_in":86399.999998185}
{"level":"info","ts":1736738805.4064026,"logger":"tls","msg":"finished cleaning storage units"}
$ docker exec -it caddy cat /var/log/caddy/access.log
{"level":"info","ts":1736738857.095173,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"<public IP>","remote_port":"35778","client_ip":"<public IP>","proto":"HTTP/1.1","method":"GET","host":"actual.<domain>","uri":"/","headers":{"User-Agent":["curl/8.5.0"],"Accept":["*/*"]}},"bytes_read":0,"user_id":"","duration":0.000048796,"size":0,"status":308,"resp_headers":{"Server":["Caddy"],"Connection":["close"],"Location":["https://actual.<domain>/"],"Content-Type":[]}}

3. Caddy version:

v2.9.1 h1:OEYiZ7DbCzAWVb6TNEkjRcSCRGHVoZsJinoDR/n9oaY=

4. How I installed and ran Caddy:

a. System environment:

Raspberry Pi 4B running Ubuntu Server 24.04.1 LTS.
Docker version 27.2.0, build 3ab4256
Ran docker build -t caddy-porkbun . with the following Dockerfile:

FROM caddy:builder AS builder
RUN xcaddy build \
  --with github.com/caddy-dns/porkbun
FROM caddy:alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

On Porkbun I have an A record for actual.<domain> pointing to my public IP. I have configured port forwarding for 80 and 443 with my router.
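For what it's worth, the record and the forwarded ports can be sanity-checked from outside the network (e.g. over a phone hotspot); the domain is redacted here, so these are sketches rather than the exact commands I ran:

```shell
# Confirm the A record resolves to the expected public IP
dig +short actual.<domain> A

# Check that the forwarded ports accept connections from outside
nc -vz -w 5 actual.<domain> 80
nc -vz -w 5 actual.<domain> 443
```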

b. Command:

docker compose up -d

c. Service/unit/compose file:

services:
  actual_server:
    container_name: actual_server
    image: docker.io/actualbudget/actual-server:latest
    ports:
      - '5006:5006'
    environment:
      - ACTUAL_HTTPS_KEY=/data/selfhost.key
      - ACTUAL_HTTPS_CERT=/data/selfhost.crt
    volumes:
      - ./data:/data
    restart: unless-stopped
  caddy:
    container_name: caddy
    image: caddy-porkbun
    cap_add:
      - NET_ADMIN
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data
      - ./caddy/config:/config
    ports:
      - '80:80'
      - '443:443'
    restart: unless-stopped

(edited to include the last 4 lines that were missed when originally copied)

I added the NET_ADMIN capability more recently; I am not sure if it is needed. (As a side note, I'd also like to use Caddy to enable/enforce HTTPS locally for actual_server instead of the self-signed certificates I created manually, but I have no idea whether that is possible or how to configure it.)
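Regarding that side note, a common pattern (just a sketch; I haven't verified it against this setup) is to let Caddy terminate TLS for everything and have actual_server speak plain HTTP inside the Docker network, i.e. drop the ACTUAL_HTTPS_KEY/ACTUAL_HTTPS_CERT variables and keep the existing reverse_proxy actual_server:5006 as-is. Alternatively, if actual_server must keep serving HTTPS with the self-signed certificate, Caddy can be told to proxy over HTTPS and skip verification:

```
reverse_proxy https://actual_server:5006 {
        transport http {
                tls_insecure_skip_verify
        }
}
```

The first option is generally preferred, since traffic between the two containers stays on Docker's internal network anyway.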

d. My complete Caddy config:

{
        debug
        acme_dns porkbun {
                api_key <redacted>
                api_secret_key <redacted>
        }
}
actual.<domain> {
        encode gzip zstd
        reverse_proxy actual_server:5006
        log {
                output file /var/log/caddy/access.log
                level debug
        }
}

I added the log block recently. It hasn’t shown much (contents pasted above).

5. Links to relevant resources:

Thanks for any guidance you can provide.

Port 443 isn’t reachable from outside your network. You should double-check the port forwarding rules in your router.

Thanks for the response. My router only has 2 forwarding rules. The table looks like:

Application Name   Public    Private   Protocol   Local IP Address   Remote IP Address   Status
HTTP               80~80     80~80     TCP        <local IP>         Any                 ON
HTTPS              443~443   443~443   TCP        <local IP>         Any                 ON

I’m also not sure that a router misconfiguration explains why curl -vL <local IP> (also curl -vL localhost when ssh’d to the Pi) fails.

What about the firewall?

Can you be more specific? I’m not very familiar with firewalls. Would this be on my router, on the raspberry pi (Ubuntu server 24.04.1 LTS), or inside the docker container?

For the latter 2, do you have pointers to documentation?

Also, as mentioned before, it really seems to me that there’s an issue outside of my router since curl -vL localhost fails. Can you please explain why your focus is on the externally-facing router configuration when local traffic isn’t working either?

Oh, you need to expose ports 80 and 443 here.

Can you add -k to your curl command?

Can you also try what the following gives you?

openssl s_client -connect <local IP | localhost>:443
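Note that if s_client doesn’t present an SNI matching a configured site, Caddy will refuse the handshake with exactly this kind of internal-error alert, so it’s worth also trying it with the hostname set (a sketch, with your redacted domain substituted in):

```shell
openssl s_client -connect localhost:443 -servername actual.<domain>
```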

Ah, drat, apparently when I copied my docker-compose.yml I didn’t get the last few lines. This is what it actually looked like. Sorry.

  caddy:
    container_name: caddy
    image: caddy-porkbun
    cap_add:
      - NET_ADMIN
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data
      - ./caddy/config:/config
    ports:
      - '80:80'
      - '443:443'
    restart: unless-stopped

(Side note, the site made me wait 35 min to make this reply, but I also edited the original post. This rate-limit is also why I’m replying to both comments in one go.)

$ openssl s_client -connect localhost:443
CONNECTED(00000003)
2060778AFFFF0000:error:0A000438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:../ssl/record/rec_layer_s3.c:1599:SSL alert number 80
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 293 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
$ curl -kvL localhost
* Host localhost:80 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:80...
* Connected to localhost (::1) port 80
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/8.5.0
> Accept: */*
> 
< HTTP/1.1 308 Permanent Redirect
< Connection: close
< Location: https://localhost/
< Server: Caddy
< Date: Mon, 13 Jan 2025 15:13:48 GMT
< Content-Length: 0
< 
* Closing connection
* Clear auth, redirects to port from 80 to 443
* Issue another request to this URL: 'https://localhost/'
* Host localhost:443 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:443...
* Connected to localhost (::1) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, internal error (592):
* OpenSSL/3.0.13: error:0A000438:SSL routines::tlsv1 alert internal error
* Closing connection
curl: (35) OpenSSL/3.0.13: error:0A000438:SSL routines::tlsv1 alert internal error

I tried a comparable config in a cloud VM and it worked fine, so I’m going to conclude it’s something wrong with my router or home network.
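(Closing note for anyone landing here later: the local failures above are most likely expected behavior rather than part of the problem. Requests to https://localhost/ or https://<local IP>/ present "localhost" or no SNI at all, and this Caddy instance only holds a certificate for actual.<domain>, so it aborts the handshake with a TLS internal-error alert. To exercise the local instance with the real hostname, curl’s --resolve flag can pin the name to the local address; a sketch using the redacted values:

```shell
# Pin actual.<domain> to the local Caddy instance so the request
# carries the correct Host header and SNI
curl -vL --resolve 'actual.<domain>:443:<local IP>' https://actual.<domain>/
```

The external timeout on port 443 remains a separate, router-side question.)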