Caddy as reverse proxy for PostgreSQL in Docker

1. The problem I’m having:

I have Caddy 2.9, compiled with xcaddy to include the layer4 module, in one Docker container, and PostgreSQL 17 in another container on the same machine. I’m trying to reverse proxy traffic coming in on the subdomain data.franzmuenzner.com to the Postgres container. When I publish port 5432 on the Postgres container and connect to it directly, I can get in, but as soon as I route through Caddy, with both containers on a custom Docker network, the psql command just hangs and eventually times out.

2. Error messages and/or full log output:

# CADDY DOCKER LOG
Attaching to caddy
caddy  | {"level":"info","ts":1741017220.100555,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
caddy  | {"level":"info","ts":1741017220.1018128,"msg":"adapted config to JSON","adapter":"caddyfile"}
caddy  | {"level":"warn","ts":1741017220.1018238,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
caddy  | {"level":"info","ts":1741017220.102782,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
caddy  | {"level":"info","ts":1741017220.102902,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy  | {"level":"info","ts":1741017220.1029184,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy  | {"level":"info","ts":1741017220.1038644,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
caddy  | {"level":"info","ts":1741017220.1041534,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000192080"}
caddy  | {"level":"info","ts":1741017220.104218,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy  | {"level":"warn","ts":1741017220.1043134,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
caddy  | {"level":"warn","ts":1741017220.1043332,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
caddy  | {"level":"info","ts":1741017220.1043365,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy  | {"level":"info","ts":1741017220.1043396,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["franzmuenzner.com","data.franzmuenzner.com","www.franzmuenzner.com"]}
caddy  | {"level":"info","ts":1741017220.1057901,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy  | {"level":"info","ts":1741017220.1058023,"msg":"serving initial configuration"}
caddy  | {"level":"info","ts":1741017220.1063685,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"dd7f0f54-9d67-4f26-b7f6-40131cff6122","try_again":1741103620.1063673,"try_again_in":86399.99999974}
caddy  | {"level":"info","ts":1741017220.1064937,"logger":"tls","msg":"finished cleaning storage units"}
# POSTGRES DOCKER LOG
Attaching to postgres
postgres  | The files belonging to this database system will be owned by user "postgres".
postgres  | This user must also own the server process.
postgres  | 
postgres  | The database cluster will be initialized with locale "en_US.utf8".
postgres  | The default database encoding has accordingly been set to "UTF8".
postgres  | The default text search configuration will be set to "english".
postgres  | 
postgres  | Data page checksums are disabled.
postgres  | 
postgres  | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres  | creating subdirectories ... ok
postgres  | selecting dynamic shared memory implementation ... posix
postgres  | selecting default "max_connections" ... 100
postgres  | selecting default "shared_buffers" ... 128MB
postgres  | selecting default time zone ... Etc/UTC
postgres  | creating configuration files ... ok
postgres  | running bootstrap script ... ok
postgres  | performing post-bootstrap initialization ... ok
postgres  | initdb: warning: enabling "trust" authentication for local connections
postgres  | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
postgres  | syncing data to disk ... ok
postgres  | 
postgres  | 
postgres  | Success. You can now start the database server using:
postgres  | 
postgres  |     pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres  | 
postgres  | waiting for server to start....2025-03-03 15:51:21.289 UTC [49] LOG:  starting PostgreSQL 17.4 (Debian 17.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres  | 2025-03-03 15:51:21.289 UTC [49] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres  | 2025-03-03 15:51:21.293 UTC [52] LOG:  database system was shut down at 2025-03-03 15:51:21 UTC
postgres  | 2025-03-03 15:51:21.299 UTC [49] LOG:  database system is ready to accept connections
postgres  |  done
postgres  | server started
postgres  | 
postgres  | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres  | 
postgres  | waiting for server to shut down....2025-03-03 15:51:21.425 UTC [49] LOG:  received fast shutdown request
postgres  | 2025-03-03 15:51:21.427 UTC [49] LOG:  aborting any active transactions
postgres  | 2025-03-03 15:51:21.429 UTC [49] LOG:  background worker "logical replication launcher" (PID 55) exited with exit code 1
postgres  | 2025-03-03 15:51:21.431 UTC [50] LOG:  shutting down
postgres  | 2025-03-03 15:51:21.432 UTC [50] LOG:  checkpoint starting: shutdown immediate
postgres  | 2025-03-03 15:51:21.435 UTC [50] LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.004 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=0/14E4FA0, redo lsn=0/14E4FA0
postgres  | 2025-03-03 15:51:21.440 UTC [49] LOG:  database system is shut down
postgres  |  done
postgres  | server stopped
postgres  | 
postgres  | PostgreSQL init process complete; ready for start up.
postgres  | 
postgres  | 2025-03-03 15:51:21.552 UTC [1] LOG:  starting PostgreSQL 17.4 (Debian 17.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres  | 2025-03-03 15:51:21.552 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres  | 2025-03-03 15:51:21.552 UTC [1] LOG:  listening on IPv6 address "::", port 5432
postgres  | 2025-03-03 15:51:21.556 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres  | 2025-03-03 15:51:21.560 UTC [63] LOG:  database system was shut down at 2025-03-03 15:51:21 UTC
postgres  | 2025-03-03 15:51:21.566 UTC [1] LOG:  database system is ready to accept connections

3. Caddy version:

v2.9.1 h1:OEYiZ7DbCzAWVb6TNEkjRcSCRGHVoZsJinoDR/n9oaY=

3.1 PostgreSQL version:

PostgreSQL 17.4 (Debian 17.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit

4. How I installed and ran Caddy:

a. System environment:

Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-54-generic x86_64)

b. Command:

psql "postgresql://####:####@data.franzmuenzner.com:5432/postgres?sslmode=require&sslnegotiation=direct"

(credentials censored)

c. Service/unit/compose file:

# POSTGRES COMPOSE FILE
services:
  postgres:
    image: postgres:17
    container_name: postgres
    restart: unless-stopped
    shm_size: 128mb
    environment:
      POSTGRES_USER: ####
      POSTGRES_PASSWORD: ####
    volumes:
      - ./data:/var/lib/postgresql/data
    expose:
      - "5432"
    networks:
      - proxy

volumes:
  postgres:

networks:
  proxy:
    name: proxy
    external: true
# CADDY COMPOSE FILE
services:
  caddy:
    build: ./build
    image: jurifm/caddy
    container_name: caddy
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
      - "5432:5432"
    volumes:
      - $PWD/conf:/etc/caddy
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - proxy

volumes:
  caddy_data:
  caddy_config:

networks:
  proxy:
    external: true

d. My complete Caddy config:

{
    servers {
        listener_wrappers {
            layer4 {
                @postgres tls sni data.franzmuenzner.com
                route @postgres {
                    proxy postgres:5432
                }
            }
            tls
        }
    }
}

franzmuenzner.com {
        root * /srv
        file_server
}

www.franzmuenzner.com {
        redir "https://franzmuenzner.com/" permanent
}

5. Links to relevant resources:

https://hub.docker.com/_/caddy
https://hub.docker.com/_/postgres

I’m not seeing anything useful in the Caddy logs. Make sure you add the debug global option.

I’m not a PostgreSQL guru, but I’m not seeing anything indicative of a problem in your current logs. Your proxy Docker network is external, so it would be helpful if we could see its configuration.

Maybe I’m misunderstanding something about layer4, but I don’t think port 5432 should be published, especially not on Caddy’s container. Did you try it without publishing it? If the proxy network is a bridge, you shouldn’t need to publish it anyway.

Well, I just created the network with docker network create proxy; there’s no special config. If I want to proxy incoming traffic from outside the server on port 5432 to the Postgres container, don’t I need to publish the port so Caddy can listen on it?

I can only imagine it being a problem with Postgres itself (it used a custom SSL/TLS negotiation handshake before version 17 introduced direct TLS negotiation), but in the GitHub issue I linked, someone posted a supposedly working config.
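To illustrate what I mean about the handshake (a sketch, not something from my setup): in the legacy flow, libpq opens every SSL connection with a tiny SSLRequest packet instead of a TLS ClientHello, so an SNI-based matcher has nothing to match on until the server replies.

```python
import struct

# Legacy PostgreSQL SSL negotiation: the client first sends an 8-byte
# SSLRequest packet (length = 8, magic code 80877103) and waits for the
# server's 'S' or 'N' byte before starting TLS. A TLS router like
# Caddy's layer4 "tls sni" matcher never sees a ClientHello in that
# flow, so it can't match the hostname.
ssl_request = struct.pack("!ii", 8, 80877103)
print(ssl_request.hex())  # 0000000804d2162f

# With sslnegotiation=direct (PostgreSQL/libpq 17+), the client skips
# this packet and opens with a standard TLS ClientHello instead, which
# carries the SNI that layer4 matches on.
```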

Anyway, here’s the Caddy log with debug on, but I don’t see any new information:

Attaching to caddy
caddy  | {"level":"info","ts":1741091291.6330407,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
caddy  | {"level":"info","ts":1741091291.6341875,"msg":"adapted config to JSON","adapter":"caddyfile"}
caddy  | {"level":"warn","ts":1741091291.6341999,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
caddy  | {"level":"info","ts":1741091291.6352012,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
caddy  | {"level":"info","ts":1741091291.6353812,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy  | {"level":"info","ts":1741091291.635397,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy  | {"level":"debug","ts":1741091291.6354165,"logger":"http.auto_https","msg":"adjusted config","tls":{"automation":{"policies":[{}]}},"http":{"servers":{"remaining_auto_https_redirects":{"listen":[":80"],"routes":[{},{}]},"srv0":{"listen":[":443"],"listener_wrappers":[{"routes":[{"handle":[{"handler":"proxy","upstreams":[{"dial":["postgres:5432"]}]}],"match":[{"tls":{"sni":["data.franzmuenzner.com"]}}]}],"wrapper":"layer4"},{"wrapper":"tls"}],"routes":[{"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"static_response","headers":{"Location":["https://franzmuenzner.com/"]},"status_code":301}]}]}],"terminal":true},{"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"vars","root":"/srv"},{"handler":"file_server","hide":["/etc/caddy/Caddyfile"]}]}]}],"terminal":true}],"tls_connection_policies":[{}],"automatic_https":{}}}}}
caddy  | {"level":"info","ts":1741091291.6356335,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0003fcf80"}
caddy  | {"level":"debug","ts":1741091291.6358824,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":false}
caddy  | {"level":"info","ts":1741091291.635921,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
caddy  | {"level":"info","ts":1741091291.6361973,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy  | {"level":"debug","ts":1741091291.636257,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
caddy  | {"level":"warn","ts":1741091291.6362672,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
caddy  | {"level":"warn","ts":1741091291.6362698,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
caddy  | {"level":"info","ts":1741091291.636272,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy  | {"level":"info","ts":1741091291.6362746,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["www.franzmuenzner.com","franzmuenzner.com","data.franzmuenzner.com"]}
caddy  | {"level":"debug","ts":1741091291.6374419,"logger":"tls.cache","msg":"added certificate to cache","subjects":["www.franzmuenzner.com"],"expiration":1748627089,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"0f94cb0bcde205c6857d16f17acb44f2e72c7482c9b8c11a9d8793a667e56320","cache_size":1,"cache_capacity":10000}
caddy  | {"level":"debug","ts":1741091291.637471,"logger":"events","msg":"event","name":"cached_managed_cert","id":"3ae526fa-93fb-4474-8d0f-edc2c36251a2","origin":"tls","data":{"sans":["www.franzmuenzner.com"]}}
caddy  | {"level":"debug","ts":1741091291.6378398,"logger":"tls.cache","msg":"added certificate to cache","subjects":["franzmuenzner.com"],"expiration":1748622549,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"418a0c6014d3b1dbbd85d5a396ed09579b5db7922f1e73a4ce8fc980f0c078e1","cache_size":2,"cache_capacity":10000}
caddy  | {"level":"debug","ts":1741091291.6378675,"logger":"events","msg":"event","name":"cached_managed_cert","id":"74663a00-d6ca-4174-bba8-c32912619eaf","origin":"tls","data":{"sans":["franzmuenzner.com"]}}
caddy  | {"level":"debug","ts":1741091291.6382515,"logger":"tls.cache","msg":"added certificate to cache","subjects":["data.franzmuenzner.com"],"expiration":1748785330,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"ac140bfca35d9924afb38074b5b7394e3d9a00445ae740f3fc829a6c15a8de17","cache_size":3,"cache_capacity":10000}
caddy  | {"level":"debug","ts":1741091291.638314,"logger":"events","msg":"event","name":"cached_managed_cert","id":"d6ecec18-aab6-4184-81e5-d7d9766dcc8c","origin":"tls","data":{"sans":["data.franzmuenzner.com"]}}
caddy  | {"level":"info","ts":1741091291.6385577,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy  | {"level":"info","ts":1741091291.6385715,"msg":"serving initial configuration"}
caddy  | {"level":"info","ts":1741091291.6393154,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"dd7f0f54-9d67-4f26-b7f6-40131cff6122","try_again":1741177691.6393142,"try_again_in":86399.99999969}
caddy  | {"level":"info","ts":1741091291.6393998,"logger":"tls","msg":"finished cleaning storage units"}

I realised that connecting on port 5432 was the wrong approach, since Caddy’s layer4 listener wrapper sits on port 443, and tried the connection again on port 443. This time it didn’t time out, but printed the following output:

psql error:

psql: error: connection to server at "data.franzmuenzner.com" (194.164.206.244), port 443 failed: SSL SYSCALL error: EOF detected

caddy logs:

{"level":"debug","ts":1741109199.0379655,"logger":"caddy.listeners.layer4","msg":"matching","remote":"80.142.12.5:54361","error":"consumed all prefetched bytes","matcher":"layer4.matchers.tls","matched":false}
{"level":"debug","ts":1741109199.0610607,"logger":"caddy.listeners.layer4","msg":"prefetched","remote":"80.142.12.5:54361","bytes":350}
{"level":"debug","ts":1741109199.0611362,"logger":"layer4.matchers.tls","msg":"matched","remote":"80.142.12.5:54361","server_name":"data.franzmuenzner.com"}
{"level":"debug","ts":1741109199.0611463,"logger":"caddy.listeners.layer4","msg":"matching","remote":"80.142.12.5:54361","matcher":"layer4.matchers.tls","matched":true}
{"level":"debug","ts":1741109199.0619378,"logger":"layer4.handlers.proxy","msg":"dial upstream","remote":"80.142.12.5:54361","upstream":"postgres:5432"}
{"level":"debug","ts":1741109199.0749602,"logger":"caddy.listeners.layer4","msg":"connection stats","remote":"80.142.12.5:54361","read":350,"written":0,"duration":0.036997513}

Also, here is the Docker network info, per docker network inspect proxy:

[
    {
        "Name": "proxy",
        "Id": "0a415ccc223c7843c81fa25c37d370a62ec0cd3f0fad4681628359cee6e5a75e",
        "Created": "2025-03-03T15:11:16.710371468Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "e7143315b63ebb840ed96ffbb76e229409ff27cfb8b0e642c2d01f5486584838": {
                "Name": "caddy",
                "EndpointID": "af0501c7be5bac82bed08346c1172d53cb662c663f8f074e05e5727c5d714f60",
                "MacAddress": "b2:7c:b1:95:ad:93",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

I figured it out! Another config error had snuck in while I was trying different things to get it to work. For anyone wondering how to get this working: use my initial setup, but without publishing port 5432 on Caddy’s container, and with port 443 in the connection string.
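Concretely (a sketch of the fix, based on the compose file from my first post), Caddy’s ports section ends up as:

```yaml
# Caddy compose file, ports section after the fix: 5432 is no longer
# published, because PostgreSQL clients now connect on 443 and Caddy's
# layer4 wrapper routes them by SNI to postgres:5432 internally.
ports:
  - "80:80"
  - "443:443"
  - "443:443/udp"
```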


Hey, could you post your complete connection string, please? Thanks!

Well, it’s just the one under “Command” in the first post. With the port replaced by 443 and my URL replaced with a placeholder, it would be:

psql "postgresql://####:####@data.example.com:443/postgres?sslmode=require&sslnegotiation=direct"

This assumes you’re connecting to the database postgres; replace the hashtags with your Postgres username and password. It doesn’t work with IntelliJ, though, for reasons I couldn’t figure out.
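For reference, the pieces of that URI can be sketched with Python’s urllib.parse (placeholder credentials and hostname, not the real ones):

```python
from urllib.parse import urlsplit, parse_qs

# Same shape as the working connection string above, with placeholders.
uri = ("postgresql://user:secret@data.example.com:443/postgres"
       "?sslmode=require&sslnegotiation=direct")
parts = urlsplit(uri)
print(parts.hostname)  # data.example.com -- the SNI layer4 matches on
print(parts.port)      # 443 -- Caddy's TLS port, not Postgres's 5432
print(parts.path)      # /postgres -- the database name

q = parse_qs(parts.query)
# sslmode=require forces TLS; sslnegotiation=direct (libpq 17+) makes
# psql open with a plain TLS ClientHello so the SNI matcher works.
print(q["sslnegotiation"])  # ['direct']
```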