Synapse server and coturn server with Tailscale (TSDproxy) and Caddy as a sidecar


Title: Self-hosting with Tailscale (TSDproxy) and Caddy as a sidecar
Description: Achieving self-hosted Synapse and coturn with TSDproxy and Caddy (layer4) as a sidecar

[IMPORTANT]

I came across various TSDproxy documents and setups that different people have published online for getting secure access to self-hosted services from a homelab. Below is my setup of Synapse and coturn hosted on an Ubuntu server at home behind CGNAT. I tried to use TSDproxy instead of Tailscale with Caddy as a sidecar.

[PROBLEMS]

The problem I face with this setup is that my Caddy container is not picking up the Caddyfile. Whether it is a problem on Caddy's side, I don't know. I use the same Caddyfile with Tailscale and Caddy as a sidecar and it works there. But calls are not working, both with the plain Caddy container and with additional modules, i.e. layer4.

Coming straight to the setup.

Docker-compose.yaml

services:
  synapse:
    container_name: synapse
    image: docker.io/matrixdotorg/synapse:latest
    restart: unless-stopped
    environment:
      - SYNAPSE_CONFIG_PATH=/data/homeserver.yaml
      - UID=1000
      - GID=1000
    volumes:
      - /home/ubuntu/synapse:/data
    depends_on:
      - synapse-db
    ports:
      - 8008:8008
      - 8448:8448
    networks:
      - synapse
    labels:
      - tsdproxy.enable=true
      - tsdproxy.name=matrix
      - tsdproxy.ephemeral=false
      - tsdproxy.container_port=8008 

  synapse-db:
    image: docker.io/postgres:15-alpine
    container_name: synapse-db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=xyz....
      - POSTGRES_PASSWORD=xyz....
      - POSTGRES_DB=synapse
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8 --lc-collate=C --lc-ctype=C
    volumes:
      - /home/ubuntu/synapse-db/schemas:/var/lib/postgresql/data
    networks:
      - synapse

  coturn:
    image: instrumentisto/coturn
    container_name: coturn
    restart: unless-stopped
    environment:
      - TURN_SERVER_PORT=3478
      - TURN_SERVER_PORT_TLS=5349
      - TURN_SERVER_REALM=coturn.tailnet.ts.net.
      - TURN_SERVER_FQDN=coturn.tailnet.ts.net.
      - TURN_SERVER_AUTH_SECRET=xyz....
      - TURN_SERVER_MIN_PORT=49152
      - TURN_SERVER_MAX_PORT=65535
    ports:
      - "3478:3478"
      - "3478:3478/udp"
      - "3479:3479"
      - "3479:3479/udp"
      - "5349:5349"
      - "5349:5349/udp"
    networks:
      - synapse
    labels:
      - tsdproxy.enable=true
      - tsdproxy.name=coturn
      - tsdproxy.ephemeral=false
      - tsdproxy.container_port=3478

  tsdproxy:
    image: almeidapaulopt/tsdproxy
    container_name: tsdproxy
    restart: unless-stopped
    environment:
      TZ: Asia/Singapore
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - tsdproxy_data:/data
      - /home/ubuntu/tsdproxy/config:/config
    networks:
      - synapse
    labels:
      - tsdproxy.enable=true
      - tsdproxy.name=tsdproxy-synapse
      - tsdproxy.ephemeral=false

  caddy:
    build:
      context: .
      dockerfile: Caddy.Dockerfile
    restart: unless-stopped
    environment:
      DOMAIN: matrix.tailnet.ts.net
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_certs:/certs
      - caddy_data:/data
      - caddy_config:/config
    network_mode: service:tsdproxy

volumes:
  caddy_certs:
  caddy_data:
  caddy_config:
  tsdproxy_data:

networks:
  synapse:
    name: synapse
    driver: bridge
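
The compose file above builds Caddy from a local Caddy.Dockerfile that is not shown here. A minimal sketch of such a Dockerfile, assuming the goal is a Caddy binary with the layer4 module compiled in via xcaddy (github.com/mholt/caddy-l4 is the upstream plugin path), could look like this:

Caddy.Dockerfile (sketch)

# Build a custom Caddy binary that includes the layer4 module.
FROM caddy:2-builder AS builder
RUN xcaddy build \
    --with github.com/mholt/caddy-l4

# Copy the custom binary into the standard runtime image.
FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy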

/tsdproxy/config/tsdproxy.yaml

defaultproxyprovider: default
docker:
    local:
        host: unix:///var/run/docker.sock
        defaultproxyprovider: default
tailscale:
  providers:
    default:
      authkey: ""
      authkeyfile: "/config/authkey"
      controlurl: https://controlplane.tailscale.com
  datadir: /data/
http:
  hostname: 0.0.0.0
  port: 8080
log:
  level: info
  json: false
proxyaccesslog: true
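
The tailscale provider above reads its auth key from /config/authkey instead of inline. A hedged sketch of creating that file on the host, assuming the volume mapping /home/ubuntu/tsdproxy/config:/config from the compose file and a placeholder key from the Tailscale admin console:

echo "tskey-auth-xxxxxxxxxxxx" > /home/ubuntu/tsdproxy/config/authkey
chmod 600 /home/ubuntu/tsdproxy/config/authkey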

Caddyfile

{
    layer4 {
        0.0.0.0:3478 {
            route {
                proxy {
                    upstream coturn:3478
                }
            }
        }
        0.0.0.0:3479 {
            route {
                proxy {
                    upstream coturn:3479
                }
            }
        }
    }
}
matrix.forest-gentoo.ts.net {
    reverse_proxy /_matrix/* synapse:8008
    reverse_proxy /_synapse/client/* synapse:8008
}

matrix.forest-gentoo.ts.net:8448 {
    reverse_proxy /_matrix/* synapse:8008
}
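
Since the main complaint is that Caddy does not seem to pick up this Caddyfile, two quick checks from inside the running container can narrow things down. These are a sketch assuming the compose service is named caddy and the file is mounted at /etc/caddy/Caddyfile as above:

docker compose exec caddy caddy validate --config /etc/caddy/Caddyfile
docker compose exec caddy caddy list-modules | grep layer4

caddy validate reports whether the file parses at all, and caddy list-modules shows whether the running binary actually contains the layer4 module; the stock caddy image does not ship with it, which is why the compose file builds a custom image.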

homeserver.yaml (for synapse)

server_name: "matrix.tailnet.ts.net."
pid_file: /data/homeserver.pid
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    resources:
      - names: [client, federation]
        compress: false
database:
  name: psycopg2
  args:
    user: xyz.................
    password: xyz...............
    database: synapse
    host: synapse-db
    cp_min: 5
    cp_max: 10
log_config: "/data/matrix.tailnet.ts.net..log.config"
media_store_path: /data/media_store
registration_shared_secret: "xyz.........................................."
report_stats: true
macaroon_secret_key: "xyz..............................................."
form_secret: "xyz................................................"
signing_key_path: "/data/matrix.tailnet.ts.net.signing.key"
trusted_key_servers:
  - server_name: "matrix.org"


# vim:ft=yaml
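
Note that the homeserver.yaml above never references the TURN server, so Synapse cannot hand TURN credentials to clients, which by itself would keep calls from relaying through coturn. A sketch of the usual Synapse TURN settings, with placeholder values that would need to match the coturn container's realm and shared secret:

turn_uris:
  - "turn:coturn.tailnet.ts.net:3478?transport=udp"
  - "turn:coturn.tailnet.ts.net:3478?transport=tcp"
turn_shared_secret: "xyz...."
turn_user_lifetime: 1h
turn_allow_guests: true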

[REVIEW]
Please kindly review my docker-compose.yaml and Caddyfile.

[REFERENCE MATERIAL]
Tailscale (and Caddy as a sidecar) Reverse Proxy · nextcloud/all-in-one · Discussion #5439 · GitHub

[LOGS - docker compose up]

Attaching to coturn, caddy-1, synapse, synapse-db, tsdproxy
tsdproxy    | 4:16PM INF request host=127.0.0.1:8080 method=GET status=200 url=/health/ready/
synapse     | 2025-03-18 16:16:39,370 - synapse.storage.databases.main.event_push_actions - 1396 - INFO - rotate_notifs-65 - Rotating notifications
synapse     | 2025-03-18 16:16:39,374 - synapse.storage.databases.main.event_push_actions - 1599 - INFO - rotate_notifs-65 - Rotating notifications up to: 1
synapse     | 2025-03-18 16:16:39,380 - synapse.storage.databases.main.event_push_actions - 1685 - INFO - rotate_notifs-65 - Rotating notifications, handling 0 rows
synapse     | 2025-03-18 16:16:39,398 - synapse.storage.databases.main.event_push_actions - 1770 - INFO - rotate_notifs-65 - Rotating notifications, deleted 0 push actions
synapse     | 2025-03-18 16:16:39,609 - synapse.util.caches.lrucache - 217 - INFO - LruCache._expire_old_entries-65 - Dropped 0 items from caches
synapse     | 2025-03-18 16:17:09,370 - synapse.storage.databases.main.event_push_actions - 1396 - INFO - rotate_notifs-66 - Rotating notifications
synapse     | 2025-03-18 16:17:09,375 - synapse.storage.databases.main.event_push_actions - 1599 - INFO - rotate_notifs-66 - Rotating notifications up to: 1
synapse     | 2025-03-18 16:17:09,381 - synapse.storage.databases.main.event_push_actions - 1685 - INFO - rotate_notifs-66 - Rotating notifications, handling 0 rows
synapse     | 2025-03-18 16:17:09,399 - synapse.storage.databases.main.event_push_actions - 1770 - INFO - rotate_notifs-66 - Rotating notifications, deleted 0 push actions
synapse     | 2025-03-18 16:17:09,609 - synapse.util.caches.lrucache - 217 - INFO - LruCache._expire_old_entries-66 - Dropped 0 items from caches
tsdproxy    | 4:17PM INF request host=127.0.0.1:8080 method=GET status=200 url=/health/ready/
synapse     | 2025-03-18 16:17:39,368 - synapse.storage.databases.main.event_push_actions - 1396 - INFO - rotate_notifs-67 - Rotating notifications
synapse     | 2025-03-18 16:17:39,373 - synapse.storage.databases.main.event_push_actions - 1599 - INFO - rotate_notifs-67 - Rotating notifications up to: 1
synapse     | 2025-03-18 16:17:39,380 - synapse.storage.databases.main.event_push_actions - 1685 - INFO - rotate_notifs-67 - Rotating notifications, handling 0 rows
synapse     | 2025-03-18 16:17:39,396 - synapse.storage.databases.main.event_push_actions - 1770 - INFO - rotate_notifs-67 - Rotating notifications, deleted 0 push actions
synapse     | 2025-03-18 16:17:39,609 - synapse.util.caches.lrucache - 217 - INFO - LruCache._expire_old_entries-67 - Dropped 0 items from caches
synapse     | 2025-03-18 16:18:09,374 - synapse.storage.databases.main.event_push_actions - 1396 - INFO - rotate_notifs-68 - Rotating notifications
synapse     | 2025-03-18 16:18:09,379 - synapse.storage.databases.main.event_push_actions - 1599 - INFO - rotate_notifs-68 - Rotating notifications up to: 1
synapse     | 2025-03-18 16:18:09,386 - synapse.storage.databases.main.event_push_actions - 1685 - INFO - rotate_notifs-68 - Rotating notifications, handling 0 rows
synapse     | 2025-03-18 16:18:09,401 - synapse.storage.databases.main.event_push_actions - 1770 - INFO - rotate_notifs-68 - Rotating notifications, deleted 0 push actions
synapse     | 2025-03-18 16:18:09,609 - synapse.util.caches.lrucache - 217 - INFO - LruCache._expire_old_entries-68 - Dropped 0 items from caches
synapse-db  | 2025-03-18 16:18:29.483 UTC [27] LOG:  checkpoint starting: time
synapse-db  | 2025-03-18 16:18:29.748 UTC [27] LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.209 s, sync=0.021 s, total=0.266 s; sync files=3, longest=0.017 s, average=0.007 s; distance=6 kB, estimate=23 kB
tsdproxy    | 4:18PM INF request host=127.0.0.1:8080 method=GET status=200 url=/health/ready/
synapse     | 2025-03-18 16:18:39,373 - synapse.storage.databases.main.event_push_actions - 1396 - INFO - rotate_notifs-69 - Rotating notifications
synapse     | 2025-03-18 16:18:39,379 - synapse.storage.databases.main.event_push_actions - 1599 - INFO - rotate_notifs-69 - Rotating notifications up to: 1
synapse     | 2025-03-18 16:18:39,387 - synapse.storage.databases.main.event_push_actions - 1685 - INFO - rotate_notifs-69 - Rotating notifications, handling 0 rows
synapse     | 2025-03-18 16:18:39,396 - synapse.storage.databases.main.event_push_actions - 1770 - INFO - rotate_notifs-69 - Rotating notifications, deleted 0 push actions
synapse     | 2025-03-18 16:18:39,609 - synapse.util.caches.lrucache - 217 - INFO - LruCache._expire_old_entries-69 - Dropped 0 items from caches
synapse     | 2025-03-18 16:18:39,641 - synapse.storage.databases.main.metrics - 399 - INFO - generate_user_daily_visits-6 - Calling _generate_user_daily_visits
synapse     | 2025-03-18 16:19:09,370 - synapse.storage.databases.main.event_push_actions - 1396 - INFO - rotate_notifs-70 - Rotating notifications
synapse     | 2025-03-18 16:19:09,374 - synapse.storage.databases.main.event_push_actions - 1599 - INFO - rotate_notifs-70 - Rotating notifications up to: 1
synapse     | 2025-03-18 16:19:09,381 - synapse.storage.databases.main.event_push_actions - 1685 - INFO - rotate_notifs-70 - Rotating notifications, handling 0 rows
synapse     | 2025-03-18 16:19:09,396 - synapse.storage.databases.main.event_push_actions - 1770 - INFO - rotate_notifs-70 - Rotating notifications, deleted 0 push actions
synapse     | 2025-03-18 16:19:09,609 - synapse.util.caches.lrucache - 217 - INFO - LruCache._expire_old_entries-70 - Dropped 0 items from caches
tsdproxy    | 4:19PM INF request host=127.0.0.1:8080 method=GET status=200 url=/health/ready/
synapse     | 2025-03-18 16:19:39,376 - synapse.storage.databases.main.event_push_actions - 1396 - INFO - rotate_notifs-71 - Rotating notifications
synapse     | 2025-03-18 16:19:39,381 - synapse.storage.databases.main.event_push_actions - 1599 - INFO - rotate_notifs-71 - Rotating notifications up to: 1
synapse     | 2025-03-18 16:19:39,388 - synapse.storage.databases.main.event_push_actions - 1685 - INFO - rotate_notifs-71 - Rotating notifications, handling 0 rows
synapse     | 2025-03-18 16:19:39,406 - synapse.storage.databases.main.event_push_actions - 1770 - INFO - rotate_notifs-71 - Rotating notifications, deleted 0 push actions
synapse     | 2025-03-18 16:19:39,609 - synapse.util.caches.lrucache - 217 - INFO - LruCache._expire_old_entries-71 - Dropped 0 items from caches
synapse     | 2025-03-18 16:19:41,249 - synapse.http.server - 130 - INFO - GET-143 - <XForwardedForRequest at 0x7f192774bad0 method='GET' uri='/.well-known/matrix/client' clientproto='HTTP/1.1' site='8008'> SynapseError: 404 - .well-known not available
synapse     | 2025-03-18 16:19:41,255 - synapse.access.http.8008 - 508 - INFO - GET-143 - 100.69.103.80 - 8008 - {None} Processed request: 0.004sec/0.002sec (0.005sec, 0.000sec) (0.000sec/0.000sec/0) 61B 404 "GET /.well-known/matrix/client HTTP/1.1" "Element/1.6.34 (Google Pixel 3; Android 13; TQ3A.230901.001; Flavour FDroid; MatrixAndroidSdk2 1.6.34)" [0 dbevts]
tsdproxy    | 4:19PM ERR error host=matrix.tailnet.ts.net method=GET module=proxymanager proxyname=matrix status=404 url=/.well-known/matrix/client
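
The 404 on /.well-known/matrix/client in the log above only means that nothing serves that discovery document; Element probes for it and falls back if it is missing. If it should be answered by Caddy, a hedged sketch (reusing the hostname from the Caddyfile above; the base_url value is an assumption) would be:

matrix.forest-gentoo.ts.net {
    handle /.well-known/matrix/client {
        header Content-Type application/json
        header Access-Control-Allow-Origin *
        respond `{"m.homeserver": {"base_url": "https://matrix.forest-gentoo.ts.net"}}`
    }
    reverse_proxy /_matrix/* synapse:8008
    reverse_proxy /_synapse/client/* synapse:8008
}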

[LOGS-caddy]

docker logs lee-caddy-1
{"level":"info","ts":1742255094.8142493,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1742255094.8152893,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"warn","ts":1742255094.8152974,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
{"level":"info","ts":1742255094.8158128,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1742255094.8158891,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1742255094.8158953,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1742255094.8159025,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv1"}
{"level":"info","ts":1742255094.8161504,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1742255094.8161805,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
{"level":"info","ts":1742255094.8162313,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1742255094.8162527,"logger":"http","msg":"enabling HTTP/3 listener","addr":":8448"}
{"level":"info","ts":1742255094.8162873,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
{"level":"warn","ts":1742255094.8163018,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"warn","ts":1742255094.8163037,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"info","ts":1742255094.816305,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1742255094.8166583,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000493d80"}
{"level":"info","ts":1742255094.817025,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1742255094.8170485,"msg":"serving initial configuration"}
{"level":"info","ts":1742255094.8186064,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"4d62778a-bc64-432c-999e-37590223b5dc","try_again":1742341494.8186045,"try_again_in":86399.9999998}
{"level":"info","ts":1742255094.8186774,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1742255256.1181793,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}
{"level":"warn","ts":1742255256.1182258,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}
{"level":"info","ts":1742255256.1182382,"logger":"http","msg":"servers shutting down with eternal grace period"}
{"level":"info","ts":1742255256.1184583,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
{"level":"info","ts":1742255256.118479,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}
{"level":"info","ts":1742255256.2363963,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1742255256.2373035,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"warn","ts":1742255256.2373347,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
{"level":"info","ts":1742255256.2380419,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1742255256.2382019,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1742255256.2382336,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1742255256.238249,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv1"}
{"level":"info","ts":1742255256.2384589,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000432980"}
{"level":"info","ts":1742255256.2386107,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1742255256.2386763,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
{"level":"info","ts":1742255256.238792,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1742255256.239326,"logger":"http","msg":"enabling HTTP/3 listener","addr":":8448"}
{"level":"info","ts":1742255256.239538,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
{"level":"warn","ts":1742255256.2395885,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"warn","ts":1742255256.2395928,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"info","ts":1742255256.239609,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1742255256.239818,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1742255256.2398407,"msg":"serving initial configuration"}
{"level":"info","ts":1742255256.2409785,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"4d62778a-bc64-432c-999e-37590223b5dc","try_again":1742341656.2409766,"try_again_in":86399.9999998}
{"level":"info","ts":1742255256.2410526,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1742255288.3233187,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}
{"level":"warn","ts":1742255288.3233562,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}
{"level":"info","ts":1742255288.3233724,"logger":"http","msg":"servers shutting down with eternal grace period"}
{"level":"info","ts":1742255288.3235435,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
{"level":"info","ts":1742255288.323569,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}
{"level":"info","ts":1742255288.4355724,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1742255288.436391,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"warn","ts":1742255288.436419,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
{"level":"info","ts":1742255288.4371314,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//127.0.0.1:2019","//localhost:2019","//[::1]:2019"]}
{"level":"info","ts":1742255288.4372292,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv1"}
{"level":"info","ts":1742255288.43724,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1742255288.4372435,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1742255288.4373534,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00017b100"}
{"level":"info","ts":1742255288.4375424,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1742255288.4375758,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
{"level":"info","ts":1742255288.4376297,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1742255288.4376538,"logger":"http","msg":"enabling HTTP/3 listener","addr":":8448"}
{"level":"info","ts":1742255288.4377134,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
{"level":"warn","ts":1742255288.4377573,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"warn","ts":1742255288.4377608,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"info","ts":1742255288.4377623,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1742255288.4378448,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1742255288.4378657,"msg":"serving initial configuration"}
{"level":"info","ts":1742255288.439387,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"4d62778a-bc64-432c-999e-37590223b5dc","try_again":1742341688.4393857,"try_again_in":86399.9999998}
{"level":"info","ts":1742255288.4394717,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1742257371.3942564,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}
{"level":"warn","ts":1742257371.3943055,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}
{"level":"info","ts":1742257371.394325,"logger":"http","msg":"servers shutting down with eternal grace period"}
{"level":"info","ts":1742257371.3945844,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
{"level":"info","ts":1742257371.394613,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}
{"level":"info","ts":1742312609.534737,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1742312609.5388181,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"warn","ts":1742312609.539039,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
{"level":"info","ts":1742312609.5421622,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1742312609.5441732,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1742312609.5443826,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1742312609.5445087,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv1"}
{"level":"info","ts":1742312609.5448606,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000313100"}
{"level":"info","ts":1742312609.555271,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1742312609.562979,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"4d62778a-bc64-432c-999e-37590223b5dc","try_again":1742399009.5629747,"try_again_in":86399.9999993}
{"level":"info","ts":1742312609.5700417,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1742312609.5736861,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
{"level":"info","ts":1742312609.5739176,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1742312609.5740216,"logger":"http","msg":"enabling HTTP/3 listener","addr":":8448"}
{"level":"info","ts":1742312609.5741656,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
{"level":"warn","ts":1742312609.5742228,"logger":"http","msg":"HTTP/2 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"warn","ts":1742312609.5742333,"logger":"http","msg":"HTTP/3 skipped because it requires TLS","network":"tcp","addr":":80"}
{"level":"info","ts":1742312609.5742373,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1742312609.5753043,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1742312609.575842,"msg":"serving initial configuration"}

Kindly help me adjust my docker-compose and Caddyfile with layer4 for raw TCP and UDP for coturn.

I see 2 suspects:

  • The layer4 config has only TCP configuration. If you want UDP, configure it as well (a sketch follows after the note below).
  • You mount the Caddyfile directly. We have a note about that on Docker Hub:

:warning: Do not mount the Caddyfile directly at /etc/caddy/Caddyfile

If vim or another editor is used that changes the inode of the edited file, the changes will only be applied within the container when the container is recreated, which is explained in detail in this Medium article. When using such an editor, Caddy's graceful reload functionality might not work as expected, as described in this issue.
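
Following up on the first point: the layer4 block above only opens TCP listeners, so TURN over UDP never reaches coturn through Caddy. A sketch of what adding a UDP listener for the same port might look like, assuming the caddy-l4 module is built in and using its udp/ address prefix (exact syntax is worth double-checking against the caddy-l4 documentation):

    layer4 {
        0.0.0.0:3478 {
            route {
                proxy {
                    upstream coturn:3478
                }
            }
        }
        # hypothetical UDP listener for TURN over UDP
        udp/0.0.0.0:3478 {
            route {
                proxy {
                    upstream udp/coturn:3478
                }
            }
        }
    }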
