Proxy multiple websites all running inside Docker containers

Quick summary of my big goal:

I have several websites hosted at Squarespace (timwilson.info and rapidsarcheryjoad.org) and would like to move them to Linode to save money. None are high-traffic, and I’m using Hugo to generate static files, which I copy into a separate Caddy-powered Docker container for each site. My plan is to launch all the sites and a Caddy reverse proxy using docker-compose to coordinate everything. Note: I have only one IP address for this server.

I know that each individual site works because I’ve run them individually without the proxy in front. I’ve been reading lots of articles, but am stuck getting the proxy working. Rather than obfuscate the domains, I’m just going to post what I have in the config files. Now, on to the details…

1. Caddy version:

All containers are running Caddy v. 2.6.3

2. How I installed and run Caddy:

All containers are based on the caddy:2-alpine image from hub.docker.com.

a. System environment:

I’m currently using a Linode 1 GB instance running Alpine 3.17. I’m running all my commands as a normal user with sudo. The Docker version is 20.10.21 and was installed via apk, the Alpine package manager.

b. Command:

Caddy is running within each of the containers. I’m not manually starting it. See details below.

c. Service/unit/compose file:

Here is my docker-compose.yml file on the Linode host.

version: "3.7"

services:
  proxy:
    image: caddy:2-alpine
    container_name: proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - caddy

  timwilson:
    image: timothydwilson/timwilson.info:latest
    container_name: timwilson
    restart: unless-stopped
    depends_on:
      - proxy
    expose:
      - "8000"
    volumes:
      - timwilson_caddy_data:/data
      - timwilson_caddy_config:/config
    networks:
      - caddy

volumes:
  caddy_data:
    external: true
  caddy_config:
  timwilson_caddy_data:
    external: true
  timwilson_caddy_config:

networks:
  caddy:
    external: true

d. My complete Caddy config:

Here’s the Caddyfile inside the timwilson container. This container is a self-contained image of my timwilson.info website. The Caddyfile has obviously changed since I tested serving the site from this container without the proxy.

http://localhost:8000 {
	root * /srv
	file_server
}

Here’s the Caddyfile associated with the proxy container:

# Configure reverse proxies for all domains

timwilson.info {
	reverse_proxy timwilson:8000
}

3. The problem I’m having:

Visiting timwilson.info in Chrome from my MacBook Pro produces a 200 status code and a blank page. With curl I get:

% curl -vL timwilson.info
*   Trying 143.42.120.163:80...
* Connected to timwilson.info (143.42.120.163) port 80 (#0)
> GET / HTTP/1.1
> Host: timwilson.info
> User-Agent: curl/7.86.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 308 Permanent Redirect
< Connection: close
< Location: https://timwilson.info/
< Server: Caddy
< Date: Sat, 11 Feb 2023 21:52:31 GMT
< Content-Length: 0
<
* Closing connection 0
* Clear auth, redirects to port from 80 to 443
* Issue another request to this URL: 'https://timwilson.info/'
*   Trying 143.42.120.163:443...
* Connected to timwilson.info (143.42.120.163) port 443 (#1)
* ALPN: offers h2
* ALPN: offers http/1.1
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=timwilson.info
*  start date: Feb 10 00:00:00 2023 GMT
*  expire date: May 11 23:59:59 2023 GMT
*  subjectAltName: host "timwilson.info" matched cert's "timwilson.info"
*  issuer: C=AT; O=ZeroSSL; CN=ZeroSSL ECC Domain Secure Site CA
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* h2h3 [:method: GET]
* h2h3 [:path: /]
* h2h3 [:scheme: https]
* h2h3 [:authority: timwilson.info]
* h2h3 [user-agent: curl/7.86.0]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x13d811400)
> GET / HTTP/2
> Host: timwilson.info
> user-agent: curl/7.86.0
> accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 200
< alt-svc: h3=":443"; ma=2592000
< date: Sat, 11 Feb 2023 21:52:31 GMT
< server: Caddy
< server: Caddy
< content-length: 0
<
* Connection #1 to host timwilson.info left intact

4. Error messages and/or full log output:

I’m sure I’m missing something here, but I enabled { debug } in the Caddyfile and got the following when I ran docker-compose up.

$ sudo docker-compose up
Starting proxy ... done
Starting timwilson ... done
Attaching to proxy, timwilson
proxy        | {"level":"info","ts":1676153297.0067668,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
proxy        | {"level":"warn","ts":1676153297.0219114,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
proxy        | {"level":"info","ts":1676153297.025541,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//127.0.0.1:2019","//localhost:2019","//[::1]:2019"]}
proxy        | {"level":"info","ts":1676153297.0266457,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
proxy        | {"level":"info","ts":1676153297.026745,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
proxy        | {"level":"info","ts":1676153297.0310156,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
proxy        | {"level":"info","ts":1676153297.0311542,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
proxy        | {"level":"debug","ts":1676153297.0312426,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":true}
proxy        | {"level":"info","ts":1676153297.0313027,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
proxy        | {"level":"debug","ts":1676153297.0313692,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
proxy        | {"level":"info","ts":1676153297.0314105,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
proxy        | {"level":"info","ts":1676153297.0314357,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["timwilson.info"]}
proxy        | {"level":"debug","ts":1676153297.0317888,"logger":"tls","msg":"loading managed certificate","domain":"timwilson.info","expiration":1683849600,"issuer_key":"acme.zerossl.com-v2-DV90","storage":"FileStorage:/data/caddy"}
proxy        | {"level":"debug","ts":1676153297.0321038,"logger":"tls.cache","msg":"added certificate to cache","subjects":["timwilson.info"],"expiration":1683849600,"managed":true,"issuer_key":"acme.zerossl.com-v2-DV90","hash":"86dbf152279d0b7131bcdf70f186a98b0fb91c46504ecaeb2bc4b2da91ab6187","cache_size":1,"cache_capacity":10000}
proxy        | {"level":"debug","ts":1676153297.0321705,"logger":"events","msg":"event","name":"cached_managed_cert","id":"3bf616da-53e0-4c23-ae26-a5aae059a143","origin":"tls","data":{"sans":["timwilson.info"]}}
proxy        | {"level":"info","ts":1676153297.0323803,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
proxy        | {"level":"info","ts":1676153297.033613,"msg":"serving initial configuration"}
proxy        | {"level":"info","ts":1676153297.0338423,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0003ed340"}
proxy        | {"level":"info","ts":1676153297.034003,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
proxy        | {"level":"info","ts":1676153297.034693,"logger":"tls","msg":"finished cleaning storage units"}
timwilson    | {"level":"info","ts":1676153297.2739356,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
timwilson    | {"level":"info","ts":1676153297.2755892,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
timwilson    | {"level":"info","ts":1676153297.276729,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
timwilson    | {"level":"info","ts":1676153297.2771003,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
timwilson    | {"level":"info","ts":1676153297.277306,"msg":"serving initial configuration"}
timwilson    | {"level":"info","ts":1676153297.2778182,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00018b3b0"}
timwilson    | {"level":"info","ts":1676153297.27805,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
timwilson    | {"level":"info","ts":1676153297.2785308,"logger":"tls","msg":"finished cleaning storage units"}
proxy        | {"level":"debug","ts":1676153302.5431063,"logger":"events","msg":"event","name":"tls_get_certificate","id":"3053f093-785b-4a29-b092-1ec03cccec85","origin":"tls","data":{"client_hello":{"CipherSuites":[4867,4866,4865,52393,52392,52394,49200,49196,49192,49188,49172,49162,159,107,57,65413,196,136,129,157,61,53,192,132,49199,49195,49191,49187,49171,49161,158,103,51,190,69,156,60,47,186,65,49169,49159,5,4,49170,49160,22,10,255],"ServerName":"timwilson.info","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2054,1537,1539,2053,1281,1283,2052,1025,1027,513,515],"SupportedProtos":["h2","http/1.1"],"SupportedVersions":[772,771,770,769],"Conn":{}}}}
proxy        | {"level":"debug","ts":1676153302.5432198,"logger":"tls.handshake","msg":"choosing certificate","identifier":"timwilson.info","num_choices":1}
proxy        | {"level":"debug","ts":1676153302.5432305,"logger":"tls.handshake","msg":"default certificate selection results","identifier":"timwilson.info","subjects":["timwilson.info"],"managed":true,"issuer_key":"acme.zerossl.com-v2-DV90","hash":"86dbf152279d0b7131bcdf70f186a98b0fb91c46504ecaeb2bc4b2da91ab6187"}
proxy        | {"level":"debug","ts":1676153302.5432374,"logger":"tls.handshake","msg":"matched certificate in cache","remote_ip":"73.228.137.91","remote_port":"61493","subjects":["timwilson.info"],"managed":true,"expiration":1683849600,"hash":"86dbf152279d0b7131bcdf70f186a98b0fb91c46504ecaeb2bc4b2da91ab6187"}
proxy        | {"level":"debug","ts":1676153302.5920255,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"timwilson:8000","total_upstreams":1}
proxy        | {"level":"debug","ts":1676153302.593623,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"timwilson:8000","duration":0.001532686,"request":{"remote_ip":"73.228.137.91","remote_port":"61493","proto":"HTTP/2.0","method":"GET","host":"timwilson.info","uri":"/","headers":{"Accept":["*/*"],"X-Forwarded-For":["73.228.137.91"],"X-Forwarded-Proto":["https"],"X-Forwarded-Host":["timwilson.info"],"User-Agent":["curl/7.86.0"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"timwilson.info"}},"headers":{"Server":["Caddy"],"Date":["Sat, 11 Feb 2023 22:08:22 GMT"],"Content-Length":["0"]},"status":200}

5. What I already tried:

I’ve read a lot of helpful articles that made various suggestions, but no luck so far. My gut tells me that I’m misunderstanding some fundamental concept here, but I haven’t been able to spot it yet.

I tried using http://0.0.0.0:8000 in the Caddyfile of my website container. That produces the same white screen, but with this output from docker-compose:

$ sudo docker-compose up
Starting proxy ... done
Recreating timwilson ... done
Attaching to proxy, timwilson
proxy        | {"level":"info","ts":1676153717.5174556,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
proxy        | {"level":"warn","ts":1676153717.5235305,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
proxy        | {"level":"info","ts":1676153717.526987,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
proxy        | {"level":"info","ts":1676153717.529653,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
proxy        | {"level":"info","ts":1676153717.529693,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
proxy        | {"level":"info","ts":1676153717.5329645,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
proxy        | {"level":"info","ts":1676153717.533575,"logger":"tls","msg":"finished cleaning storage units"}
proxy        | {"level":"info","ts":1676153717.5331573,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0008713b0"}
proxy        | {"level":"info","ts":1676153717.5336237,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
proxy        | {"level":"info","ts":1676153717.5336862,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
proxy        | {"level":"debug","ts":1676153717.5337331,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":true}
proxy        | {"level":"info","ts":1676153717.5337403,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
proxy        | {"level":"debug","ts":1676153717.5337603,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
proxy        | {"level":"info","ts":1676153717.5337656,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
proxy        | {"level":"info","ts":1676153717.5337684,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["timwilson.info"]}
proxy        | {"level":"debug","ts":1676153717.533994,"logger":"tls","msg":"loading managed certificate","domain":"timwilson.info","expiration":1683849600,"issuer_key":"acme.zerossl.com-v2-DV90","storage":"FileStorage:/data/caddy"}
proxy        | {"level":"debug","ts":1676153717.534228,"logger":"tls.cache","msg":"added certificate to cache","subjects":["timwilson.info"],"expiration":1683849600,"managed":true,"issuer_key":"acme.zerossl.com-v2-DV90","hash":"86dbf152279d0b7131bcdf70f186a98b0fb91c46504ecaeb2bc4b2da91ab6187","cache_size":1,"cache_capacity":10000}
proxy        | {"level":"debug","ts":1676153717.5342438,"logger":"events","msg":"event","name":"cached_managed_cert","id":"0d3db73a-0d14-44bc-bbf0-d7b1fb829cc0","origin":"tls","data":{"sans":["timwilson.info"]}}
proxy        | {"level":"info","ts":1676153717.536924,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
proxy        | {"level":"info","ts":1676153717.536932,"msg":"serving initial configuration"}
timwilson    | {"level":"info","ts":1676153717.768483,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
timwilson    | {"level":"warn","ts":1676153717.7694535,"logger":"caddyfile","msg":"Site block has an unspecified IP address which only matches requests having that Host header; you probably want the 'bind' directive to configure the socket","address":"0.0.0.0"}
timwilson    | {"level":"info","ts":1676153717.770575,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//127.0.0.1:2019","//localhost:2019","//[::1]:2019"]}
timwilson    | {"level":"info","ts":1676153717.7716923,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
timwilson    | {"level":"info","ts":1676153717.7720072,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
timwilson    | {"level":"info","ts":1676153717.7722497,"msg":"serving initial configuration"}
timwilson    | {"level":"info","ts":1676153717.7726567,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000171420"}
timwilson    | {"level":"info","ts":1676153717.772897,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
timwilson    | {"level":"info","ts":1676153717.7733343,"logger":"tls","msg":"finished cleaning storage units"}
proxy        | {"level":"debug","ts":1676153738.4203053,"logger":"events","msg":"event","name":"tls_get_certificate","id":"fef75016-bc1a-4c4c-8510-c8fe5e427d6a","origin":"tls","data":{"client_hello":{"CipherSuites":[4867,4866,4865,52393,52392,52394,49200,49196,49192,49188,49172,49162,159,107,57,65413,196,136,129,157,61,53,192,132,49199,49195,49191,49187,49171,49161,158,103,51,190,69,156,60,47,186,65,49169,49159,5,4,49170,49160,22,10,255],"ServerName":"timwilson.info","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2054,1537,1539,2053,1281,1283,2052,1025,1027,513,515],"SupportedProtos":["h2","http/1.1"],"SupportedVersions":[772,771,770,769],"Conn":{}}}}
proxy        | {"level":"debug","ts":1676153738.4204757,"logger":"tls.handshake","msg":"choosing certificate","identifier":"timwilson.info","num_choices":1}
proxy        | {"level":"debug","ts":1676153738.4205055,"logger":"tls.handshake","msg":"default certificate selection results","identifier":"timwilson.info","subjects":["timwilson.info"],"managed":true,"issuer_key":"acme.zerossl.com-v2-DV90","hash":"86dbf152279d0b7131bcdf70f186a98b0fb91c46504ecaeb2bc4b2da91ab6187"}
proxy        | {"level":"debug","ts":1676153738.4205155,"logger":"tls.handshake","msg":"matched certificate in cache","remote_ip":"73.228.137.91","remote_port":"64580","subjects":["timwilson.info"],"managed":true,"expiration":1683849600,"hash":"86dbf152279d0b7131bcdf70f186a98b0fb91c46504ecaeb2bc4b2da91ab6187"}
proxy        | {"level":"debug","ts":1676153738.4713905,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"timwilson:8000","total_upstreams":1}
proxy        | {"level":"debug","ts":1676153738.473419,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"timwilson:8000","duration":0.001312661,"request":{"remote_ip":"73.228.137.91","remote_port":"64580","proto":"HTTP/2.0","method":"GET","host":"timwilson.info","uri":"/","headers":{"X-Forwarded-Proto":["https"],"X-Forwarded-Host":["timwilson.info"],"User-Agent":["curl/7.86.0"],"Accept":["*/*"],"X-Forwarded-For":["73.228.137.91"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"timwilson.info"}},"headers":{"Content-Length":["0"],"Server":["Caddy"],"Date":["Sat, 11 Feb 2023 22:15:38 GMT"]},"status":200}
proxy        | {"level":"debug","ts":1676153748.6024773,"logger":"events","msg":"event","name":"tls_get_certificate","id":"40d4ccd5-381a-46b7-8f4d-245290637c63","origin":"tls","data":{"client_hello":{"CipherSuites":[4865,4866,4867],"ServerName":"timwilson.info","SupportedCurves":[29,23,24],"SupportedPoints":null,"SignatureSchemes":[1027,2052,1025,1283,2053,1281,2054,1537,513],"SupportedProtos":["h3"],"SupportedVersions":[772],"Conn":{}}}}
proxy        | {"level":"debug","ts":1676153748.6025596,"logger":"tls.handshake","msg":"choosing certificate","identifier":"timwilson.info","num_choices":1}
proxy        | {"level":"debug","ts":1676153748.6025734,"logger":"tls.handshake","msg":"default certificate selection results","identifier":"timwilson.info","subjects":["timwilson.info"],"managed":true,"issuer_key":"acme.zerossl.com-v2-DV90","hash":"86dbf152279d0b7131bcdf70f186a98b0fb91c46504ecaeb2bc4b2da91ab6187"}
proxy        | {"level":"debug","ts":1676153748.6025813,"logger":"tls.handshake","msg":"matched certificate in cache","remote_ip":"73.228.137.91","remote_port":"60671","subjects":["timwilson.info"],"managed":true,"expiration":1683849600,"hash":"86dbf152279d0b7131bcdf70f186a98b0fb91c46504ecaeb2bc4b2da91ab6187"}
proxy        | {"level":"debug","ts":1676153748.649677,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"timwilson:8000","total_upstreams":1}
proxy        | {"level":"debug","ts":1676153748.650095,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"timwilson:8000","duration":0.000338758,"request":{"remote_ip":"73.228.137.91","remote_port":"60671","proto":"HTTP/3.0","method":"GET","host":"timwilson.info","uri":"/","headers":{"X-Forwarded-For":["73.228.137.91"],"Sec-Ch-Ua":["\"Chromium\";v=\"110\", \"Not A(Brand\";v=\"24\", \"Google Chrome\";v=\"110\""],"Sec-Fetch-Site":["none"],"Accept-Language":["en-US,en;q=0.9"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"],"Sec-Fetch-User":["?1"],"Accept-Encoding":["gzip, deflate, br"],"X-Forwarded-Proto":["https"],"Sec-Fetch-Dest":["document"],"X-Forwarded-Host":["timwilson.info"],"Sec-Ch-Ua-Mobile":["?0"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"],"Cache-Control":["max-age=0"],"Sec-Ch-Ua-Platform":["\"macOS\""],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Mode":["navigate"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h3","server_name":"timwilson.info"}},"headers":{"Server":["Caddy"],"Date":["Sat, 11 Feb 2023 22:15:48 GMT"],"Content-Length":["0"]},"status":200}
proxy        | {"level":"debug","ts":1676153748.7050717,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"timwilson:8000","total_upstreams":1}
proxy        | {"level":"debug","ts":1676153748.7061706,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"timwilson:8000","duration":0.001034644,"request":{"remote_ip":"73.228.137.91","remote_port":"60671","proto":"HTTP/3.0","method":"GET","host":"timwilson.info","uri":"/favicon.ico","headers":{"Sec-Ch-Ua-Mobile":["?0"],"Referer":["https://timwilson.info/"],"Accept-Language":["en-US,en;q=0.9"],"Accept-Encoding":["gzip, deflate, br"],"Sec-Ch-Ua":["\"Chromium\";v=\"110\", \"Not A(Brand\";v=\"24\", \"Google Chrome\";v=\"110\""],"Sec-Fetch-Site":["same-origin"],"X-Forwarded-For":["73.228.137.91"],"X-Forwarded-Proto":["https"],"X-Forwarded-Host":["timwilson.info"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"],"Sec-Fetch-Dest":["image"],"Sec-Ch-Ua-Platform":["\"macOS\""],"Sec-Fetch-Mode":["no-cors"],"Accept":["image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h3","server_name":"timwilson.info"}},"headers":{"Content-Length":["0"],"Server":["Caddy"],"Date":["Sat, 11 Feb 2023 22:15:48 GMT"]},"status":200}

I noticed this line in particular:

timwilson    | {"level":"warn","ts":1676153717.7694535,"logger":"caddyfile","msg":"Site block has an unspecified IP address which only matches requests having that Host header; you probably want the 'bind' directive to configure the socket","address":"0.0.0.0"}

6. Links to relevant resources:

Nothing to report on this.

Any and all help appreciated!

Hi @timwilson, welcome to the Caddy community. This is an excellent post with a ton of information! Thanks for being thorough.

Let’s start with the proxy Caddy. From the logs you have:

proxy | {"level":"debug","ts":1676153302.593623,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"timwilson:8000","duration":0.001532686,"request":{"remote_ip":"73.228.137.91","remote_port":"61493","proto":"HTTP/2.0","method":"GET","host":"timwilson.info","uri":"/","headers":{"Accept":["*/*"],"X-Forwarded-For":["73.228.137.91"],"X-Forwarded-Proto":["https"],"X-Forwarded-Host":["timwilson.info"],"User-Agent":["curl/7.86.0"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"timwilson.info"}},"headers":{"Server":["Caddy"],"Date":["Sat, 11 Feb 2023 22:08:22 GMT"],"Content-Length":["0"]},"status":200}

We can see that the proxy is reaching the upstream and getting a 200 back, but with length 0. The other Caddy is responding with a blank page, and the proxy is just doing its job. So we can look to the upstream Caddy to investigate what’s going on. Why is it responding this way?

That doesn’t seem like file_server behaviour! Some useful information to note here: whenever Caddy receives a request for which it is not configured to return any specific response - such as, notably, a request for a site you haven’t configured Caddy for, or a route that isn’t handled - Caddy responds with a 200 OK. This is because the HTTP exchange completed successfully, with no errors on either end - it’s just that, well, Caddy hasn’t been told to give you anything. So: a zero-length 200 OK is being issued by the upstream Caddy and faithfully passed back by the proxy Caddy.

Looking a little deeper at the request details, we can see "host":"timwilson.info". This is coming from the proxy Caddy, directed at the upstream Caddy; the proxy is just passing through the Host it was asked to serve.

Here’s where I can see a problem: the upstream Caddy is not configured to serve a site timwilson.info. It’s configured to serve a site called localhost over HTTP on port 8000. When it gets a request on port 8000 (where it has an active HTTP listener), but the request isn’t for localhost, the upstream Caddy doesn’t have any configuration for that site. Hence, the empty 200 OK.
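To spell that out against the upstream’s original site block (the comments here are mine, added purely for illustration):

```caddyfile
# The site address acts as a Host matcher: only requests carrying
# "Host: localhost" (or "Host: localhost:8000") reach file_server.
# A proxied request arriving with "Host: timwilson.info" matches
# no site block at all, so Caddy falls back to its default
# behaviour: an empty 200 OK.
http://localhost:8000 {
	root * /srv
	file_server
}
```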

When you swapped from localhost to 0.0.0.0, you noted this line in your logs:

timwilson | {"level":"warn","ts":1676153717.7694535,"logger":"caddyfile","msg":"Site block has an unspecified IP address which only matches requests having that Host header; you probably want the 'bind' directive to configure the socket","address":"0.0.0.0"}

And you’re astute to pick up on it, because it’s telling you something useful here: “Site block has an unspecified IP address which only matches requests having that Host header.” It’s a warning designed to help someone notice probably-unwanted configuration. Specifically, the proxy Caddy is still sending Host: timwilson.info, but the upstream Caddy is only configured to serve the literal site 0.0.0.0.

Now, at this point you probably want Caddy to serve any HTTP request coming in, and you’re thinking: great, so the problem is that it’s being picky about the Host. How do I make it serve any request?

Easy - just omit the hostname entirely from the site address. Specify only the parts you need - HTTP, and port 8000 - and leave the rest ambiguous.

TL;DR:

Change http://localhost:8000 (or http://0.0.0.0:8000) in your upstream Caddyfile to http://:8000.

Or, of course, to http://timwilson.info:8000.


Or, even more simply, :8000, because HTTP is the default for site addresses that have no domain and a port other than 443.


Thank you, @Whitestrake and @francislavoie, for the generous help. It works!

So for the record, here is the version of the Caddyfile in my timwilson container that is serving up my site:

:8000 {
	root * /srv
	file_server
}

Compared to other web servers I’ve configured in the past, this is amazing!

At the risk of going off-topic, I have one other question as long as we’re here. If you look at the volumes: section in my docker-compose.yml file above it has the following:

volumes:
  caddy_data:
    external: true
  caddy_config:
  timwilson_caddy_data:
    external: true
  timwilson_caddy_config:

Do I need to have separate data and config directories for each container running caddy? It makes sense that I would, but given how elegantly caddy is configured otherwise, I wondered if there’s a way to simplify this. Also, I noted that there’s currently nothing in either of those directories. That suggests a misconfiguration to me.

Thanks again for the help!


That’s one of the biggest goals, and part of the identity, of the Caddyfile: configuration doesn’t need to be arcane, complicated, and difficult like we often see elsewhere. It’s always good to see Caddy delivering on that goal for people.

You can find some information about what’s stored in these locations, here: Conventions — Caddy Documentation

In short, the data directory would hold your TLS assets (certificates and keys, etc) and the config directory holds current running config (for use with caddy run --resume, for example).

Sharing the config directory would be inadvisable because config should be unique to each instance of Caddy. Sharing the data directory, though, might be useful!

Any Caddy instances that are configured to use the same storage will automatically share those resources and coordinate certificate management as a cluster.
Automatic HTTPS — Caddy Documentation

Multiple Caddy servers can fairly seamlessly share the data directory, enabling them to do all sorts of neat things such as solve well-known challenges on each others’ behalf and share valid certificates.

An instance of Caddy which manages no certificates may not create anything in a data directory, but I believe all Caddy instances should write to the config directory.
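As a compose-level sketch of what that sharing could look like (service and volume names here are illustrative, not taken from your setup): both Caddy instances point at one shared data volume so they coordinate certificate storage, while each keeps its own config volume:

```yaml
services:
  proxy:
    image: caddy:2-alpine
    volumes:
      - caddy_data:/data        # shared: TLS certificates and keys
      - proxy_config:/config    # per-instance: autosaved config

  site:
    image: caddy:2-alpine
    volumes:
      - caddy_data:/data        # same data volume, so the instances
                                # share storage as a cluster
      - site_config:/config     # kept separate per instance

volumes:
  caddy_data:
  proxy_config:
  site_config:
```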

That’s interesting. I just checked all four directories specified in docker-compose.yml, and all are empty.

Everything lives in ~/caddyconf for the non-privileged user I use to log in.

localhost:~/caddyconf$ tree -L 2
.
├── Caddyfile
├── caddy_config
├── caddy_data
├── docker-compose.yml
├── timwilson_caddy_config
└── timwilson_caddy_data

I would have expected something to be in the main caddy_data directory at least.

Well I took another look at docker-compose.yml and realized that I hadn’t specified the locations of those directories relative to the current directory. Here’s the new version.

version: "3.7"

services:
  proxy:
    image: caddy:2-alpine
    container_name: proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./caddy_data:/data
      - ./caddy_config:/config
    networks:
      - caddy

  timwilson:
    image: timothydwilson/timwilson.info:latest
    container_name: timwilson
    restart: unless-stopped
    depends_on:
      - proxy
    expose:
      - "8000"
    volumes:
      - ./caddy_data:/data
      - ./timwilson_caddy_config:/config
    networks:
      - caddy

volumes:
  caddy_data:
    external: true
  caddy_config:
  timwilson_caddy_config:

networks:
  caddy:
    external: true

Notice the ./ preceding the directories in the volumes: section of each container. After restarting the containers, the directories are now populated, except for timwilson_caddy_data, since there is no SSL negotiation happening there.

Taking the advice of @Whitestrake, I removed the timwilson_caddy_data volume reference and pointed that container at caddy_data like the proxy container. That change is also reflected above. Everything seems to be running smoothly.

This is a “named volume”: the volume is the one declared at the bottom of your docker-compose file in the volumes section. Docker itself manages the storage of these, usually somewhere under /var/lib/docker. You can run docker volume inspect <name> to see where it is stored.

Using external: true tells docker-compose that you will have created that volume yourself ahead of time, and that docker-compose should not try to manage it (by default it would create volumes prefixed with the compose project name so they’re unique).

When you do this, you’re doing what’s called a “bind mount”, i.e. binding a particular host directory as a mount in the container. Used this way, it doesn’t use the volume config from the bottom of your docker-compose file at all.
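To make the distinction concrete, here’s a minimal sketch (names are illustrative): the leading ./ or absolute path is what makes an entry a bind mount, while bare names refer to named volumes declared in the top-level volumes: section:

```yaml
services:
  caddy:
    image: caddy:2-alpine
    volumes:
      - caddy_data:/data         # named volume: Docker manages the
                                 # storage (see docker volume inspect)
      - ./caddy_config:/config   # bind mount: this host directory,
                                 # relative to the compose file

volumes:
  caddy_data:   # declares the named volume used above;
                # bind mounts are never listed here
```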


@francislavoie, that is really interesting. Is there a best practice here? Since I’m doing bind mounts throughout my docker-compose.yml, can I remove the separate volumes: section entirely?


If you’re looking to bind mount the data and config directories to a directory on the host, and you’re not interested in using Docker volumes, you can remove the volumes key entirely.

You might want to remove the volumes later on with the docker volume ls and docker volume rm commands.


Thanks everyone! I’m going to write all of this up in a couple blog posts and put it online. Hopefully it will help someone else who’s trying to do the same thing.


As promised, here’s a link to a short blog post I created documenting my Docker-based Caddy-powered web hosting setup. I hope it’s helpful to someone.

Hosting Multiple Sites on One Host With a Caddy Proxy Server

Corrections and suggestions are welcomed.
