Caddy w/ docker-proxy fails (?) silently when on swarm overlay network

1. Output of caddy version:

avirut@manatee:~/src/caddy$ docker exec -it caddy /bin/sh
/srv # caddy version
v2.6.2 h1:wKoFIxpmOJLGl3QXoo6PNbYvGW4xLEgo32GPBEjWL8o=

2. How I run Caddy:

a. System environment:

Oracle Cloud VPS running Ubuntu 22.04, using Docker containers to run Caddy alongside a handful of other services. Caddy uses the caddy-docker-proxy plugin to automatically ingress other services as defined in their compose files. This setup has worked for me for quite a while, but I now want to bring in other hosts as well, so I created a private network with headscale (self-hosted Tailscale), which lets me build a Docker swarm whose overlay network spans both my cloud VPS and some servers at home. My next step is simply to swap the local bridge network for the swarm overlay network. I expected caddy-docker-proxy to keep working as before, just now also able to reverse proxy services on other hosts in the swarm. Instead, I get no response from Caddy at all when it's on the overlay network. (The overlay itself was created roughly as sketched below.)
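
For context, a reconstruction of how I created the overlay network; the exact flags may have differed, but the `docker network inspect` output further down confirms it's a swarm-scoped, attachable overlay:

# on the swarm manager: create an attachable overlay network so
# standalone (non-service) containers like this one can join it
docker network create --driver overlay --attachable caddy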

b. Command:

docker compose up -d

c. Service/unit/compose file:

Dockerfile:

ARG CADDY_VERSION=2.6.2

FROM caddy:${CADDY_VERSION}-builder AS builder

RUN xcaddy build \
        --with github.com/lucaslorentz/caddy-docker-proxy/v2@v2.8.1 \
        --with github.com/caddy-dns/cloudflare@ed330a8 \
        --with github.com/greenpau/caddy-security@v1.1.16 \
        --with github.com/greenpau/caddy-trace@v1.1.10

FROM caddy:${CADDY_VERSION}-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

CMD ["caddy", "docker-proxy"]

I have two compose files: my old one, which works, and the new one I'm trying, which doesn't. The only difference between them is the networks.

Here’s a compose that works:

---
version: "3.9"

services:
  caddy:
    build:
      context: .
      dockerfile: Dockerfile
    image: caddy:v2.6.2
    container_name: caddy
    restart: unless-stopped
    labels:
      caddy_0.acme_dns: "cloudflare {env.CF_API_TOKEN}"
      caddy_0.email: "{env.EMAIL}"
      caddy_1: kro.ac
      caddy_1.respond: "hello"
    env_file:
      - .env
    environment:
      - CADDY_INGRESS_NETWORKS=caddy2
    volumes:
      # for caddy-docker-proxy
      - /var/run/docker.sock:/var/run/docker.sock
      # for caddy itself
      - ~/data/caddy/data:/data
      - ~/data/caddy/config:/config
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    networks:
      - caddy2

# caddy2 is a local bridge network, 
# I'm happy to share the results of `docker network inspect`
# if that helps as well
networks:
  caddy2:
    external: true

versus one that doesn’t:

---
version: "3.9"

services:
  caddy:
    build:
      context: .
      dockerfile: Dockerfile
    image: caddy:v2.6.2
    container_name: caddy
    restart: unless-stopped
    labels:
      caddy_0.acme_dns: "cloudflare {env.CF_API_TOKEN}"
      caddy_0.email: "{env.EMAIL}"
      caddy_1: kro.ac
      caddy_1.respond: "hello"
    env_file:
      - .env
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
    volumes:
      # for caddy-docker-proxy
      - /var/run/docker.sock:/var/run/docker.sock
      # for caddy itself
      - ~/data/caddy/data:/data
      - ~/data/caddy/config:/config
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    networks:
      - caddy

# caddy is a swarm overlay network
networks:
  caddy:
    external: true
    driver: overlay

Setting aside whether the swarm networking/ingress works at all beyond this host: the former serves hello at https://kro.ac, while the latter doesn't respond at all. (A quick local check to separate DNS/routing problems from the port mapping is sketched below.)
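
To take DNS and external routing out of the picture, I can pin the hostname to loopback with curl's --resolve flag and hit the published port directly on the VPS itself:

# pin kro.ac to 127.0.0.1 so curl connects to the locally published
# port; if this also times out, the port mapping itself is broken
curl -v --resolve kro.ac:443:127.0.0.1 https://kro.ac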

d. My complete Caddy config:

Either compose yields the exact same Caddyfile:
(pulled with sudo cat ~/data/caddy/config/caddy/Caddyfile.autosave)

{
        acme_dns cloudflare {env.CF_API_TOKEN}
        email {env.EMAIL}
}
kro.ac {
        respond hello
}

3. The problem I’m having:

Largely described above, but to spell it out:

The working compose neatly prints hello in the browser, and curl -v https://kro.ac yields:

avirut@manatee:~/src/caddy$ curl -v https://kro.ac
*   Trying 129.146.73.219:443...
* Connected to kro.ac (129.146.73.219) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=kro.ac
*  start date: Dec 19 06:04:34 2022 GMT
*  expire date: Mar 19 06:04:33 2023 GMT
*  subjectAltName: host "kro.ac" matched cert's "kro.ac"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* Using Stream ID: 1 (easy handle 0xaaaaed30dc90)
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/2
> Host: kro.ac
> user-agent: curl/7.81.0
> accept: */*
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
< HTTP/2 200
< alt-svc: h3=":443"; ma=2592000
< content-type: text/plain; charset=utf-8
< server: Caddy
< content-length: 5
< date: Thu, 22 Dec 2022 23:25:06 GMT
<
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection #0 to host kro.ac left intact

The compose that doesn't work shows in the browser:

This site can’t be reached

**kro.ac** took too long to respond.

…and curl -v https://kro.ac yields:

avirut@manatee:~/src/caddy$ curl -v https://kro.ac
*   Trying 129.146.73.219:443...
* connect to 129.146.73.219 port 443 failed: Connection timed out
* Failed to connect to kro.ac port 443 after 129395 ms: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to kro.ac port 443 after 129395 ms: Connection timed out
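
The next thing I plan to verify is that the ports are actually published and listening on the host at all (a sketch; docker and ss are both available on this Ubuntu host):

# what Docker thinks it published for the container
docker port caddy
# what is actually listening on the host
sudo ss -ltnp | grep ':443'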

4. Error messages and/or full log output:

I haven't put debug in my Caddyfile yet, but I'll do that next and update below. Here's what I have so far.

Working compose:

avirut@manatee:~/src/caddy$ docker logs caddy --tail 10
{"level":"info","ts":1671751915.3906748,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
{"level":"info","ts":1671751915.3907554,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1671751915.3907666,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["kro.ac"]}
{"level":"info","ts":1671751915.391717,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x4000255810"}
{"level":"info","ts":1671751915.3917224,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1671751915.3918576,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1671751915.391757,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1671751915.392152,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1671751915.3929923,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1671751915.393023,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}

Not working compose:

avirut@manatee:~/src/caddy$ docker logs caddy --tail 10
{"level":"info","ts":1671751627.5699437,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1671751627.569971,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1671751627.5699794,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["kro.ac"]}
{"level":"info","ts":1671751627.5703514,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1671751627.571152,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1671751627.5712466,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40002bdb90"}
{"level":"info","ts":1671751627.571715,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1671751627.571736,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1671751627.5718656,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1671751627.5729554,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}

They look roughly the same to me, and nothing stands out in older logs either. Every entry is at "level": "info"; there is nothing with an error in it.

5. What I already tried:

I pared the "what isn't working" setup down to the bare-minimum difference shown above. I've also read some posts about caddy-docker-proxy with swarm, but nothing stands out as relevant, particularly since most existing posts on this forum have actual errors to work from. I'm at a bit of a dead end without error logs to dig through, so I'd really appreciate any insights. One more overlay sanity check I still plan to run is sketched below.
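
A sketch, assuming container name resolution works for standalone containers on an attachable overlay (which this one is, per the inspect output below): from another swarm node, attach a throwaway container to the overlay and see whether it can reach the caddy container:

# from a second swarm node: join the overlay and try to reach the
# caddy container by name across the headscale-backed data path
docker run --rm --network caddy alpine ping -c 3 caddy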

6. Links to relevant resources:

n/a

Update: here are some logs with debug enabled.
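
Since this Caddyfile is generated by caddy-docker-proxy, I enabled debug through a label on the caddy service rather than by editing a file. A sketch of the label, relying on CDP's documented behavior of rendering an empty-valued label as an argument-less directive:

labels:
  # renders a bare `debug` directive inside the caddy_0 global
  # options block, alongside acme_dns and email
  caddy_0.debug: ""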

Working compose file produces:

avirut@manatee:~/src/caddy$ docker logs caddy
{"level":"info","ts":1671752209.7618089,"logger":"docker-proxy","msg":"Running caddy proxy server"}
{"level":"info","ts":1671752209.7627208,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1671752209.762939,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1671752209.7629557,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
{"level":"info","ts":1671752209.7635164,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"","LabelPrefix":"caddy","PollingInterval":30,"ProcessCaddyfile":true,"ProxyServiceTasks":true,"IngressNetworks":"[caddy2]","DockerSockets":[""],"DockerCertsPath":[""],"DockerAPIsVersion":[""]}
{"level":"info","ts":1671752209.7648075,"logger":"docker-proxy","msg":"Connecting to docker events","DockerSocket":""}
{"level":"info","ts":1671752209.7650921,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[9432b91308d1fd5ae146e614833091fe46b87e7f7ed8a4af9fd9eb3d25c6751f:true]"}
{"level":"info","ts":1671752209.7722118,"logger":"docker-proxy","msg":"Swarm is available","new":true}
{"level":"info","ts":1671752209.7748249,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"{\n\tacme_dns cloudflare {env.CF_API_TOKEN}\n\tdebug\n\temail {env.EMAIL}\n}\nkro.ac {\n\trespond hello\n}\n"}
{"level":"info","ts":1671752209.7752464,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"logging\":{\"logs\":{\"default\":{\"level\":\"DEBUG\"}}},\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"kro.ac\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"body\":\"hello\",\"handler\":\"static_response\"}]}]}],\"terminal\":true}]}}},\"tls\":{\"automation\":{\"policies\":[{\"subjects\":[\"kro.ac\"],\"issuers\":[{\"challenges\":{\"dns\":{\"provider\":{\"api_token\":\"{env.CF_API_TOKEN}\",\"name\":\"cloudflare\"}}},\"email\":\"{env.EMAIL}\",\"module\":\"acme\"},{\"challenges\":{\"dns\":{\"provider\":{\"api_token\":\"{env.CF_API_TOKEN}\",\"name\":\"cloudflare\"}}},\"email\":\"{env.EMAIL}\",\"module\":\"zerossl\"}]}]}}}}"}
{"level":"info","ts":1671752209.7752988,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
{"level":"info","ts":1671752209.775895,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"55970","headers":{"Accept-Encoding":["gzip"],"Content-Length":["642"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
{"level":"info","ts":1671752209.7763703,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1671752209.7765586,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1671752209.7765765,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1671752209.7766507,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x4000337f80"}
{"level":"info","ts":1671752209.7767212,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1671752209.776779,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
{"level":"info","ts":1671752209.7768404,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"debug","ts":1671752209.776847,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":true}
{"level":"info","ts":1671752209.7769058,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"debug","ts":1671752209.7769308,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
{"level":"info","ts":1671752209.7769396,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1671752209.7769423,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["kro.ac"]}
{"level":"debug","ts":1671752209.7772765,"logger":"tls","msg":"loading managed certificate","domain":"kro.ac","expiration":1679205874,"issuer_key":"acme-v02.api.letsencrypt.org-directory","storage":"FileStorage:/data/caddy"}
{"level":"debug","ts":1671752209.7775235,"logger":"tls.cache","msg":"added certificate to cache","subjects":["kro.ac"],"expiration":1679205874,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"7292b256de0f4954ac54b0ae4e46af30c4e93f480c8efcfd740639aaf526522e","cache_size":1,"cache_capacity":10000}
{"level":"debug","ts":1671752209.7775493,"logger":"events","msg":"event","name":"cached_managed_cert","id":"71df9b9e-db94-455c-a117-0a6a26a22325","origin":"tls","data":{"sans":["kro.ac"]}}
{"level":"info","ts":1671752209.7776463,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1671752209.777659,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1671752209.7778163,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1671752209.7778313,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
{"level":"info","ts":1671752209.7787426,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"debug","ts":1671752210.4982831,"logger":"events","msg":"event","name":"tls_get_certificate","id":"f746ca67-36af-4878-aa77-67180b4a5d69","origin":"tls","data":{"client_hello":{"CipherSuites":[49195,49199,49196,49200,52393,52392,49161,49171,49162,49172,156,157,47,53,49170,10,4865,4866,4867],"ServerName":"hs.kro.ac","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2052,1027,2055,2053,2054,1025,1281,1537,1283,1539,513,515],"SupportedProtos":null,"SupportedVersions":[772,771],"Conn":{}}}}
{"level":"debug","ts":1671752210.4984062,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"hs.kro.ac"}
{"level":"debug","ts":1671752210.498412,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.kro.ac"}
{"level":"debug","ts":1671752210.4984143,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.*.ac"}
{"level":"debug","ts":1671752210.4984167,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.*.*"}
{"level":"debug","ts":1671752210.49842,"logger":"tls.handshake","msg":"all external certificate managers yielded no certificates and no errors","remote_ip":"129.153.79.80","remote_port":"50718","sni":"hs.kro.ac"}
{"level":"debug","ts":1671752210.498424,"logger":"tls.handshake","msg":"no certificate matching TLS ClientHello","remote_ip":"129.153.79.80","remote_port":"50718","server_name":"hs.kro.ac","remote":"129.153.79.80:50718","identifier":"hs.kro.ac","cipher_suites":[49195,49199,49196,49200,52393,52392,49161,49171,49162,49172,156,157,47,53,49170,10,4865,4866,4867],"cert_cache_fill":0.0001,"load_if_necessary":true,"obtain_if_necessary":true,"on_demand":false}
{"level":"debug","ts":1671752210.4987152,"logger":"http.stdlib","msg":"http: TLS handshake error from 129.153.79.80:50718: no certificate available for 'hs.kro.ac'"}
{"level":"debug","ts":1671752210.5332654,"logger":"events","msg":"event","name":"tls_get_certificate","id":"7715c67c-7aeb-471a-9beb-24ae667bdcfa","origin":"tls","data":{"client_hello":{"CipherSuites":[52393,52392,49195,49199,49196,49200,49161,49171,49162,49172,156,157,47,53,49170,10,4867,4865,4866],"ServerName":"hs.kro.ac","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2052,1027,2055,2053,2054,1025,1281,1537,1283,1539,513,515],"SupportedProtos":null,"SupportedVersions":[772,771],"Conn":{}}}}
{"level":"debug","ts":1671752210.5333202,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"hs.kro.ac"}
{"level":"debug","ts":1671752210.5333269,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.kro.ac"}
{"level":"debug","ts":1671752210.5333326,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.*.ac"}
{"level":"debug","ts":1671752210.533335,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.*.*"}
{"level":"debug","ts":1671752210.5334768,"logger":"tls.handshake","msg":"all external certificate managers yielded no certificates and no errors","remote_ip":"70.121.44.166","remote_port":"36518","sni":"hs.kro.ac"}
{"level":"debug","ts":1671752210.5334911,"logger":"tls.handshake","msg":"no certificate matching TLS ClientHello","remote_ip":"70.121.44.166","remote_port":"36518","server_name":"hs.kro.ac","remote":"70.121.44.166:36518","identifier":"hs.kro.ac","cipher_suites":[52393,52392,49195,49199,49196,49200,49161,49171,49162,49172,156,157,47,53,49170,10,4867,4865,4866],"cert_cache_fill":0.0001,"load_if_necessary":true,"obtain_if_necessary":true,"on_demand":false}
{"level":"debug","ts":1671752210.5335956,"logger":"http.stdlib","msg":"http: TLS handshake error from 70.121.44.166:36518: no certificate available for 'hs.kro.ac'"}
{"level":"debug","ts":1671752211.3757794,"logger":"events","msg":"event","name":"tls_get_certificate","id":"8af994e7-c2f3-440b-ac4f-2c36a1286dcf","origin":"tls","data":{"client_hello":{"CipherSuites":[49195,49199,49196,49200,52393,52392,49161,49171,49162,49172,156,157,47,53,49170,10,4865,4866,4867],"ServerName":"hs.kro.ac","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2052,1027,2055,2053,2054,1025,1281,1537,1283,1539,513,515],"SupportedProtos":null,"SupportedVersions":[772,771],"Conn":{}}}}
{"level":"debug","ts":1671752211.3758328,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"hs.kro.ac"}
{"level":"debug","ts":1671752211.3758388,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.kro.ac"}
{"level":"debug","ts":1671752211.3758419,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.*.ac"}
{"level":"debug","ts":1671752211.375844,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*.*.*"}
{"level":"debug","ts":1671752211.3758695,"logger":"tls.handshake","msg":"all external certificate managers yielded no certificates and no errors","remote_ip":"73.206.133.31","remote_port":"49384","sni":"hs.kro.ac"}
{"level":"debug","ts":1671752211.375876,"logger":"tls.handshake","msg":"no certificate matching TLS ClientHello","remote_ip":"73.206.133.31","remote_port":"49384","server_name":"hs.kro.ac","remote":"73.206.133.31:49384","identifier":"hs.kro.ac","cipher_suites":[49195,49199,49196,49200,52393,52392,49161,49171,49162,49172,156,157,47,53,49170,10,4865,4866,4867],"cert_cache_fill":0.0001,"load_if_necessary":true,"obtain_if_necessary":true,"on_demand":false}
{"level":"debug","ts":1671752211.3759663,"logger":"http.stdlib","msg":"http: TLS handshake error from 73.206.133.31:49384: no certificate available for 'hs.kro.ac'"}

And the non-working one produces:

avirut@manatee:~/src/caddy$ docker logs caddy
{"level":"info","ts":1671752317.6766307,"logger":"docker-proxy","msg":"Running caddy proxy server"}
{"level":"info","ts":1671752317.6774282,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//127.0.0.1:2019","//localhost:2019","//[::1]:2019"]}
{"level":"info","ts":1671752317.6778314,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1671752317.6778557,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
{"level":"info","ts":1671752317.6787035,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"","LabelPrefix":"caddy","PollingInterval":30,"ProcessCaddyfile":true,"ProxyServiceTasks":true,"IngressNetworks":"[caddy]","DockerSockets":[""],"DockerCertsPath":[""],"DockerAPIsVersion":[""]}
{"level":"info","ts":1671752317.679536,"logger":"docker-proxy","msg":"Connecting to docker events","DockerSocket":""}
{"level":"info","ts":1671752317.680253,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[mirf2s13hfoytek3tegz4k8se:true]"}
{"level":"info","ts":1671752317.6879752,"logger":"docker-proxy","msg":"Swarm is available","new":true}
{"level":"info","ts":1671752317.6907487,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"{\n\tacme_dns cloudflare {env.CF_API_TOKEN}\n\tdebug\n\temail {env.EMAIL}\n}\nkro.ac {\n\trespond hello\n}\n"}
{"level":"info","ts":1671752317.691402,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"logging\":{\"logs\":{\"default\":{\"level\":\"DEBUG\"}}},\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"kro.ac\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"body\":\"hello\",\"handler\":\"static_response\"}]}]}],\"terminal\":true}]}}},\"tls\":{\"automation\":{\"policies\":[{\"subjects\":[\"kro.ac\"],\"issuers\":[{\"challenges\":{\"dns\":{\"provider\":{\"api_token\":\"{env.CF_API_TOKEN}\",\"name\":\"cloudflare\"}}},\"email\":\"{env.EMAIL}\",\"module\":\"acme\"},{\"challenges\":{\"dns\":{\"provider\":{\"api_token\":\"{env.CF_API_TOKEN}\",\"name\":\"cloudflare\"}}},\"email\":\"{env.EMAIL}\",\"module\":\"zerossl\"}]}]}}}}"}
{"level":"info","ts":1671752317.6914515,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
{"level":"info","ts":1671752317.6921337,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"46704","headers":{"Accept-Encoding":["gzip"],"Content-Length":["642"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
{"level":"info","ts":1671752317.6925688,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1671752317.6927688,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1671752317.6927848,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1671752317.6930423,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x4000359500"}
{"level":"info","ts":1671752317.6930883,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1671752317.693117,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1671752317.6933396,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
{"level":"debug","ts":1671752317.693479,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":true}
{"level":"info","ts":1671752317.6935575,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"debug","ts":1671752317.6936438,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
{"level":"info","ts":1671752317.6936576,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1671752317.6936617,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["kro.ac"]}
{"level":"debug","ts":1671752317.6941679,"logger":"tls","msg":"loading managed certificate","domain":"kro.ac","expiration":1679205874,"issuer_key":"acme-v02.api.letsencrypt.org-directory","storage":"FileStorage:/data/caddy"}
{"level":"info","ts":1671752317.6942823,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"debug","ts":1671752317.6945643,"logger":"tls.cache","msg":"added certificate to cache","subjects":["kro.ac"],"expiration":1679205874,"managed":true,"issuer_key":"acme-v02.api.letsencrypt.org-directory","hash":"7292b256de0f4954ac54b0ae4e46af30c4e93f480c8efcfd740639aaf526522e","cache_size":1,"cache_capacity":10000}
{"level":"debug","ts":1671752317.6945922,"logger":"events","msg":"event","name":"cached_managed_cert","id":"c8afe267-4a05-49ff-9428-440b162543c7","origin":"tls","data":{"sans":["kro.ac"]}}
{"level":"info","ts":1671752317.6947827,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1671752317.694796,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1671752317.6951904,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1671752317.6973753,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}

Lastly, here's the `docker network inspect` output from when the compose attaches the container to each network:

caddy2, local bridge, does work:

avirut@manatee:~/src/caddy$ docker network inspect caddy2
[
    {
        "Name": "caddy2",
        "Id": "9432b91308d1fd5ae146e614833091fe46b87e7f7ed8a4af9fd9eb3d25c6751f",
        "Created": "2022-12-22T22:25:27.640876266Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.27.0.0/16",
                    "Gateway": "172.27.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "36d91d0f96a40cd29bc4710a5206bd96d235c56a774e16b33d59dd1d7ecdabbc": {
                "Name": "caddy",
                "EndpointID": "b33073b4f7c98f00bf47eca8c0ec59a73a214fd9e24b26188c1f759a58141e50",
                "MacAddress": "02:42:ac:1b:00:02",
                "IPv4Address": "172.27.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

caddy, swarm overlay, does not work:

avirut@manatee:~/src/caddy$ docker network inspect caddy
[
    {
        "Name": "caddy",
        "Id": "mirf2s13hfoytek3tegz4k8se",
        "Created": "2022-12-22T23:38:36.867636114Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "99affb2dc44045f70cb247573aeef9fd0cb63b0ece878a1070ad94842cfce54f": {
                "Name": "caddy",
                "EndpointID": "09daa233063b3b7cd6ff1b5e62579ef0271b91d812b572b767e54193da808b31",
                "MacAddress": "02:42:0a:00:01:0f",
                "IPv4Address": "10.0.1.15/24",
                "IPv6Address": ""
            },
            "lb-caddy": {
                "Name": "caddy-endpoint",
                "EndpointID": "8265cf7547abc1569887beb895f3548e9698600541ab0bbe90e4ff4ac5bf415a",
                "MacAddress": "02:42:0a:00:01:10",
                "IPv4Address": "10.0.1.16/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "829ca1dddee5",
                "IP": "100.64.0.1"
            }
        ]
    }
]
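
One more check worth recording here: confirming from inside the container that Caddy really loaded the generated config, by querying the admin API (localhost:2019, per the logs above); busybox wget ships in the alpine image:

# dump the active config from Caddy's admin endpoint; if this returns
# the expected JSON, Caddy itself is healthy and the problem is
# between the host's published ports and the container
docker exec caddy wget -qO- http://localhost:2019/config/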

Appreciate any help, and let me know if I can provide anything else that might be useful!

Probably best if you open an issue on the CDP GitHub repo.
