1. Output of `caddy version`:

```
avirut@manatee:~/src/caddy$ docker exec -it caddy /bin/sh
/srv # caddy version
v2.6.2 h1:wKoFIxpmOJLGl3QXoo6PNbYvGW4xLEgo32GPBEjWL8o=
```
2. How I run Caddy:
a. System environment:
Oracle Cloud VPS running Ubuntu 22.04, using Docker containers to run Caddy alongside a handful of other services. Caddy uses the caddy-docker-proxy plugin to automatically reverse proxy other services as defined in their compose files. This setup has been working for me for quite a while, but I now want to use other hosts as well, so I created a private network using headscale (self-hosted Tailscale), which lets me build a Docker swarm with an overlay network spanning both my cloud VPS and some servers at home. My next step is simply to swap out the local bridge network for the swarm overlay network. My expectation was that caddy-docker-proxy would keep working as before, but would now also be able to reverse proxy services on other hosts within the swarm. Instead, I get no response from Caddy at all when I use the overlay network.
b. Command:
```
docker compose up -d
```
c. Service/unit/compose file:
Dockerfile:

```Dockerfile
ARG CADDY_VERSION=2.6.2

FROM caddy:${CADDY_VERSION}-builder AS builder
RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2@v2.8.1 \
    --with github.com/caddy-dns/cloudflare@ed330a8 \
    --with github.com/greenpau/caddy-security@v1.1.16 \
    --with github.com/greenpau/caddy-trace@v1.1.10

FROM caddy:${CADDY_VERSION}-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
CMD ["caddy", "docker-proxy"]
```
I have two compose files: my old one, which works, and the new one I'm trying, which doesn't. The only difference between them is the networks.
Here’s a compose that works:
```yaml
---
version: "3.9"

services:
  caddy:
    build:
      context: .
      dockerfile: Dockerfile
    image: caddy:v2.6.2
    container_name: caddy
    restart: unless-stopped
    labels:
      caddy_0.acme_dns: "cloudflare {env.CF_API_TOKEN}"
      caddy_0.email: "{env.EMAIL}"
      caddy_1: kro.ac
      caddy_1.respond: "hello"
    env_file:
      - .env
    environment:
      - CADDY_INGRESS_NETWORKS=caddy2
    volumes:
      # for caddy-docker-proxy
      - /var/run/docker.sock:/var/run/docker.sock
      # for caddy itself
      - ~/data/caddy/data:/data
      - ~/data/caddy/config:/config
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    networks:
      - caddy2

# caddy2 is a local bridge network;
# I'm happy to share the results of `docker network inspect`
# if that helps as well
networks:
  caddy2:
    external: true
```
versus one that doesn’t:
```yaml
---
version: "3.9"

services:
  caddy:
    build:
      context: .
      dockerfile: Dockerfile
    image: caddy:v2.6.2
    container_name: caddy
    restart: unless-stopped
    labels:
      caddy_0.acme_dns: "cloudflare {env.CF_API_TOKEN}"
      caddy_0.email: "{env.EMAIL}"
      caddy_1: kro.ac
      caddy_1.respond: "hello"
    env_file:
      - .env
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
    volumes:
      # for caddy-docker-proxy
      - /var/run/docker.sock:/var/run/docker.sock
      # for caddy itself
      - ~/data/caddy/data:/data
      - ~/data/caddy/config:/config
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    networks:
      - caddy

# caddy is a swarm overlay network
networks:
  caddy:
    external: true
    driver: overlay
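```

For reference, I created the overlay network along these lines (a sketch from memory; the exact flags are an assumption, not copied from my shell history). One detail I want to flag: a standalone container started with `docker compose up`, rather than as a swarm service, can only join an overlay network that was created with `--attachable`:

```shell
# hypothetical recreation of the overlay network; --attachable is
# required for standalone (non-swarm-service) containers to join
docker network create --driver overlay --attachable caddy
```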
Setting aside whether the swarm networking/ingress/etc. works at all, the former serves `hello` at https://kro.ac, whereas the latter doesn't respond at all.
d. My complete Caddy config:

Either compose yields the exact same Caddyfile (pulled with `sudo cat ~/data/caddy/config/caddy/Caddyfile.autosave`):

```
{
	acme_dns cloudflare {env.CF_API_TOKEN}
	email {env.EMAIL}
}

kro.ac {
	respond hello
}
```
3. The problem I’m having:
Largely described above, but: the working compose neatly prints `hello` in the browser, and `curl -v https://kro.ac` yields:
```
avirut@manatee:~/src/caddy$ curl -v https://kro.ac
*   Trying 129.146.73.219:443...
* Connected to kro.ac (129.146.73.219) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=kro.ac
*  start date: Dec 19 06:04:34 2022 GMT
*  expire date: Mar 19 06:04:33 2023 GMT
*  subjectAltName: host "kro.ac" matched cert's "kro.ac"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* Using Stream ID: 1 (easy handle 0xaaaaed30dc90)
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/2
> Host: kro.ac
> user-agent: curl/7.81.0
> accept: */*
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
< HTTP/2 200
< alt-svc: h3=":443"; ma=2592000
< content-type: text/plain; charset=utf-8
< server: Caddy
< content-length: 5
< date: Thu, 22 Dec 2022 23:25:06 GMT
<
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection #0 to host kro.ac left intact
```
The compose which does not work shows in the browser:

> This site can’t be reached
> **kro.ac** took too long to respond.

…and `curl -v https://kro.ac` yields:
```
avirut@manatee:~/src/caddy$ curl -v https://kro.ac
*   Trying 129.146.73.219:443...
* connect to 129.146.73.219 port 443 failed: Connection timed out
* Failed to connect to kro.ac port 443 after 129395 ms: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to kro.ac port 443 after 129395 ms: Connection timed out
```
4. Error messages and/or full log output:
I haven’t put `debug` in my Caddyfile yet, but I’ll do that next and update below. Here’s what I’ve got so far.

Working compose:
```
avirut@manatee:~/src/caddy$ docker logs caddy --tail 10
{"level":"info","ts":1671751915.3906748,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
{"level":"info","ts":1671751915.3907554,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1671751915.3907666,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["kro.ac"]}
{"level":"info","ts":1671751915.391717,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x4000255810"}
{"level":"info","ts":1671751915.3917224,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1671751915.3918576,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1671751915.391757,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1671751915.392152,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1671751915.3929923,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1671751915.393023,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
```
Not working compose:
```
avirut@manatee:~/src/caddy$ docker logs caddy --tail 10
{"level":"info","ts":1671751627.5699437,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1671751627.569971,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1671751627.5699794,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["kro.ac"]}
{"level":"info","ts":1671751627.5703514,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1671751627.571152,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1671751627.5712466,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40002bdb90"}
{"level":"info","ts":1671751627.571715,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1671751627.571736,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1671751627.5718656,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1671751627.5729554,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
```
They look roughly the same to me, and nothing stands out in the older logs either. Every entry has `"level":"info"`; nothing contains an error.
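(A quick aside on reading these logs: the `ts` fields are Unix epoch seconds, so they can be made human-readable with `date`. This assumes GNU `date`; BSD `date` takes different flags.)

```shell
# Convert the "server running" timestamp from the non-working run
date -u -d @1671751627 +"%Y-%m-%dT%H:%M:%SZ"
# → 2022-12-22T23:27:07Z
```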
5. What I already tried:
I pared the failing setup down to the bare-minimum difference shown above. I’ve also read some posts about caddy-docker-proxy with swarm, but nothing stands out as relevant, particularly since most existing posts on this forum have actual errors to show when things aren’t working. I’m at a bit of a dead end without error logs to work through, so I’d really appreciate any insights or help.
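For completeness, these are the checks I plan to run next (a sketch; `caddy` is the overlay network name from the compose above, and the `wget` flags assume the busybox wget in the alpine image). The attachable check matters because this container is started with `docker compose up` rather than as a swarm service:

```shell
# Is the overlay network attachable? Standalone containers can
# only join overlay networks created with --attachable
docker network inspect caddy --format '{{.Driver}} attachable={{.Attachable}}'

# Did the caddy container actually get an address on the network?
docker inspect caddy --format '{{json .NetworkSettings.Networks}}'

# Does Caddy answer from inside the container, bypassing the network path?
docker exec caddy wget -qO- --no-check-certificate https://localhost
```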
6. Links to relevant resources:
n/a