OK, I switched to using only the Cloudflare plugin after the suggestion. I believe the proper certificates are being issued, but I'm having trouble actually accessing the sites. I ended up following the steps from this blog to get a little further, but I'm guessing my issues come down to ports 443 and 80 not being exposed.
Ultimately, when browsing to my sites I see Cloudflare error 521, which as I understand it means Cloudflare couldn't open a connection to my origin at all.
Is this just a matter of choosing which ports to expose, not necessarily 443 or 80 specifically?
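If that's the case, a quick reachability check from outside my network should narrow it down (SERVER_IP below is just a placeholder for the origin's public IP):

# is anything answering on 443/80 from the outside?
nc -zv SERVER_IP 443
nc -zv SERVER_IP 80

# attempt a TLS handshake straight at the origin, bypassing Cloudflare's proxy;
# --resolve pins the hostname to the origin IP, -k skips cert verification
curl -kv --resolve vaultwarden.example.com:443:SERVER_IP https://vaultwarden.example.com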
Logs from caddy start inside the container:
2024/07/18 23:01:45.981 INFO http.auto_https enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2024/07/18 23:01:45.981 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc000120f80"}
2024/07/18 23:01:45.981 INFO http enabling HTTP/3 listener {"addr": ":443"}
2024/07/18 23:01:45.981 INFO failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.
2024/07/18 23:01:45.981 INFO http.log server running {"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2024/07/18 23:01:45.982 INFO http.log server running {"name": "remaining_auto_https_redirects", "protocols": ["h1", "h2", "h3"]}
2024/07/18 23:01:45.982 INFO http enabling automatic TLS certificate management {"domains": ["vaultwarden.example.com", "app1.example.com", "app2.example.com"]}
2024/07/18 23:01:45.983 INFO autosaved config (load with --resume flag) {"file": "/config/caddy/autosave.json"}
2024/07/18 23:01:45.983 INFO serving initial configuration
Successfully started Caddy (pid=29) - Caddy is running in the background
2024/07/18 23:01:45.983 INFO tls storage cleaning happened too recently; skipping for now {"storage": "FileStorage:/data/caddy", "instance": "0f01db59-910c-4e10-bd40-984e7b00e63c", "try_again": "2024/07/19 23:01:45.983", "try_again_in": 86399.999999548}
2024/07/18 23:01:45.983 INFO tls finished cleaning storage units
Output of docker logs caddy:
{"level":"info","ts":1721343664.5588794,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1721343664.559036,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0000baf00"}
{"level":"info","ts":1721343664.559062,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1721343664.5590956,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1721343664.559458,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1721343664.5595055,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
{"level":"info","ts":1721343664.5595841,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1721343664.5596397,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1721343664.5596592,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["vaultwarden.example.com","app1.example.com","app2.example.com"]}
{"level":"info","ts":1721343664.5642488,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1721343664.5642936,"msg":"serving initial configuration"}
{"level":"info","ts":1721343664.5679274,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
{"level":"info","ts":1721343664.5702736,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1721343664.7855651,"logger":"tls.issuance.acme.acme_client","msg":"got renewal info","names":["vaultwarden.example.com"],"window_start":1726349205,"window_end":1726522005,"selected_time":1726391728,"recheck_after":1721365264.7855573,"explanation_url":""}
{"level":"info","ts":1721343664.786398,"logger":"tls","msg":"updated ACME renewal information","identifiers":["vaultwarden.example.com"],"cert_hash":"dd43dd41d40606f14918f06c511da94a341d7ebdbcd269db6b5549fbf32db173","ari_unique_id":"nytfzzwhT50Et-0rLMTGcIvS1w0.AyEftawVLi1hg1ZBk-4ebQGt","cert_expiry":1729026405,"selected_time":1726465510,"next_update":1721365264.7855573,"explanation_url":""}
{"level":"info","ts":1721343664.8101387,"logger":"tls.issuance.acme.acme_client","msg":"got renewal info","names":["app1.example.com"],"window_start":1726348681.3333333,"window_end":1726521481.3333333,"selected_time":1726364505,"recheck_after":1721365264.8101313,"explanation_url":""}
{"level":"info","ts":1721343664.8109071,"logger":"tls","msg":"updated ACME renewal information","identifiers":["app1.example.com"],"cert_hash":"3182fa32b6d6c1691ee2ca0471bc03937087f36278f7aede682b8844b535e1de","ari_unique_id":"nytfzzwhT50Et-0rLMTGcIvS1w0.BEHzXl1HmNh58Onrnq_K3ZWn","cert_expiry":1729025881,"selected_time":1726493682,"next_update":1721365264.8101313,"explanation_url":""}
{"level":"info","ts":1721343664.8761413,"logger":"tls.issuance.acme.acme_client","msg":"got renewal info","names":["app2.example.com"],"window_start":1726349183.3333333,"window_end":1726521983.3333333,"selected_time":1726497360,"recheck_after":1721365264.8761332,"explanation_url":""}
{"level":"info","ts":1721343664.8770852,"logger":"tls","msg":"updated ACME renewal information","identifiers":["app2.example.com"],"cert_hash":"57736abed0517f6c010462d1cecd567507ad3b603821ed9597a09aaa647cbec3","ari_unique_id":"kydGmAOpUWiOmNbEQkjbI79YlNI.A_BskbzTlMOY_rkDLwU80atT","cert_expiry":1729026383,"selected_time":1726505572,"next_update":1721365264.8761332,"explanation_url":""}
Updated Dockerfile:
FROM caddy:2.8.4-builder AS builder
RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare

FROM caddy:2.8.4
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
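To double-check that the plugin actually made it into the binary, I can rebuild and list the modules (assuming the Dockerfile sits in the current directory):

docker build -t caddy:cloudflare .
docker run --rm caddy:cloudflare caddy list-modules | grep cloudflare
# expecting to see dns.providers.cloudflare in the output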
Updated docker-compose for Caddy:
services:
  caddy:
    build:
      context: ./
      dockerfile: Dockerfile
    image: caddy:cloudflare
    container_name: caddy
    environment:
      # token for the Cloudflare DNS challenge; the Caddyfile reads it
      # via {env.CLOUDFLARE_API_TOKEN}
      CLOUDFLARE_API_TOKEN: "secret-secret"
    networks:
      - my-caddy-backend
      - my-caddy-frontend
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./site:/srv
      - data:/data
      - config:/config
    restart: unless-stopped

volumes:
  data:
  config:

networks:
  my-caddy-backend:
  my-caddy-frontend:
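To rule the port question in or out, I can ask Docker what it actually published, and check that the token is visible inside the container (this only reports whether it's set, not its value):

docker port caddy
# expecting:
# 80/tcp -> 0.0.0.0:80
# 443/tcp -> 0.0.0.0:443
# 443/udp -> 0.0.0.0:443

docker exec caddy sh -c '[ -n "$CLOUDFLARE_API_TOKEN" ] && echo token set || echo token MISSING'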
Updated docker-compose for the sample app:
services:
  whoami2:
    image: traefik/whoami
    container_name: app2
    expose:
      - 80
    networks:
      - my-caddy-backend
    restart: unless-stopped

networks:
  my-caddy-backend:
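One thing I'm unsure of: these are two separate compose files, and as I understand it each compose project prefixes its network names, so the two my-caddy-backend definitions may be two different networks unless one is declared external. Something like this should show whether caddy and app2 actually share a network (NETWORK_NAME is a placeholder for whatever the first command prints):

docker network ls | grep my-caddy-backend
# if two networks show up (e.g. caddy_my-caddy-backend and app2_my-caddy-backend),
# the containers aren't on the same one
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' NETWORK_NAME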
Updated Caddyfile - I've tried both reverse_proxy styles, shown below for vaultwarden and app2:
app1.example.com {
	tls {
		dns cloudflare {
			api_token {env.CLOUDFLARE_API_TOKEN}
		}
	}
	reverse_proxy app2:80
}

vaultwarden.example.com {
	tls {
		dns cloudflare {
			api_token {env.CLOUDFLARE_API_TOKEN}
		}
	}
	# second variant, with an explicit matcher + handle
	# (redundant inside a host-scoped site block, but shouldn't change behavior)
	@vaultwarden host vaultwarden.example.com
	handle @vaultwarden {
		reverse_proxy vaultwarden:80
	}
}
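And to separate a Cloudflare-to-origin problem from a Caddy-to-backend one, I can test the upstream from inside the caddy container itself (the official caddy image is Alpine-based, so busybox wget should be available):

docker exec caddy wget -qO- http://app2:80
# the whoami app should dump the request details if the backend network is wired up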