Caddy not issuing certificates behind Cloudflare nameservers (error 525)

1. The problem I’m having:

I’m trying to set up a reverse proxy in Caddy that connects to a FastAPI backend, on a domain whose nameservers point to Cloudflare. Caddy validates my Caddyfile without errors, but inside the Docker container /data/caddy/certificates is never even created, and Cloudflare returns error 525 and refuses to connect. It was running fine until I tried to redirect robots.txt from the www subdomain to the main domain; after that it broke, and it stayed broken even after I reverted the change. I also tried disabling the Cloudflare proxy, with no luck. It seems to be a certificate problem within Caddy. Super weird.
I’m super new to Caddy, and happy to be here. I’m hoping to learn more about this software and the web in general. You’ll actually be my hero if you can solve this thing. Thanks.

curl -vL output:

curl -vL https://liquamonitor.dev
* Host liquamonitor.dev:443 was resolved.
* IPv6: 2606:4700:3033::6815:4bdb, 2606:4700:3034::ac43:b667
* IPv4: 104.21.75.219, 172.67.182.103
*   Trying [2606:4700:3033::6815:4bdb]:443...
* schannel: disabled automatic use of client certificate
* ALPN: curl offers http/1.1
* ALPN: server accepted http/1.1
* Established connection to liquamonitor.dev (2606:4700:3033::6815:4bdb port 443) from 2804:214:4013:fcb:8d5c:32f8:df61:9370 port 51030
* using HTTP/1.x
> GET / HTTP/1.1
> Host: liquamonitor.dev
> User-Agent: curl/8.18.0
> Accept: */*
>
* Request completely sent off
* schannel: remote party requests renegotiation
* schannel: renegotiating SSL/TLS connection
* schannel: SSL/TLS connection renegotiated
< HTTP/1.1 525 <none>
< Date: Fri, 24 Apr 2026 00:26:15 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 15
< Connection: keep-alive
< Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Expires: Thu, 01 Jan 1970 00:00:01 GMT
< Referrer-Policy: same-origin
< X-Frame-Options: SAMEORIGIN
< Server: cloudflare
< CF-RAY: 9f10f9b65d64e01a-GIG
< alt-svc: h3=":443"; ma=86400
<
error code: 525* Connection #0 to host liquamonitor.dev:443 left intact

2. Error messages and/or full log output:

2026/04/24 00:10:12.267 INFO    http.log.access.default handled request {"request": {"remote_ip": "162.158.187.84", "remote_port": "12058", "client_ip": "162.158.187.84", "proto": "HTTP/1.1", "method": "GET", "host": "liquamonitor.dev", "uri": "/.well-known/acme-challenge/mxyCxgUEDBG9QsMzl_DFmiB00dDsOMc199S_528VHJY", "headers": {"Cf-Ray": ["9f10e231ecc569bb-LAX"], "Connection": ["Keep-Alive"], "User-Agent": ["Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"], "Cdn-Loop": ["cloudflare; loops=1"], "Cf-Ipcountry": ["US"], "Cf-Visitor": ["{\"scheme\":\"http\"}"], "Accept-Encoding": ["gzip"], "Cf-Connecting-Ip": ["2600:3000:2710:300::83"], "X-Forwarded-Proto": ["http"], "Accept": ["*/*"], "X-Forwarded-For": ["2600:3000:2710:300::83"]}}, "bytes_read": 0, "user_id": "", "duration": 0.000092897, "size": 87, "status": 200, "resp_headers": {"Server": ["Caddy"], "Content-Type": ["text/plain"]}}

2026/04/24 00:10:12.435 INFO    http.log.access.default handled request {"request": {"remote_ip": "104.23.243.128", "remote_port": "11490", "client_ip": "104.23.243.128", "proto": "HTTP/1.1", "method": "GET", "host": "liquamonitor.dev", "uri": "/.well-known/acme-challenge/mxyCxgUEDBG9QsMzl_DFmiB00dDsOMc199S_528VHJY", "headers": {"Cf-Ray": ["9f10e2337a24eefa-CMH"], "Cf-Connecting-Ip": ["2600:1f16:13c:c401:2db9:bf42:b3e3:41ea"], "X-Forwarded-Proto": ["http"], "Accept-Encoding": ["gzip"], "Cf-Ipcountry": ["US"], "Cf-Visitor": ["{\"scheme\":\"http\"}"], "User-Agent": ["Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"], "Cdn-Loop": ["cloudflare; loops=1"], "Connection": ["Keep-Alive"], "X-Forwarded-For": ["2600:1f16:13c:c401:2db9:bf42:b3e3:41ea"], "Accept": ["*/*"]}}, "bytes_read": 0, "user_id": "", "duration": 0.000102352, "size": 87, "status": 200, "resp_headers": {"Server": ["Caddy"], "Content-Type": ["text/plain"]}}

2026/04/24 00:10:12.557 INFO    http.log.access.default handled request {"request": {"remote_ip": "172.68.175.107", "remote_port": "14058", "client_ip": "172.68.175.107", "proto": "HTTP/1.1", "method": "GET", "host": "liquamonitor.dev", "uri": "/.well-known/acme-challenge/mxyCxgUEDBG9QsMzl_DFmiB00dDsOMc199S_528VHJY", "headers": {"Cf-Connecting-Ip": ["2600:1f14:a8b:502:c446:3210:b1c3:916d"], "Accept": ["*/*"], "Cdn-Loop": ["cloudflare; loops=1"], "Cf-Ipcountry": ["US"], "Cf-Visitor": ["{\"scheme\":\"http\"}"], "X-Forwarded-For": ["2600:1f14:a8b:502:c446:3210:b1c3:916d"], "X-Forwarded-Proto": ["http"], "Connection": ["Keep-Alive"], "User-Agent": ["Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"], "Accept-Encoding": ["gzip"], "Cf-Ray": ["9f10e23398f7ff06-PDX"]}}, "bytes_read": 0, "user_id": "", "duration": 0.000114354, "size": 87, "status": 200, "resp_headers": {"Server": ["Caddy"], "Content-Type": ["text/plain"]}}

2026/04/24 00:10:12.666 INFO    http.log.access.default handled request {"request": {"remote_ip": "104.23.217.86", "remote_port": "9736", "client_ip": "104.23.217.86", "proto": "HTTP/1.1", "method": "GET", "host": "liquamonitor.dev", "uri": "/.well-known/acme-challenge/mxyCxgUEDBG9QsMzl_DFmiB00dDsOMc199S_528VHJY", "headers": {"Accept-Encoding": ["gzip"], "Cf-Ipcountry": ["SE"], "Connection": ["Keep-Alive"], "Accept": ["*/*"], "Cf-Ray": ["9f10e2343ca28055-ARN"], "Cf-Connecting-Ip": ["2a05:d016:dcc:9100:fedf:e25f:c8b5:3091"], "Cf-Visitor": ["{\"scheme\":\"http\"}"], "X-Forwarded-For": ["2a05:d016:dcc:9100:fedf:e25f:c8b5:3091"], "Cdn-Loop": ["cloudflare; loops=1"], "X-Forwarded-Proto": ["http"], "User-Agent": ["Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"]}}, "bytes_read": 0, "user_id": "", "duration": 0.000084124, "size": 87, "status": 200, "resp_headers": {"Content-Type": ["text/plain"], "Server": ["Caddy"]}}

2026/04/24 00:10:12.876 INFO    http.log.access.default handled request {"request": {"remote_ip": "162.158.171.5", "remote_port": "10005", "client_ip": "162.158.171.5", "proto": "HTTP/1.1", "method": "GET", "host": "liquamonitor.dev", "uri": "/.well-known/acme-challenge/mxyCxgUEDBG9QsMzl_DFmiB00dDsOMc199S_528VHJY", "headers": {"X-Forwarded-Proto": ["http"], "Connection": ["Keep-Alive"], "X-Forwarded-For": ["2406:da18:611:5701:73a0:7514:4ca0:737b"], "Accept-Encoding": ["gzip"], "Cf-Visitor": ["{\"scheme\":\"http\"}"], "Accept": ["*/*"], "Cdn-Loop": ["cloudflare; loops=1"], "Cf-Connecting-Ip": ["2406:da18:611:5701:73a0:7514:4ca0:737b"], "Cf-Ipcountry": ["SG"], "User-Agent": ["Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"], "Cf-Ray": ["9f10e2340b65a066-SIN"]}}, "bytes_read": 0, "user_id": "", "duration": 0.000101011, "size": 87, "status": 200, "resp_headers": {"Server": ["Caddy"], "Content-Type": ["text/plain"]}

3. Caddy version:

2.11.2

4. How I installed and ran Caddy:

Installed and ran via Docker Compose.

a. System environment:

Docker Compose running from an Ubuntu 24.04 LTS VPS.

b. Command:

sudo docker compose -f ./infra/docker/compose.yaml up caddy --build

c. Service/unit/compose file:

services:
  backend:
    build:
      context: ../..
      dockerfile: ./infra/docker/Dockerfile
    container_name: backend
    env_file: ../../.env
    depends_on:
      - postgres
      - redis
    expose:
      - "8000"

  postgres:
    image: postgres:17.9
    container_name: postgres
    env_file: ../../.env
    volumes:
      - postgres_data:/var/lib/postgresql/data
    expose:
      - "5432"

  redis:
    image: redis:8.6.2
    container_name: redis
    volumes:
      - ../../infra/redis/redis.conf:/redis/config/redis.conf
      - redis_data:/redis/data/
    command: >
      bash -c "chown -R 999:999 /redis/data/ &&
      redis-server /redis/config/redis.conf"
    expose:
      - "6379"

  caddy:
    image: caddy:2.11.2
    container_name: caddy
    cap_add:
      - NET_ADMIN
    depends_on:
      - backend
    volumes:
      - ../../infra/caddy/:/etc/caddy
      - caddy_data:/data
      - caddy_config:/config
    ports:
      - "80:80"
      - "443:443"


volumes:
  postgres_data:
  redis_data:
  caddy_data:
  caddy_config:

d. My complete Caddy config:

{
	grace_period 15s
}

app.liquamonitor.dev {
	respond "Oops! The Liqua Monitor web app isn't ready yet. See you soon!"
}

www.liquamonitor.dev {
	header {
		X-Robots-Tag "noindex, nofollow"
	}
	handle {
		redir https://liquamonitor.dev{uri} 308
	}
}

liquamonitor.dev {
	encode zstd gzip
	log default {
		output stdout
		format console
		level debug
	}
	header {
		X-Robots-Tag "noindex, nofollow"
		Permissions-Policy interest-cohort=()
		Strict-Transport-Security "max-age=31536000; includeSubDomains"
		X-Content-Type-Options nosniff
		X-Frame-Options DENY
	}
	handle /api* {
		reverse_proxy backend:8000
	}
	handle /openapi.json {
		reverse_proxy backend:8000
	}
	handle /docs {
		reverse_proxy backend:8000
	}
	handle /redoc {
		reverse_proxy backend:8000
	}
	handle /robots.txt {
		root * /etc/caddy/
		rewrite * /robots.txt
		file_server
	}
	handle {
		redir * /docs 308
	}
}

You have configured Caddy to use the HTTP-01 challenge to issue certificates, which means that Let's Encrypt needs to connect to your web server. But with the proxy enabled, it’s connecting to Cloudflare instead, which can’t connect to your server because of your Cloudflare settings and because you don’t have a valid certificate.

The best solution is to configure Caddy to use the DNS-01 challenge instead, which means giving it API access to your Cloudflare account as it needs to add and remove DNS records automatically.
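A minimal sketch of the DNS-01 setup, assuming the caddy-dns/cloudflare module is compiled in (the stock caddy:2.11.2 image does not include it, so you'd need a custom build, e.g. `xcaddy build --with github.com/caddy-dns/cloudflare`) and a Cloudflare API token exposed to the container as `CF_API_TOKEN` (the variable name here is just the module README's convention, not something from your setup):

```
{
	grace_period 15s
	# Use the Cloudflare DNS-01 challenge for all sites.
	# Requires the caddy-dns/cloudflare plugin in the binary.
	acme_dns cloudflare {env.CF_API_TOKEN}
}

liquamonitor.dev {
	# ...rest of your site config unchanged...
}
```

You'd also need to pass `CF_API_TOKEN` into the caddy service (e.g. an `environment:` entry in the compose file), and the token only needs Zone:Read and DNS:Edit scope for that zone.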

Alternatively, you could generate a Cloudflare Origin Certificate for your domain and configure Caddy to use that instead.
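A sketch of the Origin Certificate route, assuming you generate the cert/key pair in the Cloudflare dashboard and mount them into the container (the file paths below are my invention, adjust to wherever you mount them):

```
liquamonitor.dev {
	# Hypothetical paths: Cloudflare Origin CA cert and key mounted into the container.
	tls /etc/caddy/origin-cert.pem /etc/caddy/origin-key.pem
	# ...rest of your site config unchanged...
}
```

Note that Origin Certificates are only trusted by Cloudflare's proxy, not by browsers, so this option only works with the proxy enabled (ideally with SSL mode set to Full (strict)).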

The HTTP-01 challenge should work if you disable the Cloudflare proxy. You might get a certificate issued that way, but don’t turn the proxy back on afterwards, because renewal will fail once the proxy is in front again.

Beyond the immediate certificate fix, a few quick checks help keep this from recurring.

For this combination of a Cloudflare 525 and an empty /data/caddy/certificates, I would check:

  • whether the hostname is proxied in Cloudflare while Caddy is still trying HTTP-01
  • whether the caddy_data volume is persistent and writable by the container
  • whether the Caddyfile path in Compose is the one actually loaded by the container
  • whether the container env file matches the config path you are editing
  • if switching to DNS-01, whether the Cloudflare token has only the needed zone read / DNS edit scope
  • after the fix, whether renewal still works with Cloudflare proxying enabled
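A few of these checks can be run directly against the stack; a sketch using the compose file from this thread (service names and paths are taken from it, nothing else is assumed):

```shell
# Is the data volume persistent? (The volume name may carry a
# compose project prefix, e.g. docker_caddy_data.)
docker volume inspect caddy_data

# Did Caddy actually obtain and store any certificates?
docker compose -f ./infra/docker/compose.yaml exec caddy \
  ls -R /data/caddy/certificates

# Is the container loading the Caddyfile you are editing?
docker compose -f ./infra/docker/compose.yaml exec caddy \
  caddy validate --config /etc/caddy/Caddyfile
```

If the second command errors out or shows an empty directory after Caddy has been up for a few minutes, issuance is still failing and the container logs should say why.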
