ACME timeouts upon server start

1. Caddy version (caddy version):

The caddy:latest docker image.

2. How I run Caddy:

I run caddy using a simple docker-compose file.

a. System environment:

uname -a:

 Linux vps 5.14.11-arch1-1 #1 SMP PREEMPT Sun, 10 Oct 2021 00:48:26 +0000 x86_64 GNU/Linux

(system time isn’t correct apparently)

docker -v:

Docker version 20.10.9, build c2ea9bc90b

docker-compose -v:

Docker Compose version 2.0.1

b. Command:

docker-compose up -d

c. Service/unit/compose file:


version: "3"

    image: caddy:latest
    container_name: caddy
    command: caddy run --config /etc/caddy/Caddyfile
      - 80:80
      - 443:443
      - ./resources/caddy/Caddyfile:/etc/caddy/Caddyfile
      - ./resources/caddy/data:/data
      - ./resources/caddy/config:/config

d. My complete Caddyfile or JSON config:


{
	servers {
		timeouts {
			read_body 60s
			read_header 60s
			write 60s
			idle 60s
		}
	}
}

{
	respond "Hello, World!"

	tls {
		issuer acme {
			timeout 60s
		}
	}
}

3. The problem I’m having:

Hello everybody,

I’m trying to spin up a minimal but complete server instance to try out my TLS certificate, which I issued with certbot just this morning.

However, my VPS seems to be experiencing serious latency right now. While serving on port 80 works (with some delay), the ACME requests time out.

I can, however, reach the ACME directory via curl.

As one can see from the Caddyfile, I have tried to increase the timeouts, but I don’t believe they are being applied: even with a timeout of 60s, the Caddy log output shows that only 30s elapsed before the timeout.

I have opened ports :80, :443, and my SSH port, and I am able to serve unencrypted HTTP on port 80.
Could the issue have to do with other relevant ports being blocked?

How should I proceed here? Any help is much appreciated :slight_smile:

4. Error messages and/or full log output:

docker-compose logs -f:

caddy  | {"level":"info","ts":1634199110.1639936,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
caddy  | {"level":"info","ts":1634199110.1684558,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019",""]}
caddy  | {"level":"info","ts":1634199110.1688757,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy  | {"level":"info","ts":1634199110.1689687,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy  | {"level":"debug","ts":1634199110.169312,"logger":"http","msg":"starting server loop","address":"[::]:443","http3":false,"tls":true}
caddy  | {"level":"debug","ts":1634199110.169486,"logger":"http","msg":"starting server loop","address":"[::]:80","http3":false,"tls":false}
caddy  | {"level":"info","ts":1634199110.169578,"logger":"http","msg":"enabling automatic TLS certificate management","domains":[""]}
caddy  | {"level":"info","ts":1634199110.1699407,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy  | {"level":"info","ts":1634199110.1700227,"msg":"serving initial configuration"}
caddy  | {"level":"info","ts":1634199110.1701257,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
caddy  | {"level":"info","ts":1634199110.1706033,"logger":"tls","msg":"finished cleaning storage units"}
caddy  | {"level":"info","ts":1634199110.170836,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00055ebd0"}
caddy  | {"level":"info","ts":1634199110.1712968,"logger":"tls.obtain","msg":"acquiring lock","identifier":""}
caddy  | {"level":"info","ts":1634199110.1742458,"logger":"tls.obtain","msg":"lock acquired","identifier":""}
caddy  | {"level":"debug","ts":1634199110.175821,"logger":"tls.obtain","msg":"trying issuer 1/1","issuer":""}
caddy  | {"level":"warn","ts":1634199120.1845093,"logger":"tls.issuance.acme.acme_client","msg":"HTTP request failed; retrying","url":"","error":"performing request: Get \"\": dial tcp: lookup on read udp> i/o timeout"}
caddy  | {"level":"warn","ts":1634199130.4366188,"logger":"tls.issuance.acme.acme_client","msg":"HTTP request failed; retrying","url":"","error":"performing request: Get \"\": dial tcp: lookup on read udp> i/o timeout"}
caddy  | {"level":"warn","ts":1634199140.6898685,"logger":"tls.issuance.acme.acme_client","msg":"HTTP request failed; retrying","url":"","error":"performing request: Get \"\": dial tcp: lookup on read udp> i/o timeout"}
caddy  | {"level":"error","ts":1634199140.6900551,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"","issuer":"","error":"registering account [] with server: provisioning client: performing request: Get \"\": dial tcp: lookup on read udp> i/o timeout"}
caddy  | {"level":"error","ts":1634199140.6901,"logger":"tls.obtain","msg":"will retry","error":"[] Obtain: registering account [] with server: provisioning client: performing request: Get \"\": dial tcp: lookup on read udp> i/o timeout","attempt":1,"retrying_in":60,"elapsed":30.515195522,"max_duration":2592000}


The ACME directory, fetched with curl:

{
  "I8tqBOqYkqM": "",
  "keyChange": "",
  "meta": {
    "caaIdentities": [""],
    "termsOfService": "",
    "website": ""
  },
  "newAccount": "",
  "newNonce": "",
  "newOrder": "",
  "revokeCert": ""
}

5. What I already tried:

I tried to ensure that the relevant ports are open.

docker ps:

CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS                                                                                NAMES
f84f25959e7a   caddy:latest   "caddy run --config …"   About a minute ago   Up About a minute>80/tcp, :::80->80/tcp,>443/tcp, :::443->443/tcp, 2019/tcp   caddy

I invoke dockerd with --iptables=false, since I want all relevant rules to come from nftables.
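For reference, the equivalent daemon-level setting (rather than the command-line flag) would go in /etc/docker/daemon.json — shown here only as a sketch of what I mean:

```
{
  "iptables": false
}
```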
iptables -L:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

nft list ruleset:

table inet filter {
	chain input {
		type filter hook input priority filter; policy drop;
		ct state invalid drop comment "early drop of invalid connections"
		ct state { established, related } accept comment "allow tracked connections"
		iifname "lo" accept comment "allow from loopback"
		ip protocol icmp accept comment "allow icmp"
		meta l4proto ipv6-icmp accept comment "allow icmp v6"
		tcp dport 80 accept comment "allow http"
		tcp dport 443 accept comment "allow https"
		tcp dport 12345 accept comment "allow sshd"
		meta pkttype host limit rate 5/second counter packets 753 bytes 41811 reject with icmpx admin-prohibited
		counter packets 3934 bytes 236099
	}

	chain forward {
		type filter hook forward priority filter; policy drop;
	}
}

6. Links to relevant resources:

Oh interesting, didn’t realize Compose v2 was released two weeks ago :open_mouth: news to me! :slight_smile: Release v2.0.0 · docker/compose · GitHub

You can remove this line in your docker-compose.yml; it’s redundant.

This error actually means that Caddy wasn’t able to resolve DNS for the ACME endpoint using the system’s resolver. You might want to reconfigure DNS resolution to make sure that hostname can resolve properly.
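One thing you could try is pinning resolvers for the container explicitly; Compose supports a dns key on the service. A minimal sketch — the resolver addresses here are just well-known public ones, swap in whatever you trust:

```yaml
services:
  caddy:
    image: caddy:latest
    dns:
      -  # Cloudflare public resolver (example)
      -      # Google public resolver (example)
```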

The reason the timeout doesn’t have any effect is that Caddy never actually connects to Let’s Encrypt servers, so that timeout never comes into play. The dialing step uses a different timeout (which doesn’t seem configurable right now).

You are apparently right about that.
It seems to work if I get docker-compose to provide the host’s /etc/resolv.conf, for instance by using the default bridge network via network_mode: bridge for the caddy service (in the docker-compose.yml). Using network_mode: host works as well.
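For anyone finding this later, the workaround is a single line on the service; a sketch of the relevant part of my docker-compose.yml:

```yaml
services:
  caddy:
    image: caddy:latest
    network_mode: bridge  # default bridge: the container inherits a copy of the host's /etc/resolv.conf
```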


Still, according to the documentation, with a user-defined network (which docker-compose creates automatically, named *_default), Docker’s embedded DNS server should be used, and it should resolve external addresses using the host’s DNS servers… :thinking:


Unfortunately, I’m not too sure what went wrong there. I’m pretty proficient with Docker, but its underlying networking stack is not one of my areas of expertise. You might need to ask on some Docker forums to get an explanation of what went wrong there.

Glad you got it working now though!


I believe setting --iptables=false is the culprit here; using nftables instead of iptables is not simple.
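With --iptables=false, Docker stops installing the NAT and forward rules that normally give bridged containers outbound access, so the container’s DNS queries never reach an upstream resolver. If you keep that flag, you have to recreate those rules in nftables yourself. A rough sketch, assuming the default docker0 bridge on and eth0 as the uplink (adjust both to your setup):

```
table ip nat {
	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		# masquerade traffic leaving the default Docker bridge subnet
		ip saddr oifname "eth0" masquerade
	}
}

table inet filter {
	chain forward {
		type filter hook forward priority filter; policy drop;
		# let container traffic out, and replies back in
		iifname "docker0" accept
		oifname "docker0" ct state { established, related } accept
	}
}
```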


This topic was automatically closed after 30 days. New replies are no longer allowed.