Security rules for certificate renewal

I am running Caddy on a Scaleway instance, primarily as a reverse proxy. I want to configure the instance's firewall (security group) to block all inbound traffic except for the required ports. However, it looks like I am missing some ports: whenever I block inbound traffic, I neither get certificates for new sites I add nor get my existing certificates renewed.

Here are the rules I have set up:

ACCEPT 80/TCP for all IPs
ACCEPT 443/TCP for all IPs

The log files then give me these errors:

caddy         | Activating privacy features... 2019/06/15 17:27:19 [INFO][FileStorage:/etc/caddycerts] Started certificate maintenance routine
caddy         | 2019/06/15 17:27:39 get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: lookup acme-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:55790->127.0.0.11:53: i/o timeout
caddy         | exit status 1
caddy         | Activating privacy features... 2019/06/15 17:27:42 [INFO][FileStorage:/etc/caddycerts] Started certificate maintenance routine
caddy         | 2019/06/15 17:28:02 get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: lookup acme-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:39280->127.0.0.11:53: i/o timeout
caddy         | exit status 1
caddy         | Activating privacy features... Activating privacy features... 2019/06/15 17:28:03 [INFO][FileStorage:/etc/caddycerts] Started certificate maintenance routine
caddy         | 2019/06/15 17:28:24 get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: lookup acme-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:43320->127.0.0.11:53: i/o timeout
caddy         | exit status 1
caddy         | Activating privacy features... 2019/06/15 17:28:25 [INFO][FileStorage:/etc/caddycerts] Started certificate maintenance routine
caddy         | 2019/06/15 17:28:45 get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: lookup acme-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:48832->127.0.0.11:53: i/o timeout
caddy         | exit status 1

It looks like Let's Encrypt is trying different (random?) ports. Any idea how I can configure the security group correctly so that I can block most ports by default? If I allow all incoming ports, I do not have any certificate issues.

My setup: Caddy 0.11.5 is running as a Docker container, with Cloudflare as the DNS provider:

caddy:
    container_name: caddy
    build:
      context: github.com/abiosoft/caddy-docker.git
      args:
        - plugins=git,filebrowser,cors,realip,expires,cache,cloudflare
    ports:
      - 80:80/tcp
      - 443:443/tcp
    environment:
      - "CADDYPATH=/etc/caddycerts"
      - "ACME_AGREE=true"
      - "ENABLE_TELEMETRY=false"
      - "CLOUDFLARE_EMAIL=my_email"
      - "CLOUDFLARE_API_KEY=my_api_key"
    volumes:
      - ${PWD}/caddy/Caddyfile:/etc/Caddyfile
      - ${HOME}/.caddy:/etc/caddycerts
    restart: unless-stopped

My Caddyfile:

subdomain.myserver.com {
    proxy / ip:8080
    tls {
        dns cloudflare
    }
}

You have to open outbound port 53 (and 80 and 443) for the ACME challenges to succeed; it's not just inbound traffic that you need to allow. Those errors indicate that the outbound DNS lookups are being blocked.
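For reference, with a default-DROP outbound policy the allow list might look like this (an iptables sketch, not Scaleway's security group UI; adapt the rules to whatever your firewall tool calls them):

```shell
# Outbound: DNS (UDP, plus TCP fallback) and HTTP/HTTPS for the ACME endpoints
iptables -A OUTPUT -p udp --dport 53  -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53  -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80  -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
# Inbound: let replies to connections the server itself initiated back in,
# so DNS answers and the ACME directory responses are not dropped
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

The conntrack rule is the important half: without it, a stateless default-DROP inbound policy silently eats the replies to perfectly valid outbound lookups.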

Hi Matt,

thanks for the hint that outbound rules are required as well. I currently do not restrict outbound traffic at all, and the behavior I am describing only changes when I modify the inbound traffic rules.

Here is a screenshot of my current setup. Notice the default policies: they are set to ACCEPT:

The moment I change the inbound policy to DROP, certificates stop being refreshed. Hence my assumption that I am missing certain inbound rules (port 53, maybe?). I have never changed the outbound default policy, meaning all outgoing traffic is accepted.

127.0.0.11:53 is Docker's embedded DNS. I have to say, I'm not sure exactly how Scaleway's firewall interacts with Docker's internal networking.
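One quick way to narrow it down (a diagnostic sketch, assuming the container is named caddy as in your compose file and the base image ships BusyBox's nslookup):

```shell
# Resolve the ACME endpoint from inside the container (goes through Docker DNS at 127.0.0.11)
docker exec caddy nslookup acme-v02.api.letsencrypt.org

# Resolve the same name on the host for comparison
nslookup acme-v02.api.letsencrypt.org
```

If the host lookup succeeds while the in-container one times out, the problem sits between the container and Docker's embedded resolver rather than at the upstream DNS.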

Things to test:

  • Does it work if you run Caddy outside of Docker?
  • Does it work if you disable Scaleway’s firewall and implement your own (via ufw or a similar tool)?
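If you try the ufw route, a minimal ruleset matching your current inbound policy might be (a sketch; ufw's defaults already allow all outgoing traffic):

```shell
ufw default deny incoming
ufw default allow outgoing
ufw allow 80/tcp     # HTTP, also used by the ACME HTTP-01 challenge
ufw allow 443/tcp    # HTTPS
ufw enable
```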

That sounds like a solid approach to identifying the issue. It will be a few days before I can continue with the analysis, but once I find the culprit, I will report back.

Thanks!


Just a heads-up, too: since posting the previous comment, I ran into a bug where one of my containers couldn't resolve DNS. It got an I/O timeout to, you guessed it, 127.0.0.11:53. It was solved by tearing down and recreating (not restarting) the container; in my case I used docker-compose rm -fs [container] (force stop and remove without confirmation) and then docker-compose up -d [container].

Oh I see, this sounds exactly like my case. I am using docker-compose as well. Keeping my fingers crossed…

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.