On-Demand TLS rate limits

Does the On-Demand TLS feature prevent issues with hitting rate limits with Let’s Encrypt? I just hit that this week with one of my services, presumably because I was taking it down and bringing it back up somewhat frequently.

You shouldn’t be hitting rate limits as long as you’re properly persisting Caddy’s data storage: Caddy backs off before rate limits are reached, using that storage as its memory of recent attempts. Make sure you’re persisting it correctly.
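A quick way to verify, assuming a container named caddy and a named volume caddy_data (adjust both to your setup; the path is Caddy v2’s default storage location in the official Docker image):

# Look inside the running container at Caddy's certificate storage
docker exec caddy ls -R /data/caddy/certificates

# Or check the named volume from the host, without going through the container
docker run --rm -v caddy_data:/data alpine ls -R /data/caddy/certificates

If the certificates are still there after a restart, persistence is working.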

But yes, as of Caddy v2.3, the addition of ZeroSSL should help as a fallback if you can’t get a cert from Let’s Encrypt.

Pretty sure I followed the setup listed on the Docker Hub page. Here are the relevant snippets from my docker-compose file, if you wouldn’t mind taking a look:

---
version: "3.4"

services:
  caddy:
    build: .
    env_file:
      - .caddy-env
    container_name: caddy
    volumes:
      - /var/www/files:/var/www
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data
    ports:
      - 80:80
      - 443:443
    labels:
      caddy_1.email: "i.am@chrisrees.dev"
      caddy_1.admin: "off"
      caddy_2: (cloudflare-tls)
      caddy_2.tls.dns: "cloudflare {env.CLOUDFLARE_API_KEY}"
      caddy_3: files.chrisrees.dev
      caddy_3.file_server: "browse"
      caddy_3.root: "* /var/www"
      caddy_3.import: "cloudflare-tls"
    restart: unless-stopped

  lidarr:
    image: ghcr.io/linuxserver/lidarr
    container_name: lidarr
    environment:
      - PUID=994 # lidarr
      - PGID=995 # media
      - TZ=America/New_York
      - UMASK_SET=002
    volumes:
      - /data/config/lidarr:/config
      - /data:/data
    ports:
      - 8686:8686
    depends_on:
      - caddy
    labels:
      caddy: lidarr.chrisrees.dev
      caddy.reverse_proxy: "{{upstreams 8686}}"
      caddy.reverse_proxy.flush_interval: "-1" # quoted: label values should be strings
      caddy.import: "cloudflare-tls"
    restart: unless-stopped


volumes:
  caddy_data: {}

I also thought the purpose of caddy_data was to reuse certificates and avoid exactly the issue I’m seeing, but my logs show:

{"level":"error","ts":1609403522.7708068,"logger":"tls.obtain","msg":"will retry","error":"[lidarr.chrisrees.dev] Obtain: [lidarr.chrisrees.dev] creating new order: request to https://acme-v02.api.letsencrypt.org/acme/new-order failed after 1 attempts: HTTP 429 urn:ietf:params:acme:error:rateLimited - Error creating new order :: too many certificates already issued for exact set of domains: lidarr.chrisrees.dev: see https://letsencrypt.org/docs/rate-limits/ (ca=https://acme-v02.api.letsencrypt.org/directory)","attempt":1,"retrying_in":60,"elapsed":4.751438741,"max_duration":2592000}
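From the linked rate-limits page, that’s the duplicate certificate limit: too many certificates issued for the exact same set of domains within a week. While I sort out the storage, I could point Caddy at the staging CA so I don’t keep burning through the limit; a sketch using the same label style as my compose file (staging certificates aren’t browser-trusted, so this is for debugging only):

    labels:
      # Global option: use Let's Encrypt's staging ACME endpoint while testing
      caddy_1.acme_ca: "https://acme-staging-v02.api.letsencrypt.org/directory"

I’d remove that again once certificates are persisting across restarts.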

My setup of the config dir hasn’t changed since I switched over to running in Docker.

EDIT:
As an aside, do you know the normal turnaround time for a release hitting Docker Hub, so I can also update to 2.3?

The example on Docker Hub doesn’t have {}. Not sure if that makes a difference :thinking:

Try a docker volume inspect caddy_data and see where it’s located, and make sure it looks okay. (It might be prefixed with your project name because of docker-compose, so you may need to find it with docker volume ls first.)
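For example (the names here are just what the compose defaults would produce; substitute whatever your volume is actually called):

# List volumes; compose may have prefixed yours with the project name
docker volume ls | grep caddy

# Then inspect the one you find
docker volume inspect caddy_data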

Mine looks something like this:

[
    {
        "CreatedAt": "2020-03-21T20:18:50Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/caddy_data/_data",
        "Name": "caddy_data",
        "Options": null,
        "Scope": "local"
    }
]

I think you might be right. Not sure where I got that {}. It looks like mine was created late last night when I restarted some stuff:

[
    {
        "CreatedAt": "2020-12-31T02:16:43-05:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "app-config",
            "com.docker.compose.version": "1.23.2",
            "com.docker.compose.volume": "caddy_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/app-config_caddy_data/_data",
        "Name": "app-config_caddy_data",
        "Options": null,
        "Scope": "local"
    }
]

I’ll remove the {} and try again.

It looks like that may have done the trick: Caddy started up and the volume creation date stayed the same. I’m not 100% sure the {} was actually the cause, since I recently merged all of my separate apps into a single docker-compose project, but there was no caddy_caddy_data volume that would have lined up with my old setup, so I bet it either wasn’t actually creating a persistent volume before or was recreating the volume on startup.
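To keep a project merge or rename from silently pointing at a fresh, empty volume again, I could also pin the volume name explicitly. A sketch, using the name key that the 3.4 compose file format (which this file already declares) supports:

volumes:
  caddy_data:
    # Explicit name: compose won't prefix it with the project name,
    # so renaming or merging projects keeps using the same data
    name: caddy_data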

