How to check health, configure with JSON, and enable the API in Caddy Docker with On-Demand TLS

1. Caddy version (caddy version):

v2.4.6

2. How I run Caddy:

a. System environment:

Docker

b. Command:

I run Caddy on AWS ECS.

c. Service/unit/compose file:

First, I use this file to build my own Caddy image, called tecknovice/caddy-s3:


FROM debian:bullseye AS builder

RUN apt update
RUN apt install -y wget curl debian-keyring debian-archive-keyring apt-transport-https
RUN wget https://golang.org/dl/go1.17.3.linux-amd64.tar.gz
RUN rm -rf /usr/local/go
RUN tar -C /usr/local -xzf go1.17.3.linux-amd64.tar.gz
RUN curl -1sLf 'https://dl.cloudsmith.io/public/caddy/xcaddy/gpg.key' | tee /etc/apt/trusted.gpg.d/caddy-xcaddy.asc
RUN curl -1sLf 'https://dl.cloudsmith.io/public/caddy/xcaddy/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-xcaddy.list
RUN apt update
RUN apt install -y xcaddy
ENV PATH="${PATH}:/usr/local/go/bin"
RUN xcaddy build --with github.com/ss098/certmagic-s3

FROM caddy:2.4.6-alpine

COPY --from=builder /caddy /usr/bin/caddy

Dockerfile

FROM tecknovice/caddy-s3

COPY Caddyfile /etc/caddy/Caddyfile

d. My complete Caddyfile or JSON config:

My system has more than 1,000 sites, so I use On-Demand TLS and save the certificates in AWS S3.

{
    storage s3 {
        host "{$S3_HOST}"
        bucket "{$S3_BUCKET}"
        access_id "{$S3_ACCESS_ID}"
        secret_key "{$S3_SECRET_KEY}"
        prefix "{$S3_PREFIX}"
    }
    on_demand_tls {
        ask {$ASK_ENDPOINT}
    }
}

https:// {
    tls {
        on_demand
    }
    reverse_proxy {$UPSTREAM}
}

3. The problem I’m having:

I created an ECS service for Caddy, with a Network Load Balancer (NLB) pointing to this service on port 443. The problem is that ECS requires a path for the health check, but it seems impossible to return HTTP 200 from port 443 without the correct domain.

Moreover, I want to know how to:

  1. run the Caddy Docker container with a JSON config file (currently the Caddy Docker image automatically starts and loads /etc/caddy/Caddyfile)
  2. enable the API in the Caddyfile as well as in the JSON config
  3. return HTTP 404 for specific domains based on ASK_ENDPOINT (currently it returns 200 for the domains obtained from ASK_ENDPOINT, but in the future some domains will be deleted from my system, so can Caddy return 404 for those domains?)
  4. redirect from the root domain to the www domain for the domains obtained from ASK_ENDPOINT (all of these domains are third- or fourth-level domains, such as www.tecknovice.com or www.tecknovice.co.jp)

4. Error messages and/or full log output:

5. What I already tried:

I have tried some other Caddyfiles, but without success.

{
    storage s3 {
        host "{$S3_HOST}"
        bucket "{$S3_BUCKET}"
        access_id "{$S3_ACCESS_ID}"
        secret_key "{$S3_SECRET_KEY}"
        prefix "{$S3_PREFIX}"
    }
    on_demand_tls {
        ask {$ASK_ENDPOINT}
    }
}

http:// {
    respond /health-check-caddy 200

    redir https://{host}{uri}
}

https:// {
    tls {
        on_demand
    }

    respond /health-check-caddy 200

    reverse_proxy {$UPSTREAM}
}

6. Links to relevant resources:

Before we get to your actual questions, I do want to point out that using S3 as a storage backend is a bad idea, particularly with the plugin you’re using, because S3 doesn’t provide atomic operations.

That will almost certainly cause problems down the road, ones that are difficult to narrow down. If you need to use S3 for storage, then consider using a plugin that supports atomic operations.

Have you done the Getting Started guide? This walks through how to do precisely that (you’ll just have to adjust for the Docker setup).

Caddy’s admin API is enabled by default; if it weren’t, Caddy couldn’t be configured at all.

I’m not sure I understand. The ask endpoint is powered by some backend, right? What does that have to do with Caddy exactly?


Thank you for your response!

First, you missed my question about the health check. How can I check the health status of Caddy instances that use On-Demand TLS?

Second, the Getting Started guide only instructs me on how to upload a JSON file to the Caddy server via the API. I need a Dockerfile with the JSON config already loaded.

Third, the API is not enabled in Caddy, because calling curl http://localhost:2019 returns an error after running the image built from the Dockerfile.

sudo docker run -d -p 80:80 -p 443:443 -p 2019:2019 -e S3_HOST=S3_HOST  -e S3_BUCKET=S3_BUCKET -e S3_ACCESS_ID=S3_ACCESS_ID -e S3_SECRET_KEY=S3_SECRET_KEY -e S3_PREFIX=ssl  --env ASK_ENDPOINT=ASK_ENDPOINT --env UPSTREAM=upstream reverse-proxy:latest

How do I enable the API?

Fourth, I have ASK_ENDPOINT, which is powered by my backend. Can Caddy use this endpoint to return HTTP 404 for domains that are not in my database?

Fifth, can I redirect from the root domain to the www domain for the domains obtained from ASK_ENDPOINT? In the example at Common Caddyfile Patterns — Caddy Documentation, redirection can only be configured for domains that are already written in the Caddyfile.

That’s a pretty strange way to build your image, to be honest. It’s best to use the official Docker image, which comes with a builder variant you can use to make custom builds. See it on Docker Hub.

Also, as Matt said, I don’t think that certmagic plugin is a good idea. Some users have used EFS with success (because it’s an actual filesystem), but S3 doesn’t have locking support due to how it works, so it’s not ideal. Alternatively, you can use the Redis plugin, which is known to work well: GitHub - gamalan/caddy-tlsredis: Redis Storage using for Caddy TLS Data

You can add another site block, like :8080, which just does respond "OK" or something. If Caddy is running, then it’s “probably” healthy.
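A minimal sketch of such a site block (the port and path are arbitrary choices here; point the NLB health check at whatever you pick):

```Caddyfile
:8080 {
    respond /health-check-caddy "OK" 200
}
```

Remember to expose the chosen port from the container and in the ECS task definition, since the NLB must be able to reach it.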

Change the CMD in your Docker image to caddy run --environ --config /path/to/caddy.json. That’s it.
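For example, your application Dockerfile might become something like this (assuming the JSON config sits next to the Dockerfile and is named caddy.json; adjust the paths to your setup):

```Dockerfile
FROM tecknovice/caddy-s3

COPY caddy.json /etc/caddy/caddy.json

CMD ["caddy", "run", "--environ", "--config", "/etc/caddy/caddy.json"]
```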

The default listen address for the admin API is localhost:2019, which means it only accepts connections from localhost (127.0.0.1), i.e. from inside the container. So if you need to use the admin API from outside, change the admin listener address to :2019. This can be done via global options in the Caddyfile.

Yes, Caddy makes requests to the ask endpoint whenever it gets a request for a TLS handshake for a domain it doesn’t have a certificate for yet. If you return anything other than a 200, then it’ll not attempt to issue a cert, and fail the TLS handshake.
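To illustrate, here is a minimal sketch of what such an ask endpoint could look like on your backend, assuming a Python service; the hardcoded KNOWN_DOMAINS set and port 9000 are placeholders for your real database lookup and deployment. Caddy appends the hostname as a `domain` query parameter when it calls the endpoint:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Placeholder allow-list; in a real setup this would be a database lookup.
KNOWN_DOMAINS = {"www.tecknovice.com", "www.tecknovice.co.jp"}

def ask_status(query_string: str) -> int:
    """Return 200 if the domain may get a certificate, otherwise 404.

    Caddy calls the ask endpoint as: GET <ask URL>?domain=<hostname>
    """
    params = parse_qs(query_string)
    domain = (params.get("domain") or [""])[0]
    return 200 if domain in KNOWN_DOMAINS else 404

class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Only the status code matters to Caddy; the body is ignored.
        self.send_response(ask_status(urlparse(self.path).query))
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), AskHandler).serve_forever()
```

When a domain is deleted from your system, this endpoint starts answering 404, so Caddy will refuse to issue new certificates for it and the TLS handshake will fail.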

You can, with request matchers. For example, to redirect hosts that do not already start with www to their www equivalent:

@no-www not header_regexp Host ^www\.
redir @no-www https://www.{host}{uri}

I am quite busy these days, so I cannot follow this topic closely.
Can you show me the settings, in both the Caddyfile and Caddy JSON, to expose the API to the world?

It’s a global option:
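In the Caddyfile, the global options block looks like this:

```Caddyfile
{
    admin :2019
}
```

The equivalent in a JSON config is the top-level admin object (a minimal sketch; merge it into your existing config):

```json
{
    "admin": {
        "listen": ":2019"
    }
}
```

Keep in mind that binding the admin API to :2019 exposes it to anything that can reach the container, so restrict access at the network level (security groups, NLB rules, etc.).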