1. Caddy version:

```
v2.6.3 h1:QRVBNIqfpqZ1eJacY44I6eUC1OcxQ8D04EKImzpj7S8=
```
2. How I installed and run Caddy:

Docker Compose.
a. System environment:

Running a custom build of Caddy on Alpine (linux/amd64) with `github.com/abiosoft/caddy-exec` and `github.com/gamalan/caddy-tlsredis`.
b. Command:

```
docker-compose up
```
c. Service/unit/compose file:

`docker-compose.yml`:

```yaml
services:
  caddy:
    build:
      context: .
    platform: linux/amd64
    container_name: caddy
    environment:
      CADDY_TLS_EMAIL: name@example.com
      STORAGE_REDIS_HOST: redis
      STORAGE_REDIS_PORT: 6379
      STORAGE_REDIS_DB_INDEX: 0
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
    links:
      - redis
  redis:
    image: redis:alpine
    ports:
      - 6379:6379
```
`Dockerfile`:

```dockerfile
FROM public.ecr.aws/docker/library/caddy:2.6-builder-alpine AS builder

RUN xcaddy build \
    --with github.com/abiosoft/caddy-exec \
    --with github.com/gamalan/caddy-tlsredis

FROM public.ecr.aws/docker/library/caddy:2.6-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

LABEL com.datadoghq.ad.logs='[{"source": "caddy"}]'

# Upgrade Alpine packages (useful for security fixes)
RUN apk upgrade --no-cache

# Install dependencies
RUN apk add --no-cache redis curl

# Copy Caddy config
COPY Caddyfile /etc/caddy/Caddyfile
```
d. My complete Caddy config:

```
{
	debug
	order exec before file_server
	storage redis {
		host {$STORAGE_REDIS_HOST}
		port {$STORAGE_REDIS_PORT}
		db {$STORAGE_REDIS_DB_INDEX}
	}
	on_demand_tls {
		ask http://localhost:9876
	}
}

http://localhost:9876 {
	log {
		output stdout
	}
	@domain query domain=*
	route @domain {
		exec {
			command /usr/bin/redis-cli
			args --raw -h {$STORAGE_REDIS_HOST} -p {$STORAGE_REDIS_PORT} -n {$STORAGE_REDIS_DB_INDEX} SISMEMBER certificates:domain_whitelist {http.request.uri.query.domain}
			timeout 1s
			foreground
			log stdout
		}
	}
}

:443 {
	tls {$CADDY_TLS_EMAIL} {
		on_demand
	}
	respond "Content" 200
}
```
3. The problem I’m having:

I’m working on migrating an existing OpenResty (nginx + Lua) webserver, which uses lua-resty-auto-ssl for on-demand certificates, to Caddy. Our existing server maintains the domain allowlist in a Redis SET. I’m attempting to do the same with Caddy, using the caddy-exec module to execute `redis-cli` during the `on_demand_tls` lookup.

Executing `redis-cli` works. The problem I’m facing is that the `{http.request.uri.query.domain}` placeholder is not getting interpolated, while the environment placeholders such as `{$STORAGE_REDIS_HOST}` are replaced with the correct values. Replacing the `command` in the `exec` directive with `echo` prints the output to the logs, which confirms that the placeholder in `args` is passed through literally.
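One way to isolate the issue is to echo the placeholder back with the built-in `respond` handler (a throwaway debugging block, not part of my real config; the port is arbitrary):

```
http://localhost:9877 {
	respond "domain={http.request.uri.query.domain}" 200
}
```

Curling this with `?domain=example.com` should return `domain=example.com`, which would show the placeholder resolves in standard handlers and that the gap is specific to how caddy-exec substitutes its `args`.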
4. Error messages and/or full log output:

Using `curl` to test the internal `on_demand_tls` callback:

```
docker-compose up
docker exec -w /etc/caddy -it caddy curl -vL "http://localhost:9876?domain=example.com"

*   Trying 127.0.0.1:9876...
* Connected to localhost (127.0.0.1) port 9876 (#0)
> GET /?domain=example.com HTTP/1.1
> Host: localhost:9876
> User-Agent: curl/7.83.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Server: Caddy
< Date: Tue, 14 Feb 2023 15:44:26 GMT
< Content-Length: 21
<
{"status":"success"}
* Connection #0 to host localhost left intact
```
`docker-compose` stdout:

```
caddy | {"level":"info","ts":1676389443.8652089,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy | {"level":"info","ts":1676389443.9106903,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
caddy | {"level":"info","ts":1676389443.917317,"caller":"caddy-tlsredis@v0.2.9/storageredis.go:278","msg":"TLS Storage are using Redis, on redis:6379"}
caddy | {"level":"info","ts":1676389443.9378443,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00038a230"}
caddy | {"level":"info","ts":1676389443.9397414,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy | {"level":"info","ts":1676389443.9400246,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy | {"level":"debug","ts":1676389443.9447172,"logger":"http","msg":"starting server loop","address":"[::]:9876","tls":false,"http3":false}
caddy | {"level":"info","ts":1676389443.9449089,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
caddy | {"level":"debug","ts":1676389443.945157,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
caddy | {"level":"info","ts":1676389443.9452372,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy | {"level":"info","ts":1676389443.9454584,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
caddy | {"level":"info","ts":1676389443.9474325,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
caddy | {"level":"debug","ts":1676389443.949505,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":true}
caddy | {"level":"info","ts":1676389443.9495735,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy | {"level":"info","ts":1676389443.951966,"logger":"tls","msg":"cleaning storage unit","description":"{\"Client\":{},\"ClientLocker\":{},\"Logger\":{},\"address\":\"redis:6379\",\"host\":\"redis\",\"port\":\"6379\",\"db\":0,\"username\":\"\",\"password\":\"\",\"timeout\":5,\"key_prefix\":\"caddytls\",\"value_prefix\":\"caddy-storage-redis\",\"aes_key\":\"\",\"tls_enabled\":false,\"tls_insecure\":true}"}
caddy | {"level":"info","ts":1676389443.9522595,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy | {"level":"info","ts":1676389443.953177,"msg":"serving initial configuration"}
caddy | {"level":"info","ts":1676389443.954956,"logger":"tls","msg":"finished cleaning storage units"}
caddy | 0
caddy | {"level":"info","ts":1676389466.2783957,"logger":"http.handlers.exec.exit","msg":"","command":["/usr/bin/redis-cli","--raw","-h","redis","-p","6379","-n","0","SISMEMBER","certificates:domain_whitelist","{http.request.uri.query.domain}"],"duration":0.022111125}
caddy | {"level":"info","ts":1676389466.2803998,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"127.0.0.1","remote_port":"42238","proto":"HTTP/1.1","method":"GET","host":"localhost:9876","uri":"/?domain=example.com","headers":{"User-Agent":["curl/7.83.1"],"Accept":["*/*"]}},"user_id":"","duration":0.024639792,"size":21,"status":200,"resp_headers":{"Server":["Caddy"],"Content-Type":["application/json"]}}
```
The `0` here is the response from `redis-cli`, meaning the value wasn’t found in the SET. The subsequent log line shows the command that `exec` executed. The issue is that `"{http.request.uri.query.domain}"` isn’t interpolated to the actual value, which in this case should be `"example.com"`.

Ultimately, `redis-cli` is telling me that the literal string `{http.request.uri.query.domain}` doesn’t exist in the set. Which, while true, isn’t helpful.
5. What I already tried:

I originally tried the `{query.domain}` shorthand placeholder, which is expanded to the longer form in the logs and command, but still didn’t interpolate to the actual domain value.

I’m new to Caddy, and may be completely misunderstanding how placeholders work here. I’m open to other solutions for the `on_demand_tls` callback as well; I just need some way to pass the domain value to `exec`. I’d like to shy away from spinning up another process/instance outside of Caddy to handle the `on_demand_tls` HTTP request.
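For completeness, in case the placeholder simply can’t reach `exec`: the fallback I’d rather avoid (a separate process serving the `ask` endpoint) could be sketched roughly as below. This is a hypothetical, untested sketch; the port and names are my own, the Redis SISMEMBER lookup is stubbed with an in-memory set, and it assumes Caddy treats a 2xx response from the `ask` endpoint as permission to issue a certificate.

```python
# Sketch of a standalone on_demand_tls "ask" endpoint (stdlib only).
# Assumption: Caddy permits issuance on a 2xx response and denies on
# any other status. The Redis lookup is stubbed with an in-memory set;
# a real version would run SISMEMBER against certificates:domain_whitelist.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

ALLOWLIST = {"example.com"}  # stand-in for the certificates:domain_whitelist SET


def domain_allowed(domain: str) -> bool:
    # Real implementation: SISMEMBER certificates:domain_whitelist <domain>
    return domain in ALLOWLIST


class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        domain = query.get("domain", [""])[0]
        # 200 permits issuance; 403 denies it
        self.send_response(200 if domain_allowed(domain) else 403)
        self.end_headers()


# To run: HTTPServer(("127.0.0.1", 9876), AskHandler).serve_forever()
```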