1. The problem I’m having:
I have on-demand TLS with an ask endpoint set up and working nicely, but my logs show something I think is not quite right.
2. Error messages and/or full log output:
{"level":"error","ts":1725036521.1167374,"logger":"tls.obtain","msg":"will retry","error":"[172.21.0.2] Obtain: subject does not qualify for a public certificate: 172.21.0.2","attempt":2,"retrying_in":120,"elapsed":60.005017771,"max_duration":2592000}
{"level":"info","ts":1725036641.1100104,"logger":"tls.obtain","msg":"releasing lock","identifier":"172.21.0.2"}
3. Caddy version:
v2.7.6 h1:w0NymbG2m9PcvKWsrXO6EEkY9Ru4FJK8uQbYcev1p3A=
4. How I installed and ran Caddy:
a. System environment:
Alpine Linux, Docker Compose, Caddyfile.
b. Command:
See c. below (Caddy is started via Docker Compose).
c. Service/unit/compose file:
version: "3.9"
services:
  caddy:
    container_name: caddy
    image: caddy:alpine
    ports:
      - 2019:2019
      - 80:80
      - 443:443
      - 443:443/udp
    restart: unless-stopped
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./volumes/caddy/config:/config/caddy
      - ./volumes/caddy/data:/data/caddy
    working_dir: /srv/www
d. My complete Caddy config:
########################################################################
#
# Variables that get imported
#
########################################################################
# Secret public domain verification
(import_ask) {
	on_demand_tls {
		ask http://REDACTED/check
	}
}
# Common configuration
(import_common) {
	encode gzip
	file_server
}
# Reverse Proxy
(import_reverse_proxy) {
	reverse_proxy {
		to http://REDACTED
	}
}
# TLS
(import_tls) {
	tls {
		on_demand
	}
}
# Headers
(import_headers) {
	# Passive Headers
	header {
		?Cache-Control max-age=3600
	}
	# Normal Headers
	header {
		# X
		X-Jjj https://jjj.software
		X-Content-Type-Options nosniff
		X-Download-Options noopen
		X-Environment Production
		X-Frame-Options SAMEORIGIN
		X-SSL-Domain {host}
		X-XSS-Protection "1; mode=block"
		# CSRF
		Access-Control-Allow-Origin *
		#Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		# Permissions
		Permissions-Policy interest-cohort=()
		# CSP
		Content-Security-Policy "default-src 'self'; connect-src 'self' https:; style-src 'self' 'unsafe-inline' data: https:; script-src 'self' 'unsafe-eval' 'unsafe-inline' https:; font-src 'self' data: https:; img-src 'self' blob: data: https: http:; media-src 'self' blob: data: https:; frame-src 'self' blob: https:; frame-ancestors 'self' blob: https:; object-src 'self'; worker-src 'self' blob: https:"
	}
}
########################################################################
#
# Definitions
#
########################################################################
# Global
{
	email admin@jjj.domains
	import import_ask
}
# Checker
http:// {
	handle /check {
		import import_reverse_proxy
	}
}
# App
:443 {
	import import_tls
	import import_headers
	import import_reverse_proxy
}
5. Links to relevant resources:
The Caddy docs state: "hostnames qualify for publicly-trusted certificates if they … are not an IP address."
I read this to mean that IP addresses (such as the Docker network IPs) do not qualify for public certificates, which is exactly what Caddy is detecting and logging here.
I am guessing that one of my server blocks is overriding the default exclusion of IPs? If so, what is the syntax to exclude/skip/prevent these Docker IPs (or really any IP / non-FQDN) from being picked up by on-demand TLS? One idea I had is sketched below.
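The only workaround I have come up with so far (purely a sketch, not my actual REDACTED service, and possibly not the intended fix) is to have the ask endpoint itself refuse anything that parses as an IP address, since Caddy skips on-demand issuance for a name whenever the ask endpoint returns a non-2xx response:

```go
// Hypothetical /check handler: deny bare IPs (e.g. Docker addresses like
// 172.21.0.2) and fall through to normal domain verification otherwise.
package main

import (
	"log"
	"net"
	"net/http"
)

func main() {
	http.HandleFunc("/check", func(w http.ResponseWriter, r *http.Request) {
		// Caddy's on_demand_tls ask request appends ?domain=<name>.
		domain := r.URL.Query().Get("domain")

		// Reject anything that is an IP literal rather than a hostname.
		if net.ParseIP(domain) != nil {
			http.Error(w, "IP addresses do not qualify", http.StatusForbidden)
			return
		}

		// ... real allow-list / database check for the domain goes here ...
		w.WriteHeader(http.StatusOK) // 200 tells Caddy it may obtain a certificate
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

But that feels like patching around the symptom, so I would still like to know whether there is a Caddyfile-level way to do this.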