1. Caddy version (caddy version):
2.5.1
2. How I run Caddy:
Systemd, running as the caddy user
a. System environment:
Linux ip-172-31-29-132 5.13.0-1022-aws #24~20.04.1-Ubuntu SMP Thu Apr 7 22:10:15 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
b. Command:
service caddy start
c. Service/unit/compose file:
[Unit]
Description=Caddy HTTP/2 web server
Documentation=https://caddyserver.com/docs
After=network-online.target remote-fs.target nginx.service
Wants=network-online.target systemd-networkd-wait-online.service
Requires=nginx.service
[Service]
Type=notify
Restart=on-abort
User=caddy
Group=caddy
; Let's Encrypt-issued certificates are written to this directory.
; (Note: CADDYPATH is only read by Caddy v1; v2 uses the "storage" option in the Caddyfile.)
Environment=CADDYPATH=/etc/ssl/caddy/v2
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile_v2 --watch
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile_v2
; Use graceful shutdown with a reasonable timeout.
; (Caddy v2 exits gracefully on the default SIGTERM; SIGQUIT would quit without cleanup.)
KillMode=mixed
TimeoutStopSec=5s
; Limit the number of file descriptors; see `man systemd.exec` for more limit settings.
LimitNOFILE=1048576
; Unmodified caddy is not expected to use more than that.
LimitNPROC=2048
PrivateTmp=true
PrivateDevices=false
ProtectHome=true
ProtectSystem=full
ReadWriteDirectories=/etc/ssl/caddy/v2
AmbientCapabilities=CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
d. My complete Caddyfile or JSON config:
# Global Options
{
	admin off
	order rate_limit before basicauth
	on_demand_tls {
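		# Caddy sends a GET to this URL with a ?domain=<hostname> query for each
		# new hostname; a 2xx response authorizes certificate issuance.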
		ask https://sites.example.com/allowed
		interval 2m
		burst 20
	}
	grace_period 10m
	storage file_system /etc/ssl/caddy/v2
}
:443 {
	tls {
		on_demand
	}

	@geofilter {
		maxmind_geolocation {
			db_path /etc/caddy/GeoLite2-Country.mmdb
			deny_countries RU RO VN
		}
	}
	reverse_proxy @geofilter http://127.0.0.1:7777
	rate_limit {
		zone dynamic {
			key {remote_host}
			events 25
			window 60s
		}
	}

	uri replace s3.amazonaws.com/example static.example.com

	header {
		Strict-Transport-Security "max-age=31536000;"
		X-Content-Type-Options nosniff
		X-XSS-Protection "1; mode=block"
	}

	encode {
		gzip 9
		match {
			header Content-Type text/*
		}
	}
	log {
		output file /var/log/caddy/access.log {
			# roll_size needs a unit; a bare "25" would be read as 25 bytes
			roll_size 25MiB
			roll_keep 10
			roll_keep_for 7d
		}
		format filter {
			wrap console {
				time_format rfc3339
				time_key timestamp
			}
			fields {
				request>headers>Authorization delete
				request>headers>Accept delete
			}
		}
	}
}
# Include redirs
import redir2/*
import redir2MultiDomain/*
3. The problem I’m having:
We are migrating from Caddy v1 to Caddy v2.
Our current issue seems to be that Caddy tries to obtain certificates for every domain listed in our redir directives, not just for domains on actual incoming requests.
Since some of these domains may have been moved away from our cluster, or are yet to be pointed at it, this causes a number of certificate errors where the DNS no longer resolves to us.
Our understanding was that only requested domains would be checked against the ask endpoint, not every domain in the configs.
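To illustrate our reading of the docs (the hostname old-domain.example below is made up): we expected only the catch-all block to obtain certificates, lazily, while the named redir blocks appear to be managed eagerly at startup:

# Expected: certificate obtained only once a request arrives and the ask check passes
:443 {
	tls {
		on_demand
	}
}

# Observed: a named site block like this seems to trigger issuance at config load,
# even if the domain no longer points at our cluster
old-domain.example {
	redir https://www.example.com{uri} 301
}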
4. Error messages and/or full log output:
May 11 09:58:28 ip-172-31-29-132 caddy[11287]: {"level":"error","ts":1652263108.54393,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"www.example.com","error":"no information found to solve challenge for identifier: www.example.com"}
May 11 09:57:47 ip-172-31-29-132 caddy[11287]: {"level":"error","ts":1652263067.306053,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"example.com","issuer":"acme-v02.api.letsencrypt.org-directory","error":"HTTP 400 urn:ietf:params:acme:error:malformed - JWS verification error"}
5. What I already tried:
This setup works on our Caddy v1 cluster; we have updated the config to match the Caddy v2 directives.
We did not see this behaviour on the v1 implementation.
We cross-referenced the wiki article "Serving tens of thousands of domains over HTTPS with Caddy" to confirm that our config is set up as it should be.
6. Links to relevant resources:
The redir files we have look like this:
example.com {
	redir https://www.example.com{uri} 301
}
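One workaround we are considering (an untested sketch): enabling on-demand TLS inside each redir site block as well, so those certificates are also only obtained for domains that actually receive requests and pass the ask check:

example.com {
	# Defer issuance for this name until it is requested;
	# the global on_demand_tls ask endpoint still applies.
	tls {
		on_demand
	}
	redir https://www.example.com{uri} 301
}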