1. Caddy version (caddy version):
v2.4.5 h1:P1mRs6V2cMcagSPn+NWpD+OEYUYLIf6ecOa48cFGeUg=
2. How I run Caddy:
a. System environment:
$ cat /etc/*release*
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
b. Command:
sudo systemctl start caddy-api
c. Service/unit/compose file:
$ cat /lib/systemd/system/caddy-api.service
# caddy-api.service
#
# For using Caddy with its API.
#
# This unit is "durable" in that it will automatically resume
# the last active configuration if the service is restarted.
#
# See https://caddyserver.com/docs/install for instructions.
[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target
[Service]
Type=notify
User=caddy
Group=caddy
ExecStart=/usr/bin/caddy run --environ --resume
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
d. My complete Caddyfile or JSON config:
https://10.252.252.21 {
	tls internal

	log {
		format console
		output file /var/log/caddy/caddy.log {
			roll_size 5MiB
			roll_keep 10
		}
	}

	route /auth* {
		authp {
			backends {
				local_backend {
					method local
					path /opt/caddy/auth/user_db.json
					realm local
				}
			}
			registration {
				dropbox /opt/caddy/auth/registrations_db.json
				title "User Registration"
			}
			crypto default token lifetime 21600
			crypto key sign-verify AnExampleSecretString123
			crypto key token name access_token
			ui {
				links {
					"Elite Manager" /em
					"My Auth Portal Settings" /auth/settings icon "las la-cog"
					"who am i check" /auth/whoami icon "las la-star"
					"Add MFA Authentication App" /auth/settings/mfa/add/app
				}
			}
		}
	}

	route /gr/* {
		jwt {
			allow roles authp/admin authp/manager authp/user
			inject headers with claims
		}
		reverse_proxy http://localhost:3000
	}

	route /version {
		respond * "2.0.0-a" 200
	}

	route /ui/* {
		jwt {
			allow roles authp/admin authp/manager authp/user
			inject headers with claims
		}
		reverse_proxy http://localhost:1880
	}

	route /* {
		jwt {
			allow roles authp/admin authp/manager authp/user
			set auth url /auth
			primary yes
			crypto key token name access_token
			crypto key verify AnExampleSecretString123
			inject headers with claims
		}
		reverse_proxy http://localhost:1081
	}
}
3. The problem I’m having:
We do all Caddy installation/setup steps through a bash script, which mostly works except for one small issue when providing PKI config through the API.
Here’s how we provide the initial config: we adapt our Caddyfile to JSON, then POST that JSON to the /load endpoint:
HEADERS=(-H 'Content-Type: application/json')
OPTIONS=(-X POST)
URL="localhost:2019/load"
CONF="$(caddy adapt --config /home/pi/elite-pi/elite-caddy/Caddyfile)"
curl -v "${OPTIONS[@]}" "${URL}" "${HEADERS[@]}" --data "${CONF}"
This works, and can be confirmed with a quick curl localhost:2019/config/.
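As an aside, we could also make this load step fail loudly instead of checking by hand. A rough sketch (post_json is a hypothetical wrapper of ours; it relies on curl's --fail flag, which makes an HTTP error response exit non-zero):

```shell
# Hypothetical wrapper: POST a JSON body and fail the script on rejection.
# --fail makes curl exit non-zero when the server answers with HTTP >= 400.
post_json() {
  url=$1
  body=$2
  curl -sS --fail -X POST "$url" \
    -H 'Content-Type: application/json' \
    --data "$body"
}

# With CONF set as in the snippet above (commented out here):
#   post_json http://localhost:2019/load "${CONF}" || {
#     echo "config load rejected" >&2
#     exit 1
#   }
```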
The very next step in our script is providing the PKI block with a unique root_common_name:
CA_CN_ID=$(uuidgen)
curl -v -X POST \
--url http://localhost:2019/config/apps/pki \
-H 'Content-Type: application/json' \
--data "{
\"certificate_authorities\": {
\"local\":{
\"install_trust\": true,
\"root_common_name\": \"Elite Local Root CA - $CA_CN_ID\",
\"intermediate_common_name\": \"Elite Intermediate CA\",
\"name\": \"Elite CA\"
}
}
}"
4. Error messages and/or full log output:
When the above two steps are performed one after the other, the PKI curl returns this error, stemming from the caddy-auth-portal plugin we’re using:
{"error":"loading new config: loading http app module: provision http: server srv0: setting up route handlers: route 0: loading handler modules: position 0: loading module 'subroute': provision http.handlers.subroute: setting up subroutes: route 2: loading handler modules: position 0: loading module 'subroute': provision http.handlers.subroute: setting up subroutes: route 0: loading handler modules: position 0: loading module 'subroute': provision http.handlers.subroute: setting up subroutes: route 0: loading handler modules: position 0: loading module 'authp': provision http.handlers.authp: found more than one primaryInstance instance of the plugin for default context"}
5. What I already tried:
If I either manually run that same PKI curl after our setup script completes, or simply add a sleep 5 between the initial config POST and the PKI POST, there are no errors. So it seems Caddy is still doing something even after the first curl gets a response back.
Since I’m not sure what exactly Caddy is still doing during that time, I don’t know what to wait for before POSTing the PKI config. Sleeping works, but isn’t ideal: it would be nice to have something concrete to check so we know for sure “ok, Caddy is ready for the next POST now”.
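As a stopgap, we may replace the fixed sleep 5 with a retry loop that re-POSTs the PKI config until Caddy accepts it. A rough sketch (the post_with_retry helper is hypothetical, and it assumes curl's --fail flag so the 400 error above becomes a non-zero exit code):

```shell
# Hypothetical helper: run a command until it succeeds, sleeping between
# tries; gives up after a fixed number of attempts.
post_with_retry() {
  attempts=$1; shift
  delay=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0          # command succeeded
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1              # every attempt failed
}

# Replace "sleep 5" with e.g. (commented out here; PKI_JSON would hold the
# same JSON body as the PKI curl above):
#   post_with_retry 10 1 curl -sS --fail -X POST \
#     --url http://localhost:2019/config/apps/pki \
#     -H 'Content-Type: application/json' \
#     --data "$PKI_JSON"
```

This still polls rather than waiting on a real readiness signal, though, so a pointer to something deterministic to check would still be appreciated.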
6. Links to relevant resources:
Could not find any helpful or relevant resources.