Assuming your organization has two teams, each responsible for a different subdomain alongside the main domain, you would have a Caddyfile that looks more or less as follows:
example.com {
root sites/root
file_server
}
team-1.example.com {
root sites/team-1
file_server
}
team-2.example.com {
root sites/team-2
file_server
}
If team-1 would like to change their service from serving files to proxying to an upstream, they would have to go through the single team managing Caddy. In small organizations with a handful of teams this may be simple, but the flood of tickets as the team count grows is bound to become hectic. Caddy can help you grant teams limited autonomy over their designated subdomains without stepping on others' areas of operation. The way this is typically solved is to assign each team a designated directory of configuration files guarded by filesystem ACLs, and to include/import a glob of the directory contents into the web server's main configuration file. But this still requires the web server admin team to reload the server whenever the configuration files are updated. Giving each team the privilege to reload the server is not an option either, because that function is guarded by operating-system user ACLs, which do not understand the semantics of web server operations.
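For illustration, the traditional glob-import approach would look something like this in the main Caddyfile; the teams directory is hypothetical, with each file writable only by its owning team:

# main Caddyfile, managed by the web server admin team
# each team may only edit the files under its own directory, enforced by filesystem ACLs
import /etc/caddy/teams/*.caddy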
Caddy Admin API to the Rescue
Fortunately, Caddy ships with an ACL capability for fine-grained control over the server configuration. It is paired with mTLS authentication so the server can authenticate clients and restrict access. It relies on three features of Caddy's admin configuration:
- Using @id: The @id key allows you to assign a unique ID to a segment of the JSON configuration. You can then use the ID in the API to refer to that specific segment for amendments or retrieval.
- Identity: Administrators can configure Caddy with the set of host names it identifies as within the infrastructure, for which Caddy will obtain/issue TLS certificates per the configured issuers.
- Remote Administration: Lets you configure Caddy with the remote address of the admin server, along with an Access Control List (ACL) defining which users (holders of certain public keys/certificates) are allowed to call which HTTP methods on which paths of the Caddy admin API. That mouthful will be explained further below.
If I were configuring Caddy to support autonomy for each team, I'd simply… add the @id keys to the configuration segments of the teams' subdomains, configure the server's identity, and then add the @id paths (and their expanded forms) to the access control list of the remote admin configuration.
Let’s do this step-by-step.
Using @id
The @id key in JSON is a meta key that is not part of the config structure; it is detected and indexed during parsing of the JSON configuration. It allows you to assign a unique name to a segment of the JSON configuration. You can then use the assigned name in the Caddy admin API as /id/<assigned-id> instead of using the long path form to traverse the JSON config.
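For example, once a segment is named team-1 (as we'll do below), the following two requests address the same segment; the long form is the expansion we'll meet again later. This assumes the default admin address localhost:2019:

curl http://localhost:2019/id/team-1
curl http://localhost:2019/config/apps/http/servers/srv0/routes/0/handle/0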
We'll utilize this feature in our config to name the designated segments and assign them to their respective teams. This will allow them to configure their sites using the Caddy API; the rest of the necessary configuration will be done later in this article. We start by adapting the earlier Caddyfile to JSON. Here's the result:
{
"apps": {
"http": {
"servers": {
"srv0": {
"listen": [
":443"
],
"routes": [
{
"match": [
{
"host": [
"team-1.example.com"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-1"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"team-2.example.com"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-2"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"example.com"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/root"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
}
]
}
}
}
}
}
We'll use the @id key to name the segments, and we'll add an explicit admin configuration because we'll have to customize it later anyway:
{
+ "admin": {
+ "listen": "localhost:2019"
+ },
"apps": {
"http": {
"servers": {
"srv0": {
"listen": [
":443"
],
"routes": [
{
"match": [
{
"host": [
"team-1.example.com"
]
}
],
"handle": [
{
+ "@id": "team-1",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-1"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"team-2.example.com"
]
}
],
"handle": [
{
+ "@id": "team-2",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-2"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"example.com"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/root"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
}
]
}
}
}
}
}
It's important to ensure the @id is added in the subroute handler, after the host matcher, so a team cannot manipulate the matcher and take over other hosts, e.g. team-2.example.com.
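For contrast, here's a sketch of the unsafe placement: with the @id on the route object itself, the /id/team-1 path would cover the match block as well, and the team could rewrite it at will:

{
  "@id": "team-1",
  "match": [{ "host": ["team-1.example.com"] }],
  "handle": [{ "handler": "subroute", "routes": [] }],
  "terminal": true
}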
Run Caddy with the above configuration using the command: caddy run --config <path-JSON-config>
In a second terminal, run curl localhost:2019/id/team-1 | jq . to inspect the retrieved configuration. You'll see the following output:
{
"@id": "team-1",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-1"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
In the result we see the configuration of the vars handler, which the root directive in the Caddyfile uses under the hood to set the file_server root, along with the configuration of the file_server handler. The same can be done for /id/team-2, and you'll see similar results. We're now left with the problem that either of these teams is able to view and change all of Caddy's config, not only their own. This issue will be resolved in the Remote Administration part of the configuration, but we'll have to configure the server's identity before that.
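To illustrate the severity: with the local admin endpoint unrestricted, nothing stops either team from reading, or even wiping, the entire running configuration:

curl localhost:2019/config/ | jq .        # view everyone's config
curl -X DELETE localhost:2019/config/     # unload the whole running config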
Identity
The identity configuration instructs Caddy to obtain a TLS certificate for the admin endpoint of the server and to use it to identify itself to clients. It also enables HTTPS communication with the admin API. N.B. although identity and remote are merely sibling keys, they need each other to function fully: configuring identity without remote will have Caddy obtain certificates for the listed identifiers, but the certificates are not usable outside the remote listener endpoint. The intermediate step of configuring identity without remote is only useful for obtaining the certificates for distribution, if they're issued by a non-distributed/non-trusted CA.
Let's add two identities to our server, one with an external FQDN and the other with an internal FQDN. The external FQDN should have its certificate issued by a trusted public CA, while the internal FQDN should have its certificate issued by Caddy's internal PKI function. There's currently no way to tell Caddy which CA to use for which FQDN, so we rely on the public CA rejecting the internal FQDN, at which point Caddy falls back to its own PKI. Adding the respective bits into the configuration ends up with this:
{
"admin": {
- "listen": "localhost:2019"
+ "listen": "localhost:2019",
+ "identity": {
+ "identifiers": [
+ "caddy-admin.home.arpa",
+ "caddy-admin.example.com"
+ ],
+ "issuers": [
+ {
+ "module": "acme",
+ "email": "email@example.com",
+ "challenges": {
+ "dns": {
+ "provider": {
+ "name": "cloudflare",
+ "auth_token": "[redacted]"
+ }
+ }
+ }
+ },
+ {
+ "module": "internal",
+ "ca": "adminca"
+ }
+ ]
+ }
},
"apps": {
+ "pki": {
+ "certificate_authorities": {
+ "adminca": {
+ "name": "Caddy Admin endpoint CA",
+ "install_trust": false
+ }
+ }
+ },
"http": {
"servers": {
"srv0": {
"listen": [
":443"
],
"routes": [
{
"match": [
{
"host": [
"team-1.example.com"
]
}
],
"handle": [
{
"@id": "team-1",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-1"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"team-2.example.com"
]
}
],
"handle": [
{
"@id": "team-2",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-2"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"example.com"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/root"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
}
]
}
}
}
}
}
Running the above configuration will not do anything special besides generating the certificates and storing them in Caddy's default storage, i.e. the file system. For a robust production environment, it's recommended to run Caddy with distributed storage, such as an RDBMS, Redis, or another KV store.
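As a sketch, the storage backend is configured via the top-level storage key; the path below is hypothetical, and distributed backends (Redis, databases, etc.) are provided by third-party storage plugins rather than the built-in file_system module shown here:

{
  "storage": {
    "module": "file_system",
    "root": "/mnt/shared/caddy"
  }
}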
Remote Administration
Now that we have our certificates and team designations in the config, let's add the remote administration capability along with the ACL (Access Control List). We'll start with the simple addition of the remote listener address:
{
"admin": {
"listen": "localhost:2019",
"identity": {
"identifiers": [
"caddy-admin.home.arpa",
"caddy-admin.example.com"
],
"issuers": [
{
"module": "acme",
"email": "email@example.com",
"challenges": {
"dns": {
"provider": {
"name": "cloudflare",
"auth_token": "[redacted]"
}
}
}
},
{
"module": "internal",
"ca": "adminca"
}
]
- }
+ },
+ "remote": {
+ "listen": ":443"
+ }
},
"apps": {
"http": {
"servers": {
"srv0": {
"listen": [
":443"
],
"routes": [
{
"match": [
{
"host": [
"team-1.example.com"
]
}
],
"handle": [
{
"@id": "team-1",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-1"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"team-2.example.com"
]
}
],
"handle": [
{
"@id": "team-2",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-2"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"example.com"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/root"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
}
]
}
}
}
}
}
It's time for the fun bits! We need to add the access_control section inside the remote object. access_control takes an array of objects, each containing the team's identity certificate and an array of path/method (HTTP method) combinations the team is allowed to access. Note that the key name public_keys is deceptive: it actually takes TLS certificates, out of which Caddy extracts the public key. The teams are free to generate their certificates however they like. One advanced approach is to use Caddy's ACME server to issue the certificates, which allows the teams to use an ACME client (Caddy? ;)) to obtain them.
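For example, a team could self-generate a certificate with openssl; the file names and subject below are illustrative:

# generate an EC key and a self-signed client certificate for team-1
openssl ecparam -name prime256v1 -genkey -noout -out team-1.key
openssl req -new -x509 -key team-1.key -subj "/CN=team-1" -days 365 -out team-1.crt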
Enough talking. Let's build the permissions for each team.
The path key in the permission object takes a path prefix, not an exact path. As said earlier, this allows users to control a sub-segment of the JSON config. Team-1 is allowed to access the path /id/team-1, which expands to /config/apps/http/servers/srv0/routes/0/handle/0. Both paths must be added to the permission object because Caddy checks the permission twice, once with the short form and once after expansion. This ensures the administrator configuring the access control knows exactly what the short form is and what it expands to. To make matters safer, we append /routes/0 to the allowed paths so the team can't accidentally destroy their entire segment; they may only modify the route inside their subroute, not the subroute handler itself. We'll allow team-1 to use GET, POST, PUT, and PATCH. We'll exclude DELETE to keep them from accidentally deleting that part of the config; deleting the wrong part of the config can be dangerous, so it's best to leave that to the server admin. Thus their permissions array contains a single object:
[
{
"methods": [
"GET",
"POST",
"PUT",
"PATCH"
],
"paths": [
"/id/team-1/routes/0",
"/config/apps/http/servers/srv0/routes/0/handle/0/routes/0"
]
}
]
Similarly, team-2 is allowed to access the path /id/team-2, which expands to /config/apps/http/servers/srv0/routes/1/handle/0. Their permissions array is:
[
{
"methods": [
"GET",
"POST",
"PUT",
"PATCH"
],
"paths": [
"/id/team-2/routes/0",
"/config/apps/http/servers/srv0/routes/1/handle/0/routes/0"
]
}
]
The only remaining part is the public_keys array for each access_control object. The certificates should be in base64-encoded DER format. You can convert a PEM certificate to DER with the following one-liner (with GNU coreutils, add -w0 to base64 to avoid line wrapping):
openssl x509 -in [cert-file] -outform der | base64
Putting the full configuration structure together gives us this magnificent JSON:
{
"admin": {
"listen": "localhost:2019",
"identity": {
"identifiers": [
"caddy-admin.home.arpa",
"caddy-admin.example.com"
],
"issuers": [
{
"module": "acme",
"email": "email@example.com",
"challenges": {
"dns": {
"provider": {
"name": "cloudflare",
"auth_token": "[redacted]"
}
}
}
},
{
"module": "internal",
"ca": "adminca"
}
]
},
"remote": {
"listen": ":443",
"access_control": [
{
"public_keys": ["MIIBxDCCAWugAwIBAgIRALpxPa8ddcgi9plvjfJ86DEwCgYIKoZIzj0EAwIwMzExMC8GA1UEAxMoQ2FkZHkgTG9jYWwgQXV0aG9yaXR5IC0gRUNDIEludGVybWVkaWF0ZTAeFw0yNDA2MjMxNTUxMDNaFw0yNDA2MjQwMzUxMDNaMAAwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARmWucmgyi2qLRpq+2DpNEJGorf5iPRzmKswpJlcZOs7gAn4Fjab+3QTgjvgiV3n+tKc0LkHoq+jhnIDWKpuL5Po4GSMIGPMA4GA1UdDwEB/wQEAwIHgDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFGMlQPdvkGZv7JQiepEWsle+AOPLMB8GA1UdIwQYMBaAFO7Zk94VxN1IiuQU2/hixYxGnmY2MB4GA1UdEQEB/wQUMBKCEHRlYW0tMS5sb2NhbGhvc3QwCgYIKoZIzj0EAwIDRwAwRAIgO/puV7z97RwvEQ2gCqfK88oDjw3XpNn0R844w9jIFUECIB60wlP5HntK3nKoUIVEr3TbPmeXbglCj69BM9luTQe/"],
"permissions": [
{
"methods": [
"GET",
"POST",
"PUT",
"DELETE"
],
"paths": [
"/id/team-1/routes/0",
"/config/apps/http/servers/srv0/routes/0/handle/0"
]
}
]
},
{
"public_keys": ["MIIBxDCCAWugAwIBAgIRAK/pL3qYlg7JZUqqkRtsqywwCgYIKoZIzj0EAwIwMzExMC8GA1UEAxMoQ2FkZHkgTG9jYWwgQXV0aG9yaXR5IC0gRUNDIEludGVybWVkaWF0ZTAeFw0yNDA2MjMxNTUxMDNaFw0yNDA2MjQwMzUxMDNaMAAwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASuI6bcXapOOIkYlLi5cCbr685HQwIiJO4QNgdOdb8mA44vbNjKLvYNu9JlBtPbmvHGaJhN9uQgAGjtxI2tlYI0o4GSMIGPMA4GA1UdDwEB/wQEAwIHgDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFKMUnsIrauNNH1qzZ3gI9e3jqMaTMB8GA1UdIwQYMBaAFO7Zk94VxN1IiuQU2/hixYxGnmY2MB4GA1UdEQEB/wQUMBKCEHRlYW0tMi5sb2NhbGhvc3QwCgYIKoZIzj0EAwIDRwAwRAIgXLmkoNu0IF/O71sDkKgD8viwIyp7NDFHPnhOSRQos5MCIEYVbvI3g7bV0uHNWkrB9xZbH1ZPEN+3KG5ctiPwFzmg"],
"permissions": [
{
"methods": [
"GET",
"POST",
"PUT",
"DELETE"
],
"paths": [
"/id/team-2/routes/0",
"/config/apps/http/servers/srv0/routes/0/handle/1"
]
}
]
}
]
}
},
"apps": {
"http": {
"servers": {
"srv0": {
"listen": [
":443"
],
"routes": [
{
"match": [
{
"host": [
"team-1.example.com"
]
}
],
"handle": [
{
"@id": "team-1",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-1"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"team-2.example.com"
]
}
],
"handle": [
{
"@id": "team-2",
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/team-2"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
},
{
"match": [
{
"host": [
"example.com"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "vars",
"root": "sites/root"
},
{
"handler": "file_server",
"hide": [
"./Caddyfile"
]
}
]
}
]
}
],
"terminal": true
}
]
}
}
}
}
}
Run the above config. The teams are now able to control their own sites. For instance, team-1 can change their site's configuration to return HTTP 418 by POSTing the following segment to https://caddy-admin.example.com/id/team-1/routes/0 with curl:
curl --cert team-1.localhost/team-1.localhost.crt --key team-1.localhost/team-1.localhost.key https://caddy-admin.example.com:443/id/team-1/routes/0 -H "Content-Type: application/json" -d '{"handle":[{"handler":"static_response","body":"I am a teapot","status_code":418}]}'
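Assuming DNS and trust are in place, team-1 can then verify the change, while a request outside their allowed paths is rejected:

# the site now answers with the teapot response
curl -i https://team-1.example.com
# team-1's certificate grants no access to team-2's segment, so this request is denied
curl --cert team-1.localhost/team-1.localhost.crt --key team-1.localhost/team-1.localhost.key https://caddy-admin.example.com/id/team-2/routes/0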
Caddy Empowers