You’ll likely need to reload like this instead:
```
curl -X POST "http://localhost:2019/load" \
  -H "Content-Type: text/caddyfile" \
  -H "Origin: whatever" \
  --data-binary @/etc/caddy/Caddyfile
```
But I don’t think what you’re trying to do makes sense (see below).
Those are two separate concepts altogether. One is for the admin API, and the other is for your actual sites you’re serving.
There are no existing plans for augmenting the `enforce_origins` feature, but Caddy v2.4.0 will introduce new admin features that may be what you're hoping for:

caddyserver:master ← caddyserver:remote-admin (opened 25 Jan 2021)
This PR adds 3 separate, but very related features:
1. Automated server identity management
2. Remote administration over secure connection
3. Dynamic config loading at startup
## 1. Automated server identity management
How do you know you're connecting to the server you think you are? How do you know the server connecting to you is the server instance you think it is? Mutually-authenticated TLS (mTLS) answers both of these questions. Using TLS to authenticate requires a public/private key pair (and the peer must trust the certificate you present to it).
Fortunately, Caddy is really good at managing certificates by now. We tap into that power to make it possible for Caddy to obtain and renew its own identity credentials, or in other words, a certificate that can be used for both server verification when clients connect to it, and client verification when it connects to other servers. Its associated private key is essentially its identity, and TLS takes care of possession proofs.
This configuration is simply a list of identifiers and an optional list of custom certificate issuers. Identifiers are things like IP addresses or DNS names that can be used to access the Caddy instance. The default issuers are ZeroSSL and Let's Encrypt, but these are public CAs, so they won't issue certs for private identifiers. Caddy will simply manage credentials for these, which other parts of Caddy can use, for example: remote administration or dynamic config loading (described below).
A bare-bones config might look like this:
```json
{
  "admin": {
    "identity": {
      "identifiers": [
        "123.123.123.123",
        "example.com",
        "127.0.0.1",
        "localhost"
      ],
      "issuers": [
        {
          "module": "acme",
          "ca": "https://my-acme-server.example.com/",
          "trusted_roots_pem_files": ["my-acme-root.crt"]
        }
      ]
    }
  }
}
```
Here, Caddy is told that its identities are those IP addresses and DNS names. It then will use your custom ACME server with a custom root certificate (to trust when connecting to it) to get certificates for those identifiers. Note that in this case, your CA would have to issue certs for localhost and 127.0.0.1, which most CAs don't do, since they can't be verified if they are remote.
## 2. Remote administration over secure connection
This feature adds generic remote admin functionality that is safe to expose on a public interface.
- The "remote" (or "secure") endpoint is optional. It does not affect the standard/local/plaintext endpoint.
- It's the same as the [API endpoint on localhost:2019](https://caddyserver.com/docs/api), but over TLS.
- TLS cannot be disabled on this endpoint.
- TLS mutual auth is required, and cannot be disabled.
- The server's certificate _must_ be obtained and renewed via automated means, such as ACME. It cannot be manually loaded.
- The TLS server takes care of verifying the client.
- The admin handler takes care of application-layer permissions (methods and paths that each client is allowed to use).
- Sensible defaults are still WIP.
- Config fields subject to change/renaming.
Here's a basic example config that I will explain:
```json
{
  "admin": {
    "identity": {
      "identifiers": ["example.com"]
    },
    "remote": {
      "access_control": [
        {
          "public_keys": ["base64-encoded DER certificate"]
        }
      ]
    }
  }
}
```
Explanation:
- First we configure identity management. We tell Caddy that its identifier is `example.com`, so it will try to obtain and renew a certificate for that domain. By default, it will use publicly-trusted CAs. This is OK for DNS names that are properly configured. _Identity management is required when enabling remote administration, otherwise the server cannot present a TLS certificate to the client and secure the connection._
- We've enabled a secure admin endpoint at its default address **`:2021`** (you can customize it with `"listen": "..."` just like the regular admin endpoint) - note that the default address is not bound to a local interface, so it can be accessed remotely.
- A single public key is then added to the ACL. Only the sole bearer of the associated private key is allowed unrestricted access to the API.
We can also restrict different clients/users as to which methods and paths they are allowed to access:
```json
{
  "public_keys": ["base64-encoded DER certificate"],
  "permissions": [{
    "paths": ["/id/foo/"],
    "methods": ["GET"]
  }]
}
```
All the users specified in `public_keys` will be allowed to access all paths in the API starting with `/id/foo/` using only the `GET` method. As you can see, you can specify multiple paths and methods, and multiple groups of them, per group of public keys.
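As a rough illustration (not part of the PR, and using placeholder file names), a client holding the private key matching one of the configured `public_keys` could then call the remote endpoint with mutual TLS, for example:
```
# Placeholder file names; the client cert/key must match an entry in public_keys.
# This GET would be allowed by the example ACL above (path prefix /id/foo/),
# assuming the server's certificate for example.com is publicly trusted.
curl --cert client.crt --key client.key \
  "https://example.com:2021/id/foo/"
```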
Other advanced functionality is a bit limited because we cannot import any Caddy modules: they all import this package instead! So, we cannot import the `caddyhttp` or `caddytls` packages and take advantage of their advanced routing or security logic. The admin controls are relatively simple, but I imagine this should be more than enough...?
Caddyfile config can probably be added later.
## 3. Dynamic config loading at startup
Since this feature was planned in tandem with remote admin, and depends on its changes, I am combining them into one PR.
Dynamic config loading is where you tell Caddy how to load its config, and then it loads and runs that. First, it will load the config you give it (and persist that so it can be optionally resumed later). Then, it will try pulling its _actual_ config using the module you've specified (dynamically loaded configs are _not_ persisted to storage, since resuming them doesn't make sense).
This PR comes with a standard config loader module called `caddy.config_loaders.http`.
Here's how it looks:
```json
{
  "admin": {
    "config": {
      "load": {
        "module": "http",
        "url": "https://example.com/my_caddy_config.json"
      }
    }
  }
}
```
Caddy will download the config at the given URL and run it.
You can also configure authentication -- both client and server -- to ensure you get only trusted configs. If you add this to your config:
```
"tls": {
"use_server_identity": true
}
```
then Caddy will use the configured identity (explained above) as a client certificate to present to the server it is connecting to. In this case, identity management must also be configured.
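For illustration (not from the PR itself), a combined config might look something like this, assuming the `tls` block nests inside the `http` config loader's configuration:
```json
{
  "admin": {
    "identity": {
      "identifiers": ["example.com"]
    },
    "config": {
      "load": {
        "module": "http",
        "url": "https://example.com/my_caddy_config.json",
        "tls": {
          "use_server_identity": true
        }
      }
    }
  }
}
```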
What are you looking to do with the API, though? Generally it's either-or: Caddyfile vs JSON+API. Since Caddyfile-to-JSON conversion is one-way, any config changes you make via the JSON API will be lost the next time you reload from the Caddyfile.
(Although Caddy does persist an `autosave.json`, which can be paired with the `--resume` option to make Caddy load from that on initial startup instead – you should probably use the `caddy-api` service instead of `caddy` if you're going that route.)
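For example, if you run Caddy manually rather than as a service, something like this starts it from the autosaved config (a sketch; `--resume` simply prefers `autosave.json` when it exists):
```
# Start Caddy from the last autosaved config (autosave.json), if present
caddy run --resume
```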
So either go all-in on JSON (and you may use the Caddyfile adapter as a basis for your initial JSON config), or go all-in on the Caddyfile and limit yourself to the functionality it provides.
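If you go the JSON route, one way to bootstrap that initial config from your existing Caddyfile is the adapter command (the output file name here is just a placeholder):
```
# Convert the Caddyfile to JSON once, then maintain the JSON going forward
caddy adapt --config /etc/caddy/Caddyfile --pretty > caddy.json
```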