Protecting Caddy against malicious configs

1. The problem I’m having:

Suppose I’m running Caddy on a server; one Caddy instance serving multiple domains with different static websites and applications.

I’d like to keep the Caddy config for each website in that website’s Git repository. That way, it’s easy to set up a local development server, and the config lives right next to the code, ensuring the two don’t diverge.

In the simplest setup, I’d clone each Git repo to the web server and have the main Caddyfile import the Caddyfile from the repo.
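Concretely, the server’s main Caddyfile could just glob-import each repo’s config (a sketch; the directory layout is hypothetical):

```
# /etc/caddy/Caddyfile -- each site's repo cloned under /srv/<site>/
import /srv/*/Caddyfile
```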

However, if a malicious actor gets access to one of the repositories, they can basically mess with the whole server’s config: introduce new domains, serve any directory the Caddy user has access to (including the data directory with the certificate keys), and so on.

Since import is basically string replacement, even if I try to lock a website into a block like this:

```
{
    import /srv/
}
```

the imported Caddyfile can still break out of it by containing something like:

```
} {
    root * /var/lib/caddy/.local/share/caddy/certificates
    file_server browse
```
  • What would be a good way to prevent this?
  • Does Caddy have any safety measures that allow restricting the directories it’s serving requests from, e.g. by defining some restrictions in the global options?
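To make the breakout concrete: because import is textual substitution, after merging, Caddy would parse something like this (a sketch; the enclosing site block is abbreviated):

```
# outer block from the server's Caddyfile
{
# --- injected by the malicious repo ---
} {
    root * /var/lib/caddy/.local/share/caddy/certificates
    file_server browse
# the outer block's original closing brace now closes the injected one
}
```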

I recognize that I might have bigger problems when someone compromises the repository of any of the websites; this question is about damage control by utilizing multiple layers of security.

One thing I could do would be to:

  • have a top-level Caddy instance with a fixed config, that terminates TLS and reverse-proxies each of the websites
  • which each run on a separate Caddy instance, “jailed” using the namespacing/chroot capabilities of my OS.

This would prevent the individual websites from accessing the rest of the system (including certificate keys), and would also limit the damage a maliciously modified configuration could do. For example, if a compromised config defined additional domains, the top-level instance would not route traffic to them.
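The top-level instance’s config could then be a short, fixed Caddyfile along these lines (a sketch; the domains and ports are hypothetical):

```
# Fixed config for the TLS-terminating front instance.
# Only explicitly listed domains are routed; a compromised
# per-site instance cannot add new ones here.
site-a.example {
    reverse_proxy 127.0.0.1:8081
}

site-b.example {
    reverse_proxy 127.0.0.1:8082
}
```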

Do you see other possibilities?

2. Error messages and/or full log output:

(Irrelevant, this is a conceptual/architectural question.)

3. Caddy version:


4. How I installed and ran Caddy:

Debian package.

a. System environment:

Debian 12 amd64 with systemd.

[removed the rest of the template questions since they’re not relevant here]

First off, this version is way too old (~2 years). The latest is v2.8.4

For this…

I’ve got to say, if they can change code in your repositories, it’s game over. Caddy may be the least of your concerns.

That said,

You’re on the right track here. My first thought was to use systemd capabilities to constrain file-system access to specific directories and other protection measures:
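Something along these lines in a drop-in for the per-site Caddy units — the unit name and paths here are assumptions, and the exact set of options should be checked against systemd.exec(5):

```
# /etc/systemd/system/caddy-site@.service.d/hardening.conf (hypothetical)
[Service]
# Mount the whole file system read-only, with explicit exceptions
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
NoNewPrivileges=true
# The site may only read its own directory...
ReadOnlyPaths=/srv/%i
# ...and must not see the shared certificate storage at all
InaccessiblePaths=/var/lib/caddy
```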

That said, your TLS-terminating Caddy must be able to access its storage, so you cannot lock it out of that directory. To get around this, you can configure Caddy to use a storage backend other than the filesystem; the Caddy website lists the storage plugins available. Pick a storage engine that offers ACLs and/or some form of authentication, so that adding a file_server cannot expose the keys:
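For example, with a Redis storage plugin compiled into your Caddy binary, the global options might look roughly like this — the directive names are from memory and the password handling is an assumption, so check the plugin’s README for the exact syntax:

```
{
    # hypothetical: requires a Caddy build that includes a Redis storage module
    storage redis {
        host 127.0.0.1
        port 6379
        password {env.REDIS_PASSWORD}
    }
}
```

With the keys behind authenticated storage, a rogue file_server directive can no longer read them off the disk.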

In a Caddy Module Writing workshop, I asked the attendees to write a Caddy config adapter that only accepts config that’s encrypted using age. Although it was just a fun exercise, the idea can be helpful for your scenario. The decryption key can be part of your systemd unit data. You can also store your config in a database behind authentication. There’s one already written for MySQL. You can write another one for other engines, e.g. HashiCorp Vault.
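As a rough sketch of the age idea without a custom config adapter — decrypt at startup and pipe the result to Caddy on stdin (`--config -` reads from standard input); the file names and key location here are hypothetical:

```
# Decrypt the site's config and hand it to Caddy (sketch);
# the key could live in a root-only file or systemd credentials
age -d -i /etc/caddy/config.key /srv/site/Caddyfile.age \
  | caddy run --config - --adapter caddyfile
```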

