Now I am facing the same issue: I’d like to allow multiple users, whom I don’t necessarily trust, to create a space for their web projects. Since these users shouldn’t be able to read each other’s files, I cannot simply create a “www-data” group or something like it.
My first attempt
I am working around this by changing the group of the domain’s document root to group “caddy”:
- name: Create a directory in each site-user's home directory
  ansible.builtin.file:
    path: "/home/{{ item.username }}/{{ item.domain }}"
    state: directory
    owner: "{{ item.username }}"
    group: "caddy"
    mode: "2750"
    recurse: yes
  loop: "{{ sites_list }}"
Using setgid via mode 2750 should make all new files created inside that directory inherit its group. I still need to check whether this could have security implications.
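The setgid behaviour is quick to verify on any Linux box. A sketch using a throwaway directory (here the directory keeps the creating user’s group, since `chgrp` to caddy would need root; with root you would chgrp it first, as the playbook above does):

```shell
# Verify setgid group inheritance in a temporary directory.
dir=$(mktemp -d)
chmod 2750 "$dir"              # rwxr-s---: the leading "2" is the setgid bit
stat -c '%a' "$dir"            # prints 2750
touch "$dir/newfile"
stat -c '%G' "$dir/newfile"    # prints the directory's group: new files
                               # inherit it because of the setgid bit
rm -rf "$dir"
```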
My question
Is it still out of scope for caddy to allow multi-user setups? Have there been alternative attempts by others that have been discussed here that I did not see yet?
The Caddyfile is simply not designed in such a way that it can have per-user isolation. For example, there’s only one global options block, defined snippets can leak across users, etc. So as Mohammed said, basically you’re on your own to implement your own DSL that produce a Caddy JSON config, with whatever isolation checks you want.
Keep in mind, Linux file permission checks require that for a file to be readable, all parent directories must be executable by that user. For that reason, service users (like caddy by default) typically have no access to things within /home.
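That traversal rule is easy to demonstrate: a world-readable file is still unreachable if any parent directory lacks the execute (search) bit. A sketch (run as a non-root user, since root bypasses these checks):

```shell
d=$(mktemp -d)
mkdir "$d/parent"
echo hello > "$d/parent/file"
chmod 644 "$d/parent/file"     # the file itself is world-readable
chmod 600 "$d/parent"          # but the parent lacks x (search) permission
cat "$d/parent/file"           # fails: Permission denied (unless you are root)
chmod 700 "$d/parent"          # restore x so we can clean up again
rm -rf "$d"
```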
I am writing a small piece of software that is designed to manage my team’s servers, which are currently based on Laravel Forge and NGINX.
This software uses a database to get a list of domains that should be created on the target host, including all options that these domains may or may not have (enable TLS, …).
What I am currently doing:
- I am binding the admin API to a unix socket, as recommended in this forum. This socket is owned by and only readable by the caddy user.
- I am including all files in /etc/caddy/sites.d in my main Caddyfile.
- Each of these files is generated dynamically.
- Reloads and reconfigurations are never triggered by the user via the Caddy admin interface, but only by rebuilding those config files.
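A minimal sketch of that flow, assuming the main Caddyfile imports everything under sites.d. The vhost name and docroot are illustrative placeholders; `caddy validate` and `caddy reload` are the standard Caddy CLI commands for checking and applying a config:

```shell
# Regenerate one per-site config file (names/paths are placeholders):
cat > /etc/caddy/sites.d/example.com.conf <<'EOF'
example.com {
	root * /var/www/vhosts/example.com
	file_server
}
EOF

# Check the assembled config, then reload via the caddy CLI
# (which talks to the admin endpoint on our behalf):
caddy validate --config /etc/caddy/Caddyfile
caddy reload --config /etc/caddy/Caddyfile
```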
I hope that this is not somewhat of an antipattern. It’s quite similar to how Laravel Forge works with NGINX.
If this procedure is “good enough”, the remaining problem to solve is the ability to actually read (and serve) files that are stored in the document root. One possibility I have in mind is to create a directory called /var/www/vhosts with a folder for each vhost. Those folders could be symlinked from /home/domain_user/domain.com for easy access.
Now one thing remains: the caddy user needs to be able to access those files, and /var/www/vhosts would have to be executable for it.
But besides this (I am thinking about setfacl, for example): am I going down a path that is an anti-pattern, or does this make sense to you?
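The layout described above could be sketched like this (site and user names are placeholders; the chown/chmod steps need root):

```shell
# Real docroot under /var/www/vhosts, convenience symlink in the
# user's home directory. Names here are illustrative placeholders.
site=example.com
user=alice
mkdir -p "/var/www/vhosts/$site"
chown "$user:caddy" "/var/www/vhosts/$site"
chmod 2750 "/var/www/vhosts/$site"   # group caddy can read and traverse,
                                     # setgid keeps the group on new files
chmod 755 /var/www/vhosts            # parent must be searchable by caddy
ln -s "/var/www/vhosts/$site" "/home/$user/$site"
```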
It sounds like you’re exposing your own configuration surface to the users and then taking their inputs and maintaining your own Caddyfile configuration. That should be perfectly fine as long as you’re making sure that user-requested configuration is sane.
As for the files:
Linking probably wouldn’t work - a symlink doesn’t change the permission checks on its target, so caddy would still need traversal rights through /home.
File ACLs are plausible but your users can easily override them if they own the files, breaking things. What you could do is chown the files to root:root and then set ACLs that allow modify permissions for the users.
Alternatively, bindfs can “proxy” a directory from another mountpoint and set permissions on the mounted directory. You could theoretically use this to mount /home/username/domain.example.com into /var/www/vhosts/domain.example.com with rewritten permissions and ownership to allow Caddy to read it all.
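Such a bindfs mount could look roughly like this as an fstab entry (a sketch; paths follow the example above, and the option names should be double-checked against the bindfs documentation for your version):

```
# /etc/fstab — bindfs (FUSE) mount presenting the user's directory to caddy
/home/username/domain.example.com  /var/www/vhosts/domain.example.com  fuse.bindfs  force-user=caddy,force-group=caddy,ro  0  0
```

Here `ro` keeps the mirrored view read-only, so caddy can serve the files while the user keeps editing the originals under /home.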