Yes, the Arch package's systemd service enables some additional systemd "hardening" options that the reference systemd service doesn't.

`ProtectHome=true` does that:
> […] If true, the directories `/home/`, `/root/`, and `/run/user/` are made inaccessible and empty for processes invoked by this unit.
and Caddy not seeing your files in `/tmp/` is due to `PrivateTmp=true`:
> […] If true, sets up a new file system namespace for the executed processes and mounts private `/tmp/` and `/var/tmp/` directories inside it that are not shared by processes outside of the namespace. This is useful to secure access to temporary files of the process, but makes sharing between processes via `/tmp/` or `/var/tmp/` impossible. If true, all temporary files created by a service in these directories will be removed after the service is stopped.
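If you really do need Caddy to read from a home directory, one way (a sketch, assuming the Arch unit is named `caddy.service` and keeps these settings) is a systemd drop-in created via `systemctl edit caddy` that relaxes `ProtectHome`:

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/caddy.service.d/override.conf
# (this is the file `systemctl edit caddy` creates for you)
[Service]
# "read-only" leaves /home/ visible but not writable for the service;
# "true" (what the packaged unit sets) hides it entirely.
ProtectHome=read-only
```

After saving, run `systemctl daemon-reload` and `systemctl restart caddy` for it to take effect. That said, loosening the sandbox is usually the worse trade-off compared to just moving the files, as suggested below.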
Caddy already returns an HTTP 403 whenever it runs into a permission error while serving with `file_server`. Sure, it will still be a blank page, which I get may seem odd coming from almost any other web server.
But in your specific case (or the Arch package's version of Caddy in general, actually), Caddy has literally no chance of even seeing those files in `/tmp/` or `/home/`, due to those additional systemd sandboxing flags.
Also, be aware that if you have a file at `/home/xerus/website/index.html` and `chmod` `/home/xerus/website` to be world-accessible, you will still be unable to access that folder (and any files within it) as any other user, because `/home/xerus` also needs to be world-accessible.
To reach a file in a tree of folders on Linux, the requesting user needs permission on every folder leading down to that file (e.g. `/home`, `/home/xerus`, and `/home/xerus/website`), and each of those folders needs to have the execute bit set.
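A quick way to see this in action (a self-contained demo using a throwaway directory under `/tmp`, not your actual site):

```shell
# Demonstration: reaching a file requires the execute (traversal) bit
# on EVERY directory leading to it, not just read permission on the
# file itself.
mkdir -p /tmp/permdemo/website
echo "hello" > /tmp/permdemo/website/index.html
chmod 644 /tmp/permdemo/website/index.html

# Remove the execute bit for "other" users on the top directory only:
chmod o-x /tmp/permdemo

# Show the permissions of every path component at a glance:
stat -c '%A %n' /tmp/permdemo /tmp/permdemo/website /tmp/permdemo/website/index.html

# Any non-owner user now gets "Permission denied" for anything below
# /tmp/permdemo, even though the file itself is world-readable:
#   sudo -u nobody cat /tmp/permdemo/website/index.html   # fails

chmod o+x /tmp/permdemo   # restore traversal for others
```

The same applies to the caddy service user trying to descend into `/home/xerus/website`.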
See Reverse proxy + static file serving results in 403 (Forbidden) for static files - #2 by emilylange for a bit more detail.
I would highly recommend considering `/srv/` or `/var/www/` instead.
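For example (hypothetical site address and directory, adjust to yours), after moving the site to `/srv/www` and making it readable by the caddy user:

```
example.com {
	root * /srv/www
	file_server
}
```

Neither `ProtectHome` nor `PrivateTmp` touches `/srv/`, so this works with the Arch unit's sandboxing left fully intact.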