Blank grey screen on static server (http/403)

1. Output of caddy version:

v2.5.2 h1:eCJdLyEyAGzuQTa5Mh3gETnYWDClo1LjtQm2q9RNZrs=

2. How I run Caddy:

a. System environment:

Ubuntu 22.04 LTS
systemd: yes
docker: no

b. Command:

sudo systemctl enable caddy

c. Service/unit/compose file:

systemd unit file:

#
# See https://caddyserver.com/docs/install for instructions.
#
# WARNING: This service does not use the --resume flag, so if you
# use the API to make changes, they will be overwritten by the
# Caddyfile next time the service is restarted. If you intend to
# use Caddy's API to configure it, add the --resume flag to the
# `caddy run` command or use the caddy-api.service file instead.

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target

[Service]
Type=notify
User=caddy
Group=caddy
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

d. My complete Caddy config:

# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace ":80" below with your
# domain name.

girlbulge.gay {
	# static site
	root * /home/ubuntu/web
	file_server
}

tabletop.girlbulge.gay {
	redir https://tt.girlbulge.gay
}

tt.girlbulge.gay {
	# proxy all requests to port 30000 (foundryVTT)
	reverse_proxy localhost:30000
	encode zstd gzip
}

# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile

3. The problem I’m having:

I want to serve static files, but when I try to load a webpage I get a blank grey screen. There are static files in the configured root directory, but they are not being served. I have provided the output of curl -v below.

I am also running foundryVTT on the same VPS using a subdomain and reverse proxy. Foundry works flawlessly, but the static site remains broken.

4. Error messages and/or full log output:

I am not getting any error messages on the server.

Request/response from the server:

[I] ⋊> ~ curl -v https://girlbulge.gay                                                           20:44:14
*   Trying 132.145.161.85:443...
* Connected to girlbulge.gay (132.145.161.85) port 443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=girlbulge.gay
*  start date: Jun  7 03:21:03 2022 GMT
*  expire date: Sep  5 03:21:02 2022 GMT
*  subjectAltName: host "girlbulge.gay" matched cert's "girlbulge.gay"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* h2h3 [:method: GET]
* h2h3 [:path: /]
* h2h3 [:scheme: https]
* h2h3 [:authority: girlbulge.gay]
* h2h3 [user-agent: curl/7.84.0]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x556a5025d5f0)
> GET / HTTP/2
> Host: girlbulge.gay
> user-agent: curl/7.84.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 403
< server: Caddy
< content-length: 0
< date: Thu, 28 Jul 2022 00:44:24 GMT
<
* Connection #0 to host girlbulge.gay left intact

journalctl log:

ubuntu@kobold:~$ sudo journalctl -f -u caddy
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.8688753,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.8689814,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.8689942,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.869234,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["tt.girlbulge.gay","girlbulge.gay","tabletop.girlbulge.gay"]}
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.869398,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40004b33b0"}
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.9059136,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0x400027fb20"}
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.9061599,"msg":"autosaved config (load with --resume flag)","file":"/var/lib/caddy/.config/caddy/autosave.json"}
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.9062421,"logger":"admin.api","msg":"load complete"}
Jul 28 00:38:14 kobold systemd[1]: Reloaded Caddy.
Jul 28 00:38:14 kobold caddy[213696]: {"level":"info","ts":1658968694.9373593,"logger":"admin","msg":"stopped previous server","address":"tcp/localhost:2019"}

5. What I already tried:

I found this thread, but as I am not using Docker, the provided solution is not applicable to me. This is the only similar issue I could find with a search engine.

I have read the documentation on static sites, and cannot find anything wrong with my caddyfile.

I have ensured Caddy can read my static files with chmod --recursive a+r ~/web.

6. Links to relevant resources:

Hi :wave:

You are running into permission issues, despite your chmod --recursive a+r ~/web attempt.

Will explain in a second, but first:

The blank page you are seeing actually has an error attached.
Sure, there is nothing to see on the website itself, but the browser devtools report an HTTP 403 status code.

Your curl output (thanks for providing that, btw) shows it too:

< HTTP/2 403
< server: Caddy

A user (here: caddy) that tries to read a file (here: your website) on Linux must have execute permission (e.g. via chmod -R a+x) on every directory leading up to that file.
This is just how Linux file permissions work :innocent:
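A quick way to see that chain in practice, assuming util-linux's namei is available (the scratch paths below are made up for the demo):

```shell
# Build a scratch tree that mimics a default Ubuntu home directory
# (mode 750: no execute bit for "other" users such as caddy).
rm -rf /tmp/permchain-demo
mkdir -p /tmp/permchain-demo/home/web
echo "<h1>hi</h1>" > /tmp/permchain-demo/home/web/index.html
chmod 750 /tmp/permchain-demo/home

# namei -l prints every path component with its permissions, so the
# directory that is missing "x" for other users stands out immediately.
namei -l /tmp/permchain-demo/home/web/index.html
```

On the real server you'd run it against your webroot, e.g. namei -l /home/ubuntu/web/index.html.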

I’ve explained that a bit in the past in

but I haven’t tidied it up into a proper wiki article here yet *sigh*

Anyhow, the tl;dr-ish version would be:
Caddy runs as the user caddy

Your website’s webroot is in /home/ubuntu/web

But the caddy user is missing the execute permission on /home/ubuntu, and possibly on /home/ubuntu/web as well.
You could just add the missing permission bit via chmod, but please don’t.
Just move your website into a proper location like /srv or /var/www and you are good.

The /home/$USER/ directory is missing the execute bit by default for a reason:
other users aren’t supposed to have access to any files or directories in another user’s /home directory.
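For reference, a minimal sketch of that move, assuming the paths and the caddy service user from this thread:

```shell
# run as root (or prefix each command with sudo)
mkdir -p /srv/web
cp -a /home/ubuntu/web/. /srv/web/
chown -R caddy:caddy /srv/web

# then point the site block in /etc/caddy/Caddyfile at the new root:
#   root * /srv/web
# and reload the service:
systemctl reload caddy
```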


Yes, that seems to have done it! Thanks. I didn’t realize the static files would need execute permissions.


Great! :slight_smile:

But just to clarify real quick :innocent:

Your static files don’t need the execute permission.
Only the directories do, not any of the files in them.
Imagine /a/b/d.file:
If a user wants to read (or write to) the file /a/b/d.file, Linux checks whether that user has execute permission for /a, then for /a/b, and only then read (or write) permission for d.file itself.

You can think of it as “execute bits on a directory are needed to ls it”.
And every directory in the chain leading to your file has to get ls-ed to reach it.
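The /a/b/d.file example above can be reproduced in a scratch directory (run it as a regular user; root bypasses these permission checks):

```shell
rm -rf /tmp/dirx-demo
mkdir -p /tmp/dirx-demo/a/b
echo "hello" > /tmp/dirx-demo/a/b/d.file
chmod a+r /tmp/dirx-demo/a/b/d.file   # the file itself is world-readable

chmod a-x /tmp/dirx-demo/a/b          # drop the execute bit on the directory
cat /tmp/dirx-demo/a/b/d.file         # Permission denied (for non-root users)

chmod a+x /tmp/dirx-demo/a/b          # restore it
cat /tmp/dirx-demo/a/b/d.file         # readable again
```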


so then would it be reasonable to do something like this?

# chown caddy:caddy /var/www
# usermod -a -G caddy ubuntu
# chmod -R g+rw /var/www
# find /var/www -type d -exec chmod g+s {} \;

Ye, that looks alright :innocent:

Edit 2:
I misunderstood your
find /var/www -type d -exec chmod g+s {} \;
I think you want
find /var/www -type d -exec chmod g+x {} \;
instead. (g+s sets the setgid bit, so new files inherit the directory’s group - useful, but not the execute permission this thread is about.)
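For what it’s worth, the terminating \; (or +) is required, or find will complain about a missing argument to -exec. The corrected command on a scratch tree:

```shell
# "\;" terminates the -exec clause; "+" would batch many
# directories into a single chmod invocation instead.
rm -rf /tmp/findx-demo
mkdir -p /tmp/findx-demo/a/b
chmod g-x /tmp/findx-demo/a /tmp/findx-demo/a/b
find /tmp/findx-demo -type d -exec chmod g+x {} \;
ls -ld /tmp/findx-demo/a   # group has "x" again
```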

Original Edit:

Assuming your /var/www already has x for group, which it should by default.
Only /home/$USER and /root (which is also a home directory) are missing those.
An additional x, as in chmod -R g+rwx /var/www, would be required if - for whatever reason - the /var/www directory is missing the execute bit.

This could, for example, happen when your umask is very restrictive.
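A quick illustration of that umask effect (in a subshell, so the umask change doesn’t leak into your session):

```shell
rm -rf /tmp/umask-demo
(
  umask 077                       # very restrictive: nothing for group/other
  mkdir -p /tmp/umask-demo/strict
)
ls -ld /tmp/umask-demo/strict     # drwx------ : no rwx for group or other
```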

There are a few pointers on this if you want to go down that rabbit hole in
