Hi there,
The website is working well, as it should.
However, I would like to make it as secure as possible.
I ran an automated pentest with Acunetix. I was able to fix some mid/low-level vulnerabilities, but I am not able to fix the “FastCGI Unauthorized Access Vulnerability” that occurs because “It was confirmed that the FastCGI port 9000 is publicly accessible.”
Acunetix proposes this fix: “The FastCGI port should not be publicly accessible. FastCGI should be configured to listen only on the local interface (127.0.0.1) or to use a unix socket.”
My php_fastcgi directive is indeed listening on 127.0.0.1, but the error still persists.
5. What I already tried:
I tried changing the php_fastcgi directive to php (my containerized PHP service), but in that case I stumble upon a 504 error: the PHP files do not exist.
When running inside a container, 127.0.0.1 refers to this same container. The Caddy container won’t have php-fpm running, so you need to tell Caddy to talk to the other container. So that means doing it like this: php_fastcgi php:9000 (where php is the name of your other docker service)
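To illustrate, here is a minimal Caddyfile sketch under the assumption that your docker-compose PHP-FPM service is named php, listens on port 9000, and shares a Docker network with the Caddy container (the site address and webroot are placeholders):

```
# Replace the site address and root with your own values.
example.com {
	root * /var/www/html
	# "php" is the docker-compose service name, resolved via Docker's
	# internal DNS on the shared network; 127.0.0.1 would only reach
	# the Caddy container itself.
	php_fastcgi php:9000
	file_server
}
```

Because port 9000 is only exposed on the internal Docker network (not published to the host), it is not publicly accessible, which addresses the Acunetix finding.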
However, if I remove this network rule, my website is no longer available and I stumble upon an ERR_SSL_PROTOCOL_ERROR.
It still persists after a docker-compose down -v and docker-compose up -d
However my grafana dashboard is still available, so… I do not know what to think…
What's in your logs? How does it look if you make the request with curl -v? You omitted the domain from your Caddyfile in your post, so it’s unclear to me whether you’re trying to use local HTTPS or not. Is that the case?
Oh yeah, I just noticed the domain is missing.
Well, not the domain, but the IP. It is supposed to be a local website available to the whole LAN.
I tried localhost, 127.0.0.1, and 192.168.19.128 (the host IP).
I recommend using :80 as the site address instead of 192.168.19.128 if it’s only meant to be accessible locally.
When you use an IP address as the site address, Automatic HTTPS is enabled, with Caddy’s internal CA. This means that the certificates are not issued by a public CA, so they will not be trusted by any browsers/clients by default, unless you install the internal CA’s root certificate to your system/browser trust stores.
If you’re just serving this locally, then HTTPS isn’t really necessary. (Also, remove the Strict-Transport-Security header, otherwise your browser will forever “remember” to redirect to HTTPS and you won’t be able to access the site over HTTP).
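For example, a plain-HTTP Caddyfile along those lines might look like this (the webroot and upstream name are assumptions, carried over from a typical docker-compose PHP setup):

```
# :80 matches any hostname on port 80, so the site is reachable by
# localhost, the host IP, or any LAN address, all over plain HTTP.
:80 {
	root * /var/www/html
	php_fastcgi php:9000
	file_server
	# Note: no Strict-Transport-Security header here; HSTS would make
	# browsers permanently redirect to HTTPS and break plain-HTTP access.
}
```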
If you absolutely need HTTPS, but aren’t making this site publicly accessible, it’s much more complicated to set up. Your options:
Use a real domain, but use the DNS challenge to have a publicly trusted certificate issued, while the site is not publicly accessible. The DNS challenge is the only way to get a publicly trusted certificate without it being publicly accessible. This involves building Caddy with a DNS plugin for your DNS provider. You could use DuckDNS for a free domain though (I wrote the DuckDNS plugin for Caddy).
Use local HTTPS, but this requires grabbing the root CA certificate from Caddy’s storage (i.e. /data/pki/authorities/local/root.crt) and installing it on all the devices from which you’ll be making requests to Caddy. This is all manual and annoying to do, and in some cases close to impossible on certain kinds of devices (big pain in the ass on a smart TV for example).
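As a sketch of the first option with DuckDNS, assuming you've built Caddy with the caddy-dns/duckdns plugin and the subdomain/token are placeholders you'd substitute:

```
# Built with: xcaddy build --with github.com/caddy-dns/duckdns
# Your DuckDNS token is read from the DUCKDNS_API_TOKEN env variable.
yoursubdomain.duckdns.org {
	tls {
		# Solves the ACME DNS-01 challenge via DuckDNS, so a publicly
		# trusted certificate is issued without the site being
		# publicly reachable.
		dns duckdns {env.DUCKDNS_API_TOKEN}
	}
	root * /var/www/html
	php_fastcgi php:9000
	file_server
}
```

Point the DuckDNS domain at your LAN IP (e.g. 192.168.19.128); clients on the LAN then get a trusted HTTPS connection with no root-CA installation needed.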
Yeah it is meant to be locally available, but within an organization network. So, I would like to keep HTTPS.
I would rather have an untrusted certificate (that I know is legit) than no certificate at all.
Is it not possible to just go with the certificate generated by Caddy? The warning saying the certificate can’t be trusted by the browser is not so annoying for us.
An untrusted certificate is essentially the same as no security at all. Anyone could perform a man-in-the-middle attack by inserting themselves between the client and server, and decrypting the connection from the server, then re-encrypting the connection with their own untrusted certificate on the way back to the client. Trust is the thing that makes it secure.
But that is the situation you have right now anyways. Caddy is serving an untrusted certificate, so your browser responds with ERR_SSL_PROTOCOL_ERROR because it doesn’t trust it.
P.S. I updated my comment above while you replied.
According to my understanding, there’s no problem with a self-signed certificate, despite the warnings. It is just the root CA that can’t be verified, since it’s not a recognized one.
Isn’t it the same for the ones generated by Caddy?
It is the same, except Caddy issues certificates from a CA that it maintains, rather than using actual self-signed certificates. A self-signed certificate is one that acts as its own CA, whereas Caddy sets up a chain. This makes it possible to have short-lived leaf certificates under a long-lived root CA certificate, and the root stays the same for the lifetime of the Caddy instance.
Well then that means that something else is preventing Caddy from receiving the request. You’ll need to do some digging to figure out what else in your system/setup might be getting in the way.