How to set up PHP FastCGI securely with Docker

1. Caddy version (caddy version):

v2.3.0 h1:fnrqJLa3G5vfxcxmOH/+kJOcunPLhSBnjgIvjXV/QTA=

2. How I run Caddy:

With a docker compose file:

services:

...
  caddy:
    image: caddy
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "2019:2019"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - $PWD/conf/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/html:/srv
      - $PWD/conf/caddy_data:/data
      - $PWD/conf/caddy_config:/config
      - $PWD/logs:/var/log/caddy
    network_mode: host

  php:
    image: mrnonoss/php8.0.5-pdo-pgsql
    container_name: php
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./html:/srv
      - ./conf/php.ini:/usr/local/etc/php/php.ini:ro
    network_mode: host
...

a. System environment:

Linux john@doe 5.4.0-72-generic #80-Ubuntu SMP Mon Apr 12 17:35:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
john@doe:~/case$ docker -v
Docker version 20.10.6, build 370c289
john@doe:~/case$ docker-compose -v
docker-compose version 1.25.0, build unknown

b. Command:

c. Service/unit/compose file:

d. My complete Caddyfile or JSON config:

	root * /srv
	file_server
	encode zstd gzip
	php_fastcgi 127.0.0.1:9000
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Xss-Protection "1; mode=block"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
		Content-Security-Policy "default-src 'self'"
		Referrer-Policy "strict-origin-when-cross-origin"
		Cache-Control "public, max-age=15, must-revalidate"
		Feature-Policy "accelerometer 'none'; ambient-light-sensor 'none'; autoplay 'self'; camera 'none'; encrypted-media 'none'; fullscreen 'self'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; payment 'none'; picture-in-picture *; speaker 'none'; sync-xhr 'none'; usb 'none'; vr 'none'"
		Server "No."
	}
	log {
		output file /var/log/caddy/access.log {
			roll_size 10MiB
			roll_keep 100
		}		
		level debug
	}
}

3. The problem I’m having:

Hi there,
The website is working well, as it should.
However, I would like to make it as secure as possible.

I ran an automated pentest with Acunetix. I was able to fix some mid/low-severity vulnerabilities, but I am not able to fix the “FastCGI Unauthorized Access Vulnerability”, which occurs because “It was confirmed that the FastCGI port 9000 is publicly accessible.”

Acunetix proposes this fix: “The FastCGI port should not be publicly accessible. FastCGI should be configured to listen only on the local interface (127.0.0.1) or to use a unix socket.”
My php_fastcgi directive is indeed listening on 127.0.0.1, but the error still persists.
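For illustration, here's roughly the kind of TCP connect probe a scanner like Acunetix performs to confirm the finding (a hypothetical sketch; the helper name is made up). The key point it demonstrates: with network_mode: host, PHP-FPM's port 9000 is bound directly on the host, so "Caddy connects to 127.0.0.1:9000" says nothing about which interfaces PHP-FPM itself listens on.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demo: a listener bound only to loopback answers on 127.0.0.1.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
print(is_port_open("127.0.0.1", port))  # True
server.close()
```

Running the same check against the host's LAN IP on port 9000 from another machine is essentially what the pentest confirmed: the FastCGI port is reachable from outside the box.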

4. Error messages and/or full log output:

5. What I already tried:

I tried changing the php_fastcgi directive to point at php (my PHP container service), but in that case I get a 504 error, as if the PHP files do not exist.

6. Links to relevant resources:

When running inside a container, 127.0.0.1 refers to this same container. The Caddy container won’t have php-fpm running, so you need to tell Caddy to talk to the other container. So that means doing it like this: php_fastcgi php:9000 (where php is the name of your other docker service)

Many thanks for the input

I already tried this directive, and in that case Caddy acts as if no PHP files exist in the root directory (504 error).

On the other hand, 127.0.0.1 works well, despite this security issue.

Oh right, that’s because you’re using network_mode: host.

If you stop using host mode, then the above would work correctly.
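For example, a minimal sketch based on the compose file above (with host mode removed, Compose puts both containers on a default bridge network where they can reach each other by service name):

```yaml
services:
  caddy:
    image: caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./conf/Caddyfile:/etc/caddy/Caddyfile
      - ./html:/srv
      - ./conf/caddy_data:/data
      - ./conf/caddy_config:/config

  php:
    image: mrnonoss/php8.0.5-pdo-pgsql
    volumes:
      - ./html:/srv
      - ./conf/php.ini:/usr/local/etc/php/php.ini:ro
    # No "ports:" entry here: port 9000 is reachable only from other
    # containers on the same Compose network, never from the LAN.
```

With this layout, php_fastcgi php:9000 resolves to the PHP container, and the Acunetix finding goes away because nothing publishes port 9000 on the host.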

Many thanks again @francislavoie

However, if I remove this network rule, my website is no longer available and I get an ERR_SSL_PROTOCOL_ERROR.

It still persists after a docker-compose down -v and docker-compose up -d.
However, my Grafana dashboard is still available, so… I do not know what to think…

What’s in your logs? How does it look if you make the request with curl -v? You omitted the domain from your Caddyfile in your post, so it’s unclear to me whether you’re trying to use local HTTPS or not; is that the case?

Oh yeah, I just noticed the domain is missing.
Well, not the domain, but the IP. It is supposed to be a local website, available to the whole LAN.
I tried localhost, 127.0.0.1, and 192.168.19.128 (the host IP).

Caddyfile:

192.168.19.128 {
	root * /srv
	file_server
	encode zstd gzip
	php_fastcgi php:9000
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Xss-Protection "1; mode=block"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
		Content-Security-Policy "default-src 'self'"
		Referrer-Policy "strict-origin-when-cross-origin"
		Cache-Control "public, max-age=15, must-revalidate"
		Feature-Policy "accelerometer 'none'; ambient-light-sensor 'none'; autoplay 'self'; camera 'none'; encrypted-media 'none'; fullscreen 'self'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; payment 'none'; picture-in-picture *; speaker 'none'; sync-xhr 'none'; usb 'none'; vr 'none'"
		Server "No."
	}
	log {
		output file /var/log/caddy/access.log {
			roll_size 10MiB
			roll_keep 100
		}		
		level debug
	}
}

Curl returns an SSL internal error:

$ curl -v https://192.168.19.128
*   Trying 192.168.19.128:443...
* TCP_NODELAY set
* Connected to 192.168.19.128 (192.168.19.128) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, internal error (592):
* error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error
* Closing connection 0
curl: (35) error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error

Caddy automatically redirects HTTP requests to HTTPS.
Without the docker host network directive, nothing happens in the logs.

I recommend using :80 as the site address instead of 192.168.19.128 if it’s only meant to be accessible locally.

When you use an IP address as the site address, Automatic HTTPS is enabled, with Caddy’s internal CA. This means that the certificates are not issued by a public CA, so they will not be trusted by any browsers/clients by default, unless you install the internal CA’s root certificate to your system/browser trust stores.

If you’re just serving this locally, then HTTPS isn’t really necessary. (Also, remove the Strict-Transport-Security header, otherwise your browser will forever “remember” to redirect to HTTPS and you won’t be able to access the site over HTTP).
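Concretely, a plain-HTTP version of the Caddyfile above could look like this (a sketch; the header and log blocks from the original config would carry over unchanged, minus Strict-Transport-Security, and php:9000 assumes the containers share a Compose network):

```
:80 {
	root * /srv
	file_server
	encode zstd gzip
	php_fastcgi php:9000
}
```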

If you absolutely need HTTPS, but aren’t making this site publicly accessible, it’s much more complicated to set up. Your options:

  • Use a real domain, but use the DNS challenge to have a publicly trusted certificate issued, while the site is not publicly accessible. The DNS challenge is the only way to get a publicly trusted certificate without it being publicly accessible. This involves building Caddy with a DNS plugin for your DNS provider. You could use DuckDNS for a free domain though (I wrote the DuckDNS plugin for Caddy).

  • Use local HTTPS, but this requires grabbing the root CA certificate from Caddy’s storage (i.e. /data/pki/authorities/local/root.crt) and installing it on all the devices you’ll be making requests to Caddy. This is all manual and annoying to do, and in some cases close to impossible on certain kinds of devices (big pain in the ass on a smart TV for example).
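For the first option, the site block might look something like this (hypothetical domain; assumes a Caddy binary built with the caddy-dns/duckdns plugin, e.g. via xcaddy, with the API token passed in as an environment variable, following the plugin’s README):

```
example.duckdns.org {
	root * /srv
	file_server
	php_fastcgi php:9000
	tls {
		dns duckdns {env.DUCKDNS_API_TOKEN}
	}
}
```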

Yeah, it is meant to be locally available, but within an organization’s network, so I would like to keep HTTPS.
I would rather have an untrusted certificate (that I know is legitimate) than no certificate at all.

Is it not possible to just go with the certificate generated by Caddy? The warning saying the certificate can’t be trusted by the browser is not so annoying for us.

An untrusted certificate is essentially the same as no security at all. Anyone could perform a man-in-the-middle attack by inserting themselves between the client and server, and decrypting the connection from the server, then re-encrypting the connection with their own untrusted certificate on the way back to the client. Trust is the thing that makes it secure.

But that is the situation you have right now anyways. Caddy is serving an untrusted certificate, so your browser responds with ERR_SSL_PROTOCOL_ERROR because it doesn’t trust it.

P.S. I updated my comment above while you replied.

I updated mine also ^^

According to my understanding, there’s no problem with a self-signed certificate, despite the warnings. It is just the root CA that can’t be verified, since it’s not a recognized one.

Isn’t it the same for the ones generated by Caddy?

It is the same, except Caddy issues from a CA that it maintains, rather than using actual self-signed certificates. A self-signed certificate is one that acts as its own CA, whereas Caddy sets up a chain. This makes it possible to have short-lived leaf certificates under a long-lived root CA certificate, and the root stays the same for the lifetime of the Caddy instance.

So the connections to the website would be secured?
So still better than no HTTPS for the internal policy :slight_smile:

Again, “not really” because there’s no trust, but yes, it would be an encrypted connection (that could trivially be intercepted).

I have to improve my knowledge regarding this topic, but it’s not Caddy-related. Sorry for the burden.

To come back to the problem: do you have an idea why it works with network_mode: host and not without?

No, which is why I was asking for logs to reveal the problem. You could add this to the top of your Caddyfile to show a bit more in the logs:

{
	debug
}

As long as the connection is not made, nothing shows up in the logs, even with the debug option.
I added the network_mode: host again, and it works…

Don’t know what to think… :confused:

Well then that means that something else is preventing Caddy from receiving the request. You’ll need to do some digging to figure out what else in your system/setup might be getting in the way.

In any case, many thanks for your help.
The funny thing is that my Grafana/Adminer dashboards are always accessible (in the same docker-compose).

Here is my full docker-compose file:

version: '3.6'
services:

  postgre:
    image: postgres:alpine
    container_name: postgre
    restart: unless-stopped
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - $PWD/data:/tmp/data
      - $PWD/databases:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: XXX
      POSTGRES_USER: XXX
      POSTGRES_DB: XXXX

  adminer:
    image: adminer
    container_name: adminer
    restart: unless-stopped
    network_mode: host
    ports:
      - 8080:8080

  caddy:
    image: caddy
    container_name: caddy
    restart: unless-stopped
    network_mode: host
    ports:
      - "80:80"
      - "443:443"
      - "2019:2019"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - $PWD/conf/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/html:/srv
      - $PWD/conf/caddy_data:/data
      - $PWD/conf/caddy_config:/config
      - $PWD/logs:/var/log/caddy

  php:
    image: mrnonoss/php8.0.5-pdo-pgsql
    container_name: php
    restart: unless-stopped
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./html:/srv
      - ./conf/php.ini:/usr/local/etc/php/php.ini:ro

  prometheus:
    image: prom/prometheus
    container_name: prometheus
    restart: unless-stopped
    network_mode: host
    ports:
      - "9090:9090"
    volumes:
      - $PWD/conf/prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: unless-stopped
    network_mode: host
    ports:
      - "3000:3000"

This topic was automatically closed after 30 days. New replies are no longer allowed.