1. Caddy version (caddy version): 2 (latest)
2. How I run Caddy:
In this case I am running the official caddy:latest Docker image, launched by a docker-compose.yml file:
a. System environment:
I’m running Docker images on my MacBook Pro M1 (macOS 12.4), with Docker Desktop for Mac v4.10.1 (82475).
b. Command:
docker-compose up
c. Service/unit/compose file:
caddy:
  image: caddy:latest
  restart: always
  volumes:
    - ./caddy/data:/data
    - ./caddy/config:/config
    - ./caddy/Caddyfile:/etc/caddy/Caddyfile
    - ./caddy/logs:/logs
    - ./php:/var/www/html
  ports:
    - "8888:80"
    - "8899:443"
  networks:
    - web-network
d. My complete Caddyfile or JSON config:
:80 {
	root * /var/www/html/mnr-be/webroot/
	encode gzip
	php_fastcgi php:9000
	file_server
}
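For anyone comparing this with the article: as I understand the Caddy docs, php_fastcgi is shorthand for a reverse_proxy using the FastCGI transport (plus try_files/rewrite handling), so the site above is roughly equivalent to this sketch (not my actual config):

```
:80 {
	root * /var/www/html/mnr-be/webroot/
	encode gzip
	# Only .php requests go to PHP-FPM; everything else is served statically
	@phpFiles path *.php
	reverse_proxy @phpFiles php:9000 {
		transport fastcgi {
			split .php
		}
	}
	file_server
}
```

(The real expansion also canonicalises directory requests and rewrites to index.php; see the php_fastcgi docs for the full form.)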
I have a customised php:7-fpm Docker image:
FROM php:7-fpm
WORKDIR /tmp
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php -r "if (hash_file('sha384', 'composer-setup.php') === '55ce33d7678c5a611085589f1f3ddf8b3c52d662cd01d4ba75c0ee0459970c2200a51f492d557530c71c15d8dba01eae') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer
RUN apt-get -y update && apt-get -y upgrade && apt-get install -y \
    libicu-dev \
    git \
    zip \
    unzip
RUN docker-php-source extract \
    && docker-php-ext-configure intl \
    && docker-php-ext-install -j$(nproc) intl \
    && docker-php-ext-configure pdo_mysql \
    && docker-php-ext-install -j$(nproc) pdo_mysql \
    && docker-php-source delete
Here’s the full docker-compose.yml file:
version: "3.9"

networks:
  web-network:

services:
  caddy:
    image: caddy:latest
    restart: always
    volumes:
      - ./caddy/data:/data
      - ./caddy/config:/config
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - ./caddy/logs:/logs
      - ./php:/var/www/html
    ports:
      - "8888:80"
      - "8899:443"
    networks:
      - web-network
  php:
    build: ./php
    tty: true
    restart: always
    volumes:
      - ./php:/var/www/html
    networks:
      - web-network
  mariadb:
    image: mariadb
    restart: always
    volumes:
      - ./mariadb/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: localhost
      MYSQL_DATABASE: mnr_dev_db
      MYSQL_USER: mnr_dev
      MYSQL_PASSWORD: slfid9fe898
    ports:
      - "23306:3306"
    networks:
      - web-network
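Since the article generates its certs on the host machine, my assumption is that Caddy inside the container can only see them if the directory is mounted in, and the tls paths in the Caddyfile must then use the in-container path. A sketch of what that might look like (assuming the mkcert output lives in ./caddy/certs; paths are my guesses, not tested):

```
  caddy:
    image: caddy:latest
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - ./caddy/certs:/certs:ro   # Caddyfile tls directives would then reference /certs/...
```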
3. The problem I’m having:
This works over HTTP only. I’m trying to set something up along the lines of this article: Run Local Development Over HTTPS Using Caddy (Traditional Setup) | by Ahmed Shendy | Medium
I.e. a local dev environment which serves a PHP back-end and a VueJS front-end via HTTPS. The reason I’m doing this is that recent changes to the WebKit implementation of cookie security and CORS have permanently broken my local MAMP environment: even if I get SSL certs installed on both the front-end and back-end domains, the browser refuses to propagate the session cookie, and I can’t make it work. So I’m looking for a solution where the front-end and back-end are seen as being on the same domain, leaving no CORS or cookie-security problems for the browser to choke on. If I can get this working, I can potentially deploy it as a production environment too.
4. Error messages and/or full log output:
There’s no log output. I just don’t really understand how to map what the article is doing onto what I’m doing, mainly because the article doesn’t deal with php-fpm, so it’s unclear to me how to merge the two Caddyfiles.
5. What I already tried:
I tried this:
frontend.foo.bar {
	tls ./certs/_wildcard.foo.bar.pem ./certs/_wildcard.foo.bar-key.pem
	reverse_proxy localhost:8080 {
		header_up Host "localhost"
		header_up X-Real-IP {remote}
		header_up X-Forwarded-Host "localhost"
		header_up X-Forwarded-Server "localhost"
		header_up X-Forwarded-For {port}
		header_up X-Forwarded-Proto {scheme}
	}
}
backend.foo.bar {
	tls ./certs/_wildcard.foo.bar.pem ./certs/_wildcard.foo.bar-key.pem
	root * /var/www/html/mnr-be/webroot/
	encode gzip
	php_fastcgi php:9000 {
		header_up Host {host}
		header_up Origin {host}
		header_up X-Real-IP {remote}
		header_up X-Forwarded-Host {host}
		header_up X-Forwarded-Server {host}
		header_up X-Forwarded-Port {port}
		header_up X-Forwarded-For {remote}
		header_up X-Forwarded-Proto {scheme}
		header_down Access-Control-Allow-Origin https://frontend.foo.bar
		header_down Access-Control-Allow-Credentials true
	}
	file_server
}
docker-compose up runs without error… but the virtual hosts yield:
ERR_CONNECTION_REFUSED
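For completeness, one variant I considered but haven’t tested: dropping the mkcert files entirely and letting Caddy mint certs from its own built-in local CA via tls internal (the browser would then need to trust Caddy’s root certificate). A sketch:

```
backend.foo.bar {
	# Caddy issues a cert from its internal, self-signed CA
	tls internal
	root * /var/www/html/mnr-be/webroot/
	encode gzip
	php_fastcgi php:9000
	file_server
}
```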
There are a few questions in my mind:
- I have php-fpm served via a site on port 80, while the article uses a reverse proxy for the back-end. I’m not sure how to reconcile that: does their TLS solution require a reverse proxy? If so, how do I do that for php-fpm? If not, how do I make the cert work with my php-fpm setup?
- The article is showing how to generate and install certs on the host machine but I’m not sure how that will map to docker.
- The article shows an /etc/hosts entry – is that relevant to a docker-based environment?
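For reference, my reading of the article is that the /etc/hosts entry just points the made-up domains at the host’s loopback address, which I assume still applies with Docker since the published ports answer on localhost. Something like (my assumption, using the hostnames from my attempt above):

```
127.0.0.1	frontend.foo.bar backend.foo.bar
```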