1. Caddy version (caddy version):
~$ caddy version
v2.4.6 h1:HGkGICFGvyrodcqOOclHKfvJC0qTU7vny/7FhYp9hNw=
2. How I run Caddy:
I am currently running it in a Docker container, building my own Dockerfile with xcaddy because I use several modules for DNS and security.
The ultimate goal is to run Caddy as a reverse proxy and authentication portal for all my Docker services. I am using this mostly as a learning experience; I know there are less elaborate ways to play D&D tabletop with friends, but I like over-complicating things, I guess.
FROM arm64v8/caddy:builder-alpine AS builder

RUN xcaddy build \
    --with github.com/greenpau/caddy-security \
    --with github.com/greenpau/caddy-trace \
    --with github.com/caddy-dns/googleclouddns

FROM arm64v8/caddy:alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
a. System environment:
OS: Ubuntu 20.04.4 LTS aarch64
Host: KVM Virtual Machine virt-4.2
Kernel: 5.11.0-1028-oracle
Uptime: 2 days, 20 hours, 50 mins
Packages: 795 (dpkg), 6 (snap)
Shell: bash 5.0.17
Resolution: 1024x768
Terminal: node
CPU: (4)
GPU: 00:01.0 Red Hat, Inc. Virtio GPU
Memory: 712MiB / 23996MiB
(I am running this on Oracle Cloud Infrastructure.)
b. Command:
~$ docker compose -f ./caddy-test.yml up -d
c. Service/unit/compose file:
Disclaimer: I know that Docker secrets can only be used in Swarm mode, so for now I am faking them with plain environment variables and ${PWD} paths. This is the compose file I've been using for testing, just to see if the _FILE convention works at all:
version: "3.7"

#INTERNET CONNECTIONS =============================================
networks:
  caddy-test:
    driver: bridge
    external: true

services:
  caddy:
    image: oracleserver/caddy:local
    container_name: caddy
    restart: unless-stopped
    build:
      context: ./caddy
      dockerfile: Dockerfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ${PWD}/caddy/Caddyfile:/etc/caddy/Caddyfile
      - ${PWD}/caddy/site:/srv
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - caddy-test
    environment:
      - GITHUB_CLIENT_ID_FILE=${PWD}/secrets/github_client_id # Do environment secrets need their file extensions?
      - GITHUB_CLIENT_SECRET_FILE=${PWD}/secrets/github_client_secret
      - GCP_CREDENTIALS=${PWD}/secrets/credentials.json # What about a file that must explicitly be JSON in order to be read? Will encrypting it as a Docker Swarm secret make it unreadable?
    extra_hosts:
      - "host.docker.internal:host-gateway"

#SERVICES =========================================================
  dashy: # FRONT PAGE
    image: lissy93/dashy
    container_name: dashy
    volumes:
      - ${PWD}/services/dashy/config/conf.yml:/app/public/conf.yml
    ports:
      - 30001:80
    environment:
      - NODE_ENV=production
    restart: unless-stopped
    healthcheck:
      test: [ 'CMD', 'node', '/app/services/healthcheck' ]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - caddy-test
    depends_on:
      - caddy

#STORAGE ==========================================================
volumes:
  caddy_data:
  caddy_config:
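For comparison, here is roughly what I understand the real Swarm version would look like once the secrets exist (a sketch only; the secret names are illustrative and I do not have this running):

```yaml
services:
  caddy:
    image: oracleserver/caddy:local
    secrets:
      - github_client_id
      - github_client_secret
    environment:
      # the _FILE convention points at the in-container secret path
      - GITHUB_CLIENT_ID_FILE=/run/secrets/github_client_id
      - GITHUB_CLIENT_SECRET_FILE=/run/secrets/github_client_secret

secrets:
  github_client_id:
    external: true
  github_client_secret:
    external: true
```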
d. My complete Caddyfile or JSON config:
Caddyfile
{
	debug
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
	order authenticate before respond
	order authorize before basicauth

	security {
		authentication portal myportal {
			crypto default token lifetime 3600
			backend github {env.GITHUB_CLIENT_ID} {env.GITHUB_CLIENT_SECRET}
			cookie domain example.com
			ui {
				links {
					"My Website" https://example.com:443/ icon "las la-star"
					"My Identity" "/whoami" icon "las la-user"
				}
				password_recovery_enabled no
			}
			transform user {
				match origin local
				action add role authp/user
				ui link "Portal Settings" /settings icon "las la-cog"
			}
			transform user {
				match realm github
				match sub github.com/oracleserver
				action add role authp/user
			}
		}

		authorization policy users_policy {
			set auth url https://auth.example.com:443/
			allow roles authp/admin authp/user
			acl rule {
				comment allow users
				match role authp/user
				allow stop log info
			}
			acl rule {
				comment default deny
				match any
				deny log warn
			}
		}

		authorization policy admins_policy {
			set auth url https://auth.example.com:443/
			allow roles authp/admin authp/user
			acl rule {
				comment allow users
				match role authp/user
				allow stop log info
			}
			acl rule {
				comment default deny
				match any
				deny log warn
			}
		}
	}
}

(dns_config) {
	tls {
		dns googleclouddns {
			gcp_project {env.GCP_PROJECT}
			gcp_application_default {env.GCP_CREDENTIALS}
		}
	}
}

auth.example.com {
	import dns_config
	authenticate with myportal
	root * /usr/share/caddy
	file_server
}

example.com {
	import dns_config
	authorize with users_policy
	reverse_proxy dashy:80
}
3. The problem I'm having:
I am using environment variables to configure the GitHub backend in the caddy-security module.
For example:
security {
	authentication portal myportal {
		crypto default token lifetime 3600
		backend github {env.GITHUB_CLIENT_ID} {env.GITHUB_CLIENT_SECRET}
		cookie domain example.com
But I also wanted to use Docker secrets. When I append "_FILE" to the environment variable names, the value that gets used is not the file's contents but the path of the secret file.
In theory, the value resolved from GITHUB_CLIENT_ID_FILE should be "LV99999999", but instead the module just sees "/var/run/secrets/github_client_id", right?
According to this source, I can write an entrypoint script that handles reading from the _FILE variables myself. Looking at the entrypoint script of something like Nextcloud, that does seem to be the case, but I wanted to ask you all to make sure I'm on the right track, whether this is already possible and I just did it wrong, or whether it's even worth the effort of using Docker secrets in the first place.
Note: this isn't unique to caddy-security, mind you. I don't think the _FILE convention is part of the base Caddy Docker image either, because I have the same issue with other Caddy modules, like the Google Cloud DNS one.
4. What I already tried:
I tried a straight copy-and-paste from the source I listed below, just to see what I would get and where to go from there.
FROM arm64v8/caddy:builder-alpine AS builder

RUN xcaddy build \
    --with github.com/greenpau/caddy-security \
    --with github.com/greenpau/caddy-trace \
    --with github.com/caddy-dns/googleclouddns

FROM arm64v8/caddy:alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT [ "/docker-entrypoint.sh" ]
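One thing I noticed while reading about ENTRYPOINT: once an ENTRYPOINT is set, the image's CMD becomes its arguments, so (if I understand the upstream image correctly) the Dockerfile would presumably also need to restore Caddy's default command so that the script's exec "$@" actually starts Caddy. Something like:

```dockerfile
ENTRYPOINT ["/docker-entrypoint.sh"]
# restore the default command from the official caddy image so that
# `exec "$@"` in the entrypoint actually launches Caddy
CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]
```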
# docker-entrypoint.sh
#!/bin/bash
set -e

# usage: file_env VAR [DEFAULT]
#    ie: file_env 'XYZ_DB_PASSWORD' 'example'
# (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of
#  "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
file_env() {
	local var="$1"
	local fileVar="${var}_FILE"
	local def="${2:-}"
	if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
		echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
		exit 1
	fi
	local val="$def"
	if [ "${!var:-}" ]; then
		val="${!var}"
	elif [ "${!fileVar:-}" ]; then
		val="$(< "${!fileVar}")"
	fi
	export "$var"="$val"
	unset "$fileVar"
}

exec "$@"
The error I got was this:
standard_init_linux.go:228: exec user process caused: exec format error
5. Links to relevant resources:
GitHub links to the Caddy modules I'm using:
References:
6. Ultimate questions:
- Is trying to write an entrypoint script the right approach? Do I even need Docker secrets? I am trying to implement "best practices" so I can learn them properly.
- I am not trying to run Caddy as a runnable container, so would adding a new entrypoint make it unusable as a service? (Side note: I do wish I knew how to format Caddyfiles from the host terminal without installing Caddy, but that's maybe a different topic.)
- Does the Docker version of Caddy already have an entrypoint script I can edit, or am I just not seeing it?
- Do the exec format errors occur because I have to somehow make the entrypoint specific to ARM64? Or am I doing something else wrong?
- Is it a good idea to use Docker Swarm to encrypt the credentials JSON used for Google Cloud DNS?