Caddy Reverse Proxy for Authentik

Hey there,

I have successfully deployed Authentik with Docker Compose in my Homelab.
Now I tried deploying it on a Hetzner VPS running Ubuntu 22.04 and Docker.
I have already created the DNS record for auth.hebel.schule.

I am running Caddy in another Docker container and pointed a reverse proxy entry at the Authentik instance.

My problem is, I just can’t get to the initial setup page of Authentik at https://auth.hebel.schule/if/flow/initial-setup/.

I had it working a few hours ago but messed it up, so at least I know it is possible.
Another problem: I can’t even test against the local IP in a browser, since the machine is a remote VPS and I’d rather not set up a window manager and everything that goes with it.
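Even without a desktop environment, the page can be probed from the VPS shell. A quick sketch with curl (assuming the Authentik server listens on port 9000, as in the compose file below):

```shell
# Ask Authentik directly, bypassing Caddy, to check whether the app itself responds:
curl -I http://127.0.0.1:9000/if/flow/initial-setup/

# Then go through Caddy, forcing the hostname to resolve to the local machine:
curl -kI --resolve auth.hebel.schule:443:127.0.0.1 https://auth.hebel.schule/if/flow/initial-setup/
```

If the first request returns a response but the second hangs or errors, the problem sits between Caddy and the Authentik container rather than in Authentik itself.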

I can post the config files, so maybe someone can point me in the right direction.
One short note: the container logs show no errors, not even in trace mode!

Docker Compose file for Authentik (I even tried host networking to rule out network issues, but that didn’t help):

services:
  postgresql:
    #container_name: authdb
    #networks:
      #- caddy
     # - backend
    network_mode: host
    image: docker.io/library/postgres:12-alpine
    restart: unless-stopped
    healthcheck:
      test:
        - CMD-SHELL
        - pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - database:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${PG_PASS:?database password required}
      POSTGRES_USER: ${PG_USER:-authentik}
      POSTGRES_DB: ${PG_DB:-authentik}
    env_file:
      - .env
  redis:
    #container_name: authredis
    network_mode: host
    #networks:
      #- caddy
     # - backend
    image: docker.io/library/redis:alpine
    command: --save 60 1 --loglevel warning
    restart: unless-stopped
    healthcheck:
      test:
        - CMD-SHELL
        - redis-cli ping | grep PONG
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - redis:/data
  server:
    #container_name: auth
    network_mode: host
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.4.2}
    restart: unless-stopped
    command: server
    #networks:
      #- caddy
     # - backend
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    volumes:
      - ./media:/media
      - ./custom-templates:/templates
    env_file:
      - .env
    ports:
      - ${COMPOSE_PORT_HTTP:-9000}:9000
      - ${COMPOSE_PORT_HTTPS:-9443}:9443
    depends_on:
      - postgresql
      - redis
  worker:
    #container_name: authworker
    network_mode: host
    #networks:
      #- caddy
     # - backend
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.4.2}
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    # `user: root` and the docker socket volume are optional.
    # See more for the docker socket integration here:
    # https://goauthentik.io/docs/outposts/integrations/docker
    # Removing `user: root` also prevents the worker from fixing the permissions
    # on the mounted folders, so when removing this make sure the folders have the correct UID/GID
    # (1000:1000 by default)
    user: root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./media:/media
      - ./certs:/certs
      - ./custom-templates:/templates
    env_file:
      - .env
    depends_on:
      - postgresql
      - redis
volumes:
  database:
    driver: local
  redis:
    driver: local
#networks:
#  backend:
#    driver: bridge
#  caddy:
#    external: true
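For reference, the non-host-network variant that the commented-out sections hint at would look roughly like this (a sketch; the `caddy` network name is taken from those comments and must already exist as an external Docker network):

```yaml
services:
  server:
    # ...image, command, environment as above, without network_mode: host...
    networks:
      - caddy     # shared with the Caddy container, so it can resolve this service by name
      - backend   # internal network for postgresql/redis

networks:
  backend:
    driver: bridge
  caddy:
    external: true   # must be created beforehand on the host
```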

.env File

PG_PASS=xx
AUTHENTIK_SECRET_KEY=xx
AUTHENTIK_ERROR_REPORTING__ENABLED=true
# SMTP Host Emails are sent to
AUTHENTIK_EMAIL__HOST=mail.your-server.de
AUTHENTIK_EMAIL__PORT=587
# Optionally authenticate (don't add quotation marks to your password)
AUTHENTIK_EMAIL__USERNAME=xxx@hebel.schule
AUTHENTIK_EMAIL__PASSWORD=xx
# Use StartTLS
AUTHENTIK_EMAIL__USE_TLS=true
# Use SSL
AUTHENTIK_EMAIL__USE_SSL=false
AUTHENTIK_EMAIL__TIMEOUT=10
# Email address authentik will send from, should have a correct @domain
AUTHENTIK_EMAIL__FROM=xxx@hebel.schule

AUTHENTIK_LOG_LEVEL=trace

Caddyfile

{
    email xxx@hebel.schule
    debug
}

#Authentik WebIF

auth.hebel.schule {
    reverse_proxy authentik-server-1:9000
}
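Note that an upstream like `authentik-server-1:9000` only resolves if Caddy and the Authentik server container share a user-defined Docker network. With `network_mode: host` on both containers, Docker's DNS is not available, and the upstream would instead be the host loopback (a sketch):

```
auth.hebel.schule {
    # Host networking: the Authentik server listens directly on the host.
    reverse_proxy 127.0.0.1:9000
}
```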

2. Error messages and/or full log output:

(No errors appear in the container logs, even with trace logging enabled; see the note above.)

3. Caddy version:

v2.8.0-rc.1 h1:OkAxqZMDUVP7jGtEpE+sh1NJTv9CkFWsopEmZ2eWAFc=

4. How I installed and ran Caddy:

Docker Compose

a. System environment:

Ubuntu 22.04 VM, Docker, Docker Compose

c. Service/unit/compose file:

services:
  caddy:
    build:
      context: .
      dockerfile: Dockerfile
    network_mode: host
    #networks:
      #- caddy
    container_name: caddy2
    #image: caddy:2.8
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
    volumes:
      - /home/wikiadmin/caddy2/Caddyfile:/etc/caddy/Caddyfile:ro
      - /home/wikiadmin/caddy2/site:/srv
      - /home/wikiadmin/caddy2/data:/data
      - /home/wikiadmin/caddy2/config:/config
      #- /home/wikiadmin/caddy/caddy.json:/etc/caddy/caddy.json

#networks:
#  caddy:
#    name: caddy
#    driver: bridge

d. My complete Caddy config:

(Identical to the Caddyfile shown above.)

Any help is appreciated.

What’s in your Caddy logs? Enable the debug global option and show your logs.

I just figured it out: I had defined the same network for all the containers, but forgot that I had to create the external network manually.
Once I did that, everything worked.
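In other words, a network marked `external: true` in a compose file is never created by `docker compose up`; it has to exist beforehand. A minimal sketch of the missing step (assuming the network is named `caddy`, as in the commented-out sections above):

```shell
# Create the shared bridge network once on the host.
# Every compose project that references it as external can then attach to it.
docker network create caddy

# Verify it exists:
docker network ls | grep caddy
```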


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.