Docker not finding Caddyfile

1. The problem I’m having:

I’m using the standard caddy docker container launched via docker-compose:

    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"

I’m trying to modify the global bind settings, but Caddy doesn’t seem to be picking up my Caddyfile.
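For context, this is the kind of Caddyfile I’m aiming for; as I understand the docs, `default_bind` is a global option and has to sit inside a global options block (a sketch, not my original file):

```
{
	default_bind [::]
}
```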

I’m mounting the caddyfile from the host via:

- /var/run/docker.sock:/var/run/docker.sock:ro
- /opt/kuma-monitor/caddydata/:/data
- /opt/kuma-monitor/caddyconfig/:/config
- /opt/kuma-monitor/Caddyfile:/etc/caddy/Caddyfile

If I start the container and cd to /etc/caddy, I can see the Caddyfile has been mounted into the container and the contents are correct:

docker exec -it kuma-monitor_caddy_1 /bin/sh
/ # cd /etc/caddy/
/etc/caddy # cat Caddyfile 
  default_bind [::]
/etc/caddy # 

However, when Caddy starts it is still trying to bind to IPv4, which makes me assume that it isn’t reading the Caddyfile.
I’ve tried putting some garbage into the Caddyfile to force an error (I was hoping that an error would prove that Caddy was reading my Caddyfile).
But I see no errors in the Caddy logs:

My bad Caddyfile:

lkla 223!@#%!@!# garbage to break caddy
  default_bind [::]

The logs with the bad caddyfile:

docker-compose up
Creating network "kuma-monitor_default" with the default driver
Creating network "kuma-monitor_ip6net" with the default driver
Creating kuma-monitor_caddy_1       ... done
Creating kuma-monitor_uptime-kuma_1 ... done
Attaching to kuma-monitor_uptime-kuma_1, kuma-monitor_caddy_1
uptime-kuma_1  | ==> Performing startup jobs and maintenance tasks
uptime-kuma_1  | ==> Starting application with user 0 group 0
uptime-kuma_1  | Welcome to Uptime Kuma
uptime-kuma_1  | Your Node.js version: 18.19.0
caddy_1        | {"level":"info","ts":1709765422.9876347,"logger":"docker-proxy","msg":"Running caddy proxy server"}
caddy_1        | {"level":"info","ts":1709765422.9891837,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//","//localhost:2019","//[::1]:2019"]}
caddy_1        | {"level":"info","ts":1709765422.9894218,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy_1        | {"level":"info","ts":1709765422.9894392,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
caddy_1        | {"level":"info","ts":1709765422.990294,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"","EnvFile":"","LabelPrefix":"caddy","PollingInterval":30,"ProxyServiceTasks":true,"ProcessCaddyfile":true,"ScanStoppedContainers":false,"IngressNetworks":"[proxy_network]","DockerSockets":[""],"DockerCertsPath":[""],"DockerAPIsVersion":[""]}
caddy_1        | {"level":"info","ts":1709765422.9910564,"logger":"docker-proxy","msg":"Connecting to docker events","DockerSocket":""}
caddy_1        | {"level":"info","ts":1709765422.9924078,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[ad44255a0712950d2b5ea4e85bc9d6dcc74603308d672a0112bcd8b1ede7a0f6:true proxy_network:true]"}
caddy_1        | {"level":"info","ts":1709765423.0031345,"logger":"docker-proxy","msg":"Swarm is available","new":false}
caddy_1        | {"level":"warn","ts":1709765423.005465,"logger":"docker-proxy","msg":"Container is not in same network as caddy","container":"dcbdbbe7f3f60a705d29bfe15bb36ddbbf9ccd839a791f7977c492a256a520a2","container id":"dcbdbbe7f3f60a705d29bfe15bb36ddbbf9ccd839a791f7977c492a256a520a2"}
caddy_1        | {"level":"info","ts":1709765423.0068781,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":" {\n\treverse_proxy *\n}\n"}
caddy_1        | {"level":"info","ts":1709765423.0073905,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\"}]}]}],\"terminal\":true}]}}}}}"}
caddy_1        | {"level":"info","ts":1709765423.0074458,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
caddy_1        | {"level":"info","ts":1709765423.009097,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"","remote_port":"51350","headers":{"Accept-Encoding":["gzip"],"Content-Length":["254"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
caddy_1        | {"level":"info","ts":1709765423.0105212,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//[::1]:2019","//","//localhost:2019"]}
caddy_1        | {"level":"info","ts":1709765423.0108244,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy_1        | {"level":"info","ts":1709765423.01086,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy_1        | {"level":"info","ts":1709765423.0112484,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000344880"}
caddy_1        | {"level":"info","ts":1709765423.0124874,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
caddy_1        | {"level":"info","ts":1709765423.0127246,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See for details."}
caddy_1        | {"level":"info","ts":1709765423.0131774,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy_1        | {"level":"info","ts":1709765423.013352,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy_1        | {"level":"info","ts":1709765423.0134156,"logger":"http","msg":"enabling automatic TLS certificate management","domains":[""]}
caddy_1        | {"level":"info","ts":1709765423.0155587,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy_1        | {"level":"info","ts":1709765423.01569,"logger":"admin.api","msg":"load complete"}
caddy_1        | {"level":"info","ts":1709765423.01602,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
caddy_1        | {"level":"warn","ts":1709765423.016368,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"295fd264-38cf-4f7b-a3c5-3ca6486a7163","try_again":1709851823.0163653,"try_again_in":86399.999999476}
caddy_1        | {"level":"info","ts":1709765423.0165293,"logger":"tls","msg":"finished cleaning storage units"}
caddy_1        | {"level":"info","ts":1709765423.0193162,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
uptime-kuma_1  | 2024-03-06T22:50:23Z [SERVER] INFO: Welcome to Uptime Kuma
uptime-kuma_1  | 2024-03-06T22:50:23Z [SERVER] INFO: Node Env: production
uptime-kuma_1  | 2024-03-06T22:50:23Z [SERVER] INFO: Inside Container: true
uptime-kuma_1  | 2024-03-06T22:50:23Z [SERVER] INFO: Importing Node libraries
uptime-kuma_1  | 2024-03-06T22:50:23Z [SERVER] INFO: Importing 3rd-party libraries
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Creating express and instance
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Server Type: HTTP
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Importing this project modules
uptime-kuma_1  | 2024-03-06T22:50:24Z [NOTIFICATION] INFO: Prepare Notification Providers
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Version: 1.23.11
uptime-kuma_1  | 2024-03-06T22:50:24Z [DB] INFO: Data Dir: ./data/
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Connecting to the Database
uptime-kuma_1  | 2024-03-06T22:50:24Z [DB] INFO: SQLite config:
uptime-kuma_1  | [ { journal_mode: 'wal' } ]
uptime-kuma_1  | [ { cache_size: -12000 } ]
uptime-kuma_1  | 2024-03-06T22:50:24Z [DB] INFO: SQLite Version: 3.41.1
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Connected
uptime-kuma_1  | 2024-03-06T22:50:24Z [DB] INFO: Your database version: 10
uptime-kuma_1  | 2024-03-06T22:50:24Z [DB] INFO: Latest database version: 10
uptime-kuma_1  | 2024-03-06T22:50:24Z [DB] INFO: Database patch not needed
uptime-kuma_1  | 2024-03-06T22:50:24Z [DB] INFO: Database Patch 2.0 Process
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Load JWT secret from database.
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: No user, need setup
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Adding route
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Adding socket handler
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Init the server
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVER] INFO: Listening on 3001
uptime-kuma_1  | 2024-03-06T22:50:24Z [SERVICES] INFO: Starting nscd

2. Error messages and/or full log output:

as above


3. Caddy version:

uptime-kuma_1 | 2024-03-06T22:50:24Z [SERVER] INFO: Version: 1.23.11

This information really should be at the top of the caddy log output.

4. How I installed and ran Caddy:

As above using docker-compose and the standard caddy docker image.

a. System environment:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.4 LTS
Release: 22.04
Codename: jammy

docker --version
Docker version 24.0.5, build 24.0.5-0ubuntu1~22.04.1

b. Command:

docker-compose up 

c. Service/unit/compose file:

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    volumes:
      - /opt/kuma-monitor/kumadata:/app/data
    ports:
      - 2052:3001
    labels:
      caddy.reverse_proxy: "* {{upstreams 2052}}"

  caddy:
    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp" # I assume for quic
    networks:
      - ip6net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/kuma-monitor/caddydata/:/data
      - /opt/kuma-monitor/caddyconfig/:/config
      - /opt/kuma-monitor/Caddyfile:/etc/caddy/Caddyfile
    restart: unless-stopped
    environment:
      - CADDY_INGRESS_NETWORKS=proxy_network
        #- CADDY_DOCKER_CADDYFILE_PATH=/data/CaddyFile

networks:
  proxy_network:
    name: 'proxy_network'
  ip6net:
    enable_ipv6: true
    ipam:
      config:
          #- subnet: 2401:da80:1000:2::/64
          - subnet: 2600:1900:4180:bfa7:0:0:0:0/64

d. My complete Caddy config:

lkla 223!@#%!@!# garbage to break caddy
  default_bind [::]

5. Links to relevant resources:

The Docker Hub page for the caddy image.

I’m following the below section of the documentation:

To override the default Caddyfile, you can mount a new one at /etc/caddy/Caddyfile:

$ docker run -d -p 80:80 \
    -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
    -v caddy_data:/data \

Of course I translated this into docker-compose volume mounts.

You need this env var, and set it to /etc/caddy/Caddyfile.

CDP generates your Caddyfile from labels. You need to tell it where to get the base Caddyfile otherwise.

In a separate post Francis mentions setting an environment variable so that Caddy loads the Caddyfile; however, I can’t find any doco on the environment variable:

The CDP docs also explain how you can set up a base Caddyfile if you prefer, you need to set an env var for it to get loaded.

I’ve tried adding a couple of different environment variables, with no change:

From the docker-compose:

      - DEBUG=${DEBUG}
      - CaddyfilePath=/etc/caddy/Caddyfile
      - CADDY_INGRESS_NETWORKS=proxy_network
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile

When I do a docker inspect on the caddy container, the above environment variables are present, and I can also see them in the container:

docker exec -it caddy /bin/sh
/ # 
/ # 
/ # env
/ # 

It’s documented here:

That env var is the same as setting the --caddyfile-path option when running CDP, if you were to customize the command. But changing the command is not as nice as just setting an env var.
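If I’ve understood that correctly, the env var and the flag are two routes to the same CDP setting; sketched as compose config:

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    environment:
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
    # equivalent, by customizing the command instead of the env var:
    # command: docker-proxy --caddyfile-path /etc/caddy/Caddyfile
```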

So it would appear that I have set the environment variable CADDY_DOCKER_CADDYFILE_PATH, but Caddy is ignoring it.


As an aside:

Can I suggest that we change the Docker documentation on Docker Hub, as it is misleading.

Under the basic usage section it states:

To override the default Caddyfile, you can mount a new one at /etc/caddy/Caddyfile:

$ docker run -d -p 80:80 \
    -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
    -v caddy_data:/data \

There is no mention of needing to set an environment variable.

I can’t find any doco on --caddyfile-path.

If I run (via the container entrypoint):

entrypoint: /bin/caddy run --config /etc/caddy/Caddyfile

Then I get an error that the Caddyfile is invalid. I’m assuming that the --config switch is for the JSON version of the Caddyfile.

When I try:

 entrypoint: /bin/caddy run --caddyfile-path /etc/caddy/Caddyfile

I get the error:

caddy          | Error: unknown flag: --caddyfile-path

Those docs are correct.

You’re not using the vanilla Caddy Docker image; you’re using lucaslorentz/caddy-docker-proxy, which is a totally different image. Not the same thing.

That’s because they’re arguments to the docker-proxy command, provided by lucaslorentz/caddy-docker-proxy. Not the run command.

Ok, so it looks like the --config switch is actually the correct flag to load the Caddyfile.
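That matches how the vanilla binary behaves, as far as I can tell: --config accepts a Caddyfile as well as JSON, with the caddyfile adapter inferred from the filename (or forced explicitly). A sketch of the equivalent entrypoint override, assuming the official caddy image rather than CDP:

```yaml
services:
  caddy:
    image: caddy:2-alpine
    volumes:
      - /opt/kuma-monitor/Caddyfile:/etc/caddy/Caddyfile
    # --adapter caddyfile is optional here; a file named "Caddyfile"
    # is detected automatically
    entrypoint: caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
```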

So is this a documentation issue around the docker image?
Am I trying to use the --caddyfile-path switch in the wrong place?

Here is my final docker-compose.yaml:

services:
  caddy:
    container_name: caddy
    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp" # I assume for quic
    networks:
      - ip6net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/kuma-monitor/caddydata/:/data
      - /opt/kuma-monitor/caddyconfig/:/config
      - /opt/kuma-monitor/Caddyfile:/etc/caddy/Caddyfile
    restart: unless-stopped
    environment:
      - CADDY_INGRESS_NETWORKS=proxy_network
    entrypoint: /bin/caddy run --config /etc/caddy/Caddyfile

So now that I can load the Caddyfile, I will resume my other post on how to get Caddy to work with IPv6, as the configuration there still isn’t working.

I appreciate the help.

You shouldn’t use lucaslorentz/caddy-docker-proxy if you’re just going to override the command anyway. Just use the official Docker image.

By overriding the command like that, you’re turning off CDP’s label to Caddyfile conversion stuff.

I’m not looking to override the command, I’m simply trying to find a way to get caddy to start using my Caddyfile.

Apologies if the following comes across as a bit harsh; the Caddy product looks great, your help through this process has been fantastic, and the aim of a web server with simplified configuration is a brilliant idea.

My aim with this feedback is to try and help improve the documentation around caddy so that other newbies have an easier time onboarding to caddy.

From someone new to Caddy, the documentation has led me down a number of false paths.

My path into caddy was starting from docker hub and it would appear the documentation there is incorrect:

To override the default Caddyfile, you can mount a new one at /etc/caddy/Caddyfile:

$ docker run -d -p 80:80 \
    -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
    -v caddy_data:/data \

My testing shows that this doesn’t work. Your advice on using an environment variable also suggests that this doesn’t work. Having said that, all my tests using the suggested environment variable also didn’t work.

The doco on the docker-proxy plugin is also built on assumptions about a user’s knowledge of Caddy. There is also no link to docker-proxy from the Docker Hub documentation, and it doesn’t mention the use of docker-proxy.

It’s not clear from reading the doco how the plugin relates to the Docker image, but from your responses it sounds like the Docker image is built on the plugin.
However this (to me) still doesn’t make sense, given my test failures.

From what I understand, a plugin is able to add features to the Caddy CLI; as I understand it now, this is where --caddyfile-path comes from.
However, I’m not working with the Caddy CLI but rather the Docker image.

So is the docker-proxy plugin built into the Docker image? If so, this is not clear.

If it is, then how do I pass --caddyfile-path to the entrypoint without changing the entrypoint or command, which you seem to be recommending against doing?

Some notes on the docker-proxy documentation:

The intro starts with:

How does it work?
The plugin scans Docker metadata, looking for labels indicating that the service or container should be served by Caddy.

Then, it generates an in-memory Caddyfile with site entries and proxies pointing to each Docker service by their DNS name or container IP.

Every time a Docker object changes, the plugin updates the Caddyfile and triggers Caddy to gracefully reload, with zero-downtime.

Ok, so the problem here is that it isn’t clear whether the plugin should be installed into the host’s CLI tooling or into the Docker image’s CLI tooling.

The image name sort of suggests that it’s in the image, but a new user shouldn’t be left to guess.

The introduction line:

This plugin enables Caddy to be used as a reverse proxy for Docker containers via labels.

My suggested alteration:

This plugin enables Caddy to be run as a reverse proxy within a Docker container and allows Caddy to be configured via Docker labels. The maintainers provide a Docker image which runs this plugin.

Labels to Caddyfile conversion

From the context it’s not clear if a ‘label’ is a Caddy concept or a Docker concept.

This section should start with a worked example. Currently it assumes that a user knows where to place the labels. Whilst I’ve used Docker containers quite a bit, I wasn’t familiar with the use of labels. A docker run command using labels and a docker-compose file using a label would be useful.

This section should probably start with:

Labels to Caddyfile conversion

Docker labels can be used to configure Caddy. Docker labels are mapped to Caddy directives, creating a virtual in-memory Caddyfile.

docker run --label 'caddy.respond=/ "Hello World" 200' lucaslorentz/caddy-docker-proxy:ci-alpine
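A docker-compose flavour of the same idea might also help, along the lines of the whoami example in the CDP README (the domain and service names are illustrative):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"
```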

So I’ve only just realised that CDP stands for Caddy Docker Proxy.
I’m not certain how I ended up using the CDP image rather than just the official Docker image.

Is the official Docker image based on CDP?

If not when is it appropriate to use which tooling?

No, they’re completely separate.

It depends on what you want to do.

If you want to use Docker labels to automatically generate your Caddyfile based on which containers are running, then CDP can do that.

If you want to write your Caddyfile yourself (or do anything else like use the Caddy JSON API etc), then the official image is what you should use.

I don’t think it did. You made assumptions that were incorrect, and those were not because of the docs. You thought the official image and CDP were the same or related, and they’re not. Nowhere do the docs of either imply that they are related. You can’t read the docs of one and assume it applies for the other.

It literally does work though. You probably used a Caddyfile that wasn’t listening on port 80 or something. That’s meant as a simple bare-minimum example.

That was about CDP, not about the official Docker image. But I think you realized that by now.

Yep, because they’re separate products. CDP is third-party maintained, not an official Caddy product (but we actively help answer questions about it because it’s super awesome and we know Lucas doesn’t have the time to focus on support).

Yeah, CDP is implemented as a plugin which provides an additional command to Caddy which changes how Caddy runs, so that instead of reading a config from file directly (i.e. the run command), it reads config from labels continually by watching the Docker socket for changes to running containers (i.e. the docker-proxy command).

You’re not expected to know this regarding the CLI, it’s an implementation detail. I only pointed to it because you led yourself down the wrong path and said “but it’s not documented” regarding env vars, so I pointed you to where it was.

It’s a Docker concept: Docker object labels | Docker Docs. The assumption is that you know enough about Docker before reading this. It’s specifically meant as an advanced use case for experienced users.

There’s tons of examples both in the README and in the examples/ directory in the repo.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.