Default caddy container is just broken

By default, Caddy uses its default configuration from /etc/caddy; it simply ignores the ENV variable pointing to the Docker volume mountpoint.

The solution I’ve found that works is to change the default command.

This behaviour is not mentioned on the Docker Hub page, and you have to check the container’s default command line to understand why your config is not working.
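
To see what the image actually runs, you can inspect it; a rough sketch (it assumes the caddy:2.3.0-alpine image is already pulled locally, and the expected output just reflects the config_file and adapter shown in the logs below):

docker inspect --format '{{json .Config.Cmd}}' caddy:2.3.0-alpine
# expected output (roughly): ["caddy","run","--config","/etc/caddy/Caddyfile","--adapter","caddyfile"]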

The ENV variables look fine to me, assuming that Portainer isn’t lying to me:

1. Caddy version (caddy version):

2. How I run Caddy:

I’m running it in docker container

docker run -d -p 80:80 -p 443:443 \
    -v static_otherdomain.tld:/websites/otherdomain.tld \
    -v static_funnydomain.tld:/websites/funnydomain.tld \
    -v caddy_data:/data \
    -v caddy_config:/config \
    caddy:2.3.0-alpine

a. System environment:

# cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

b. Command:

docker run -d -p 80:80 -p 443:443 -v caddy_data:/data -v caddy_config:/config caddy:2.3.0-alpine

c. Service/unit/compose file:

Just a plain docker command, with the config in a separate volume

d. My complete Caddyfile or JSON config:


/var/lib/docker/volumes/caddy_config/_data/caddy# cat caddyfile 
subdomain.funnydomain.tld {
        respond "subdomain.funnydomain.tld"
}

git.funnydomain.tld {
        reverse_proxy 172.19.0.2:3000
}

www.funnydomain.tld, funnydomain.tld {
        reverse_proxy 172.19.0.2:3000
}

www.otherdomain.tld, otherdomain.tld {
        root * /websites/otherdomain.tld
        file_server
}


3. The problem I’m having:

Caddy listens only on the HTTP port (port 80); no attempt to obtain a TLS certificate is made, and connections to port 443 time out since it isn’t listening on that port.

4. Error messages and/or full log output:

{"level":"info","ts":1612042946.67578,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"},
{"level":"info","ts":1612042946.676735,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]},
{"level":"info","ts":1612042946.6772525,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80},
{"level":"info","ts":1612042946.6775215,"msg":"autosaved config","file":"/config/caddy/autosave.json"},
{"level":"info","ts":1612042946.677561,"msg":"serving initial configuration"},
{"level":"info","ts":1612042946.6777406,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0001ff1f0"},
{"level":"info","ts":1612042946.6818185,"logger":"tls","msg":"cleaned up storage units"},

5. What I already tried:

Changing the default command-line argument to point to the configuration on the Docker volume seems to work.
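
Something along these lines should do it; the exact flags are only a sketch, and the path assumes the Caddyfile sits at /config/caddy/caddyfile inside the container (matching where it lives on the caddy_config volume):

docker run -d -p 80:80 -p 443:443 \
    -v static_otherdomain.tld:/websites/otherdomain.tld \
    -v static_funnydomain.tld:/websites/funnydomain.tld \
    -v caddy_data:/data \
    -v caddy_config:/config \
    caddy:2.3.0-alpine \
    caddy run --config /config/caddy/caddyfile --adapter caddyfile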

6. Links to relevant resources:

https://hub.docker.com/_/caddy

Looks like you never mounted your Caddyfile, plus you named it with a lowercase C. The Docker container is looking for the file at /etc/caddy/Caddyfile, and Linux filesystems are case-sensitive.

The Docker container ships with a default Caddyfile, so it will still run if you don’t override it.
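
So the simplest fix is to mount your Caddyfile at the path the image already expects; roughly like this, where $PWD/Caddyfile is just an example host path:

docker run -d -p 80:80 -p 443:443 \
    -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
    -v caddy_data:/data \
    -v caddy_config:/config \
    caddy:2.3.0-alpine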

I’m more surprised that on some OSes it’s not the case :stuck_out_tongue:

Well, I can’t speak for how this container was configured before I started changing its configuration; however, this worked before I re-created it.

And I did mention the default cmdline. The point of this post is not really to get help (though I think it may help someone later when they search this forum for a solution to their problem); it’s more to complain about the way everything is described on Docker Hub, and I hope that someone in charge of that page will change it to be more specific.

I still think the exact name of the config file Caddy will be looking for should be explicitly mentioned in the section about attaching the config volume.

The working directory in the snippet you’re referring to is the Docker volume (I think it was /var/lib/docker/volumes/caddy_config/_data/caddy/).

And as we can see from my original post, it causes confusion when someone mounts a config volume and then Caddy uses its default config anyway.

In the container, that’s /config/caddy, not /etc/caddy. Please review the docs on Docker Hub.

OK, I’ve read that page again, and now I understand why I see a problem in it and why it’s not really obvious:

The basic usage section mentions that the Caddyfile should be overridden, so someone who wants their config split into a few files, or simply kept on an external volume, will read further and end up in the “Automatic TLS with the Caddy image” section, which gives us this snippet:

docker run -d -p 80:80 -p 443:443 \
    -v /site:/srv \
    -v caddy_data:/data \
    -v caddy_config:/config \
    caddy caddy file-server --domain example.com
Note in particular the -v caddy_config:/config mount.

Then in the ENV variables of the container we have:

XDG_CONFIG_HOME	/config

The first sentence on the page states:

Caddy requires write access to two locations: a data directory, and a configuration directory.

and links to: Conventions — Caddy Documentation

Which states that:

If the XDG_CONFIG_HOME environment variable is set, it is $XDG_CONFIG_HOME/caddy

So one could assume that Caddy is going to treat it the way (for example) nginx treats /etc/nginx/conf.d/ on Debian, which loads all files from that directory (only those ending in .conf, but still).

Yeah, that’s not the case. The /config directory is where Caddy saves its autosave.json, which is basically a backup of your current running configuration. It’s mostly relevant for users who primarily use Caddy’s config API instead of running with a Caddyfile.
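
If you do want that nginx-style conf.d behaviour, the Caddyfile import directive can glob a directory; a rough sketch (the conf.d path is just an example, with each imported file containing ordinary site blocks):

# cat /etc/caddy/Caddyfile
import /etc/caddy/conf.d/*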

Please have someone edit the Docker Hub page or the linked doc about XDG_CONFIG_HOME to say this, because I’m most likely neither the first nor the last to make this wrong assumption.

This topic was automatically closed after 30 days. New replies are no longer allowed.