Basic docker compose setup failing

Hi! I’m excited to start using caddy as a reverse proxy for my docker containers with docker compose, but after a few days of troubleshooting, I’m starting to think there may be an issue with the image that’s published to dockerhub. I’m running docker v19.03.5 and docker-compose v1.24.1 on macOS Catalina v10.15.2.

My Caddyfile reads:

localhost

respond "Hello, world"

My docker-compose.yml reads:

version: '3'
services:
  caddy:
    image: caddy/caddy:alpine
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    ports:
      - "2015:2015"

I run docker-compose up, which yields no errors. Then I run curl localhost:2015, which returns no response body, while the server logs an error:

http.log.error	strconv.Atoi: parsing "Hello, world": invalid syntax	{"request": {"method": "GET", "uri": "/", "proto": "HTTP/1.1", "remote_addr": "192.168.16.1:39574", "host": "localhost:2015", "headers": {"Accept": ["*/*"], "User-Agent": ["curl/7.64.1"]}}, "status": 500, "err_id": "nq7jav3xm", "err_trace": "caddyhttp.StaticResponse.ServeHTTP (staticresp.go:114)"}

If I change the Caddyfile's first line from localhost to foo.local and run docker-compose up, it fails with the error:

run: loading initial config: loading new config: http app module: start: tcp: listening on :443: listen tcp :443: bind: permission denied

I’ve tried adding several combinations of ports to my docker-compose.yml file to resolve this, although I would expect that if this were the problem, there would be no error and I simply wouldn’t be able to access caddy over port 443:

- "443:443"
- "443:2015"
- "2015:443"

None of these had any effect.

I’ve also tried adding a capability to the service in my docker-compose file per some of the research on this forum I’ve done, which had no effect either:

services:
  caddy:
    ...
    cap_add:
      - CAP_NET_BIND_SERVICE

When I try checking the version via docker-compose exec caddy caddy version, it says simply (devel).

For context, my ultimate goal is to have 3 reverse proxied sites pointing to other docker containers running in my docker-compose file. In local development I’ll use x.mysite.local, y.mysite.local, and z.mysite.local. In production I’ll use x.mysite.com, y.mysite.com, z.mysite.com.

Is it me, or is there something wrong with the docker image?

Hi @wilson29thid! Welcome aboard.

The respond "body content" shorthand was introduced in Caddy v2.0.0-beta.13, while the Docker image is still on beta.12. In beta.12, use:

respond 200 {
    body "Hello, world"
}
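With that change in place, restarting with docker-compose up and repeating your earlier check should print the body, something like:

$ curl localhost:2015
Hello, world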

Thanks @Mohammed90, that fixed the first error. But any time I change it from localhost to something else, like x.mysite.local, I get the bind permission denied error about port 80 or 443. How do I get around that?

Seems like you’re running into this issue: 443: bind: permission denied · Issue #21 · caddyserver/caddy-docker · GitHub

I think the official Docker image for v2 doesn't have permission to bind to low port numbers.

What's happening here is that because Caddy sees a valid domain, it enables Automatic HTTPS, which makes it bind to ports 80 and 443 so Let's Encrypt can connect and complete the TLS challenge. Since your domain ends in .local, I assume this is just for development, so you probably don't need that yet.

You can use :2015 as the site address for now to let any connection through on that port (i.e. the port you configured in your docker-compose file), though this means TLS will be disabled.
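For example, a minimal development Caddyfile (just a sketch, using the beta.12-compatible respond syntax from above) could be:

:2015

respond 200 {
    body "Hello, world"
}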

Thanks @francislavoie, I arrived at that conclusion as well. The trouble is I can't figure out a way around it. Even if I use the address http://x.mysite.local:2015 it still throws that error and dies. I think it's because it's still trying to open port 80 for the ACME challenge.

Perhaps the bigger question is how we allow that port to be opened, so the ACME challenge will work in production? This isn't a special use case, is it? Isn't this meant to work out of the box?

Could you post your logs and your Caddyfile? The more complete the logs, the easier it is for us to understand what’s going on.

That said, I did some digging with the help of @hairyhenderson (who is maintaining the official Caddy v2 Docker image) and I think we can suggest a temporary fix for errors when Caddy attempts to bind to port 80/443.

Backstory: There was some debate about whether the Docker image should be built to run as root or as a non-root user. There are some security concerns (mostly theoretical, very hard to exploit in practice) with running containers as root, i.e. concerns about container-escape vulnerabilities. Because of that, it was set up to run as a non-root user.

The problem is that running as non-root means that, by default, processes don't have permission to bind to ports under 1024. This causes some issues for Caddy, because there are a lot of assumptions in place to get Automatic HTTPS working. When Caddy tries to issue a cert from Let's Encrypt with the HTTP challenge method, it needs to bind to port 80 to let LE connect. This currently isn't configurable via the Caddyfile (but it seems to be via JSON; see alternate_port).

This is all pretty complicated, and it shouldn’t be that difficult in the first place for users of Docker to get set up, so @hairyhenderson opened an issue to potentially switch back to root in the official image later on.

Anyways, in your docker-compose.yml, I think you can add the following under your caddy service to give the container permission to bind to low ports:

sysctls:
  net.ipv4.ip_unprivileged_port_start: 0
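If that sysctl is supported by your kernel, a full service definition might look something like this (a sketch based on your compose file; note that ports 80 and 443 also need to be published for Automatic HTTPS to be reachable):

services:
  caddy:
    image: caddy/caddy:alpine
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    ports:
      - "80:80"
      - "443:443"
    sysctls:
      # allow non-root processes to bind to any port
      net.ipv4.ip_unprivileged_port_start: 0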

This is the docker-compose documentation: Compose file | Docker Documentation
An open issue about this problem in Docker and where I learned about that sysctl option: Can't bind to privileged ports as non-root · Issue #8460 · moby/moby · GitHub

Also, because I bugged @hairyhenderson about it, there's a new Docker image on Docker Hub which was just built for beta.13, so you should be able to use your original respond directive with that image, I think.

Thanks for diving in, @francislavoie! The sysctls setting threw a new error:

ERROR: for reverse-proxy  Cannot start service reverse-proxy: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"write sysctl key net.ipv4.ip_unprivileged_port_start: open /proc/sys/net/ipv4/ip_unprivileged_port_start: no such file or directory\"": unknown

But, based on your other comments, I added user: root to the service definition in my docker-compose.yml file and that allowed me to progress past the port binding errors.
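For anyone following along, that's a one-line addition to the service definition (a sketch based on the compose file above):

services:
  caddy:
    image: caddy/caddy:alpine
    # run as root so Caddy can bind to ports 80/443
    user: root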

Now I just need to figure out how to use self-signed certs in local development (with the x.y.local hostnames) and Let's Encrypt in prod (with the x.y.com hostnames)! Perhaps I'll post another topic about that one because I can't seem to crack it :confused:


What version of Docker and what Linux kernel are you running? That option seems to be available for kernel 4.11+ only (see Can't bind to privileged ports as non-root · Issue #8460 · moby/moby · GitHub)

I'm running docker v19.03.5 and docker-compose v1.24.1 on macOS Catalina v10.15.2. Regarding the Linux kernel, I assume you mean in the caddy image? I've tried caddy/caddy:alpine and caddy/caddy:scratch. I'm not sure what the here-be-dragons one is; I assume an unstable development channel.

Ah, I didn't realize you were running on Mac. Docker on Mac works by running a lightweight Linux VM and running containers inside it. I don't know which Linux version and kernel it uses, though.

Anyways, seems like running with user: root should do the trick for you now.

The here-be-dragons tag is just the “latest” but without actually tagging “latest”. It’s a bit dangerous to run against that tag, because you’re more susceptible to breaking changes.

Caddy v2 doesn't yet have anything in place for provisioning self-signed certs. For now, I suggest looking into GitHub - FiloSottile/mkcert: A simple zero-config tool to make locally trusted development certificates with any names you'd like. You can use a docker volume to share the certs with Caddy and use the tls directive to specify the cert and key files to use.
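A sketch of that wiring, assuming mkcert-generated files live in ./certs on the host (paths and filenames here are illustrative):

services:
  caddy:
    image: caddy/caddy:alpine
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      # share the locally-generated certs with the container
      - ./certs:/certs

and then in the Caddyfile:

x.mysite.local {
    tls /certs/x.mysite.local.pem /certs/x.mysite.local-key.pem
}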

Oh, I'm using mkcert. I wasn't sure if there was a specific place to mount the cert, but I can activate it using absolute paths (tls /root/certs/certname.crt /root/certs/certname.key). The trouble is (a) that disables other auto-HTTPS things like redirecting http to https, and (b) more importantly, that line of my Caddyfile should only apply in development mode. In prod I'd like to use Let's Encrypt. Should I have two separate Caddyfiles (one for prod, one for dev), or is there an easier way? Surely most Caddy users need to do this too?

Yeah, typically people have separate production and development configs. You never run with the same domain name in production and development anyways.

For the http → https redirect, you can write your Caddyfile like this:

https://<domain> {
    ... your usual directives
}

http://<domain> {
    redir https://<domain>{uri}
}

Automatic HTTPS is more of a production use case, so a bit more manual work is required in development.

Ah, okay, I was planning to use multiple hostnames, e.g. x.y.local, x.y.com { ... } to share configs between dev and prod. But I’ll try separating them if that’s what people do. Thanks for your time.

Thanks for looking into this. I think I have a similar problem to the one described above.
I tried running the container as root and as a non-root user, without success.
I also added capabilities to the container (note the different syntax; the suggested sysctls entry led to an error message for me):

sysctls:
  - net.ipv4.ip_unprivileged_port_start=0
cap_add:
  - CAP_NET_BIND_SERVICE

Caddy outside of Docker works just fine, but inside Docker it cannot get the Let's Encrypt certificate. I'm also confused about which tag I should use; currently I'm going with alpine.

Ultimately, the decision was to switch the container back to using root by default for the official image. See Consider running as root by default · Issue #24 · caddyserver/caddy-docker · GitHub

The tag situation for the docker image is still up in the air, but using the alpine tag for now is fine. Once v2 hits stable, each release will be tagged by the version, and at that point it would be prudent to use a specific version tag instead of something like latest in case a new version contains breaking changes.
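For example, once stable versions are tagged, pinning could look like this in docker-compose.yml (the version number here is illustrative):

services:
  caddy:
    # pin to an exact release rather than a moving tag like latest
    image: caddy/caddy:2.0.0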

May I suggest adding an "lts" tag for long-term support versions without breaking changes? :slight_smile:

Caddy uses semantic versioning, so LTS isn’t necessary.
