Dockerised Caddy gives gateway IP address instead of remote IP in $remote_addr

1. Caddy version (caddy version):

Version 2.2.1, built with caddy-auth-jwt:

FROM caddy:2.2.1-builder AS builder

RUN xcaddy build \
    --with github.com/greenpau/caddy-auth-jwt

FROM caddy:2.2.1

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
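
The image is presumably built and tagged to match the compose file below, with something like:

docker build -t caddy-jwt .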

2. How I run Caddy:

a. System environment:

I run Caddy in Docker, on a bridge network, on my Synology NAS.
As the DNS challenge for the OVH provider is not yet supported, I import my LE certificate into Caddy.

b. Command:

docker-compose up -d caddy

c. Compose file:

  caddy:
    container_name: caddy
    restart: always
    image: caddy-jwt
    #build: .
    network_mode: My_bridge
    volumes:
      - /volume1/docker/Caddy/caddyfile:/etc/caddy/Caddyfile
      - /volume1/docker/Caddy/:/data
      - /volume1/docker/Caddy/logs:/caddy/logs/
      - /volume1/docker/CertbotOVH/Certificates:/Certificates:ro
    ports:
      - 2828:443

d. My complete Caddyfile:

my-domain.com {

    tls /Certificates/fullchain.pem /Certificates/privkey.pem

    log {
      format json
      output file /caddy/logs/caddy.log {
        roll_size 10MiB
        roll_keep 2
      }
    }

    reverse_proxy 172.19.0.1:7773

    redir /service1 /service1/
    route /service1/* {
      uri strip_prefix /service1
      reverse_proxy service1:80
    }

    redir /service2 /service2/
    route /service2/* {
      jwt {
        primary yes
        allow group Admin
        trusted_tokens {
          static_secret {
            token_name <MyToken>
            token_secret <MySecret>
          }
        }
        auth_url https://my-domain.com/service1
      }
      reverse_proxy service2:8787
    }

    redir /service3 /service3/
    route /service3/* {
      jwt
      reverse_proxy service3:9898
    }
}

subdomain.my-domain.com {
    log {
      output file /caddy/logs/caddy.log {
        roll_size 10MiB
        roll_keep 2
      }
    }
    tls /Certificates/fullchain.pem /Certificates/privkey.pem
    reverse_proxy 192.168.1.115
}

3. The problem I’m having:

I forward port 443 of my router to port 2828 of the NAS, so Caddy can route requests to the right service.
My problem is that the IP shown in the logs of my applications is the gateway of my Docker bridge and not the remote IP address. Here is the log of service1, for example:

date":"2020-11-10T10:37:03Z","type":"error","username":"sss","ip":"172.19.0.1","message":"Login Function - Wrong Password

In fact, I think my applications’ interpretation is correct, because in Caddy’s log the remote_addr is also the bridge gateway:

....,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_addr":"172.19.0.1:57552","proto":"HTTP/2.0",...

What am I missing?

4. What I already tried:

I searched for the same issue in this forum and on the net, with no luck. I also searched for a “trusted_proxies” kind of setting on the Caddy website, with no luck either.

But when I used Caddy behind the Synology NGINX reverse proxy, I got the correct remote IP!
Now I want to use Caddy as my primary reverse proxy (because Caddy is good and efficient 🙂), but the IP address is not the remote one.

I think I’ve encountered a similar thing. Docker essentially runs NAT with port forwarding (as defined in the “ports” section of your compose file), so from Caddy’s perspective all connections are indeed coming from the bridge gateway.

You’ll want to put this container in host network mode, so it attaches directly to the host’s network stack.
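
Something like this in your compose file (an untested sketch; note that the ports mapping is ignored in host mode):

  caddy:
    container_name: caddy
    restart: always
    image: caddy-jwt
    network_mode: host    # attach directly to the host's network stack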

You’re right, this could be the way.
In fact, I already tried to run Caddy on the host network, but Caddy didn’t start because ports 443 and 80 are already in use on a Synology NAS, and I’m not confident enough to free them…

Do you know how to force Caddy to use different listening ports?

You’d have to reconfigure it in the Caddyfile by adding :2828 (as seen in your compose file, but any other free port will work) to the end of the domains. If you don’t define a port, Caddy defaults to 443 with an automatic redirect.
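
For example, your first site block would become (2828 taken from your compose file, but any free port works):

my-domain.com:2828 {
    # ...same contents as before...
}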

Thanks, it worked for port 443! But Caddy still tries to listen on the HTTP port 80:

run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use

Maybe Caddy still tries to listen on port 80 because it needs it for the LE certificate?

Yeah, Caddy needs ports 80 and 443 to solve the HTTP and TLS-ALPN ACME challenges respectively, and port 80 to do the HTTP->HTTPS redirects.

You can use the global option auto_https off to turn off this behaviour. You could also set http_port to something else to make it use a different port.
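
For example, in a global options block at the top of your Caddyfile (8080 is just an example port):

{
    # disable automatic HTTPS entirely (you load your own certs anyway)
    auto_https off

    # or keep it, but move the HTTP listener off port 80
    http_port 8080
}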

It worked, thank you! I can run Caddy on the Docker host.
But now I get the ::1 address. Any clue?

I’m not sure what you mean, but ::1 is the IPv6 equivalent of 127.0.0.1, i.e. localhost.

Ok, I see. Maybe using the Docker host network is not the solution, because the applications I need to reverse proxy are on another network, and communication between the Docker host network and a custom bridge network is complicated (IP addresses have to be used instead of container names).

@francislavoie do you have an idea for getting the remote IP address in Caddy running in a Docker bridge network (i.e. for my problem described in the first post)?

Well, it depends on how the connections are handled. If you have something acting as a gateway that proxies the request (i.e. consumes it, then makes a new request), then it would need to make available the original IP address somehow.

Typically, when it’s HTTP all the way down, headers like X-Forwarded-For are used to tell the next proxy upstream the remote IP. Caddy’s reverse_proxy directive does this automatically, for example. If you’re doing the proxying at the TCP level, then headers can’t be used, so the PROXY protocol can be used instead (there’s a Caddy plugin for this; it currently only supports JSON config, but Caddyfile support is coming soon).

Either way, whatever you have in front of Caddy needs to support one of these solutions, otherwise you’ll continue to see just the IP address of the proxy.
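
To illustrate the header approach, an application behind Caddy would see a request roughly like this (path and client IP are made-up placeholders; the exact headers depend on your configuration):

GET /login HTTP/1.1
Host: my-domain.com
X-Forwarded-For: 203.0.113.42
X-Forwarded-Proto: https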

Thank you for these clarifications, they are very useful.
In fact, this reveals that Docker forwards traffic at the TCP level from the host to its networks, so the original remote IP is lost. This is problematic, and a lot of users are complaining about it.

So the solutions are:

  • run Caddy on the Docker host to preserve the HTTP headers, and reverse-proxy the applications (on a bridge network) by their IP (not ideal)
  • run one instance of Caddy on the host and another Caddy on the bridge network to reverse-proxy the applications (by their container names)
  • run Caddy on a bridge network with the PROXY protocol

The third alternative seems the most flexible, but I’m not comfortable with this protocol, and I don’t know its impact on the applications I want to reverse-proxy: do they have to be compatible with it? As far as I can see for the “Mautic” application, the container has to be rebuilt to support the PROXY protocol.

The only things that need to support the PROXY protocol are whatever’s in front of Caddy, and Caddy itself. Basically, it just adds some extra bytes to the front of the TCP connection (before any of the TLS/HTTP data), which the PROXY protocol plugin for Caddy will strip off and parse to override the request’s remote IP. It’s totally transparent to applications behind Caddy.
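
For a concrete picture, the human-readable v1 preamble is a single line prepended to the connection, something like this (addresses and ports made up):

PROXY TCP4 203.0.113.42 172.19.0.2 56324 443\r\n

Everything after that line is the normal TLS/HTTP stream.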

Ok, my setup is:

Web browser requests mydomain.com:443/service1 ==> the router forwards port 443 to port 2828 of the NAS ==> Docker forwards port 2828 to port 443 of Caddy (which is on a bridge network) ==> Caddy routes /service1 to port 80 of service1’s IP

The remote IP is lost at the Docker level. If I simply switch to the PROXY protocol, will it “just work” and will the remote IP be transmitted to service1?

Yeah, that’s the idea, but Docker would need to support the PROXY protocol, and I’m not certain it does? I haven’t tried that out.

Are you using Docker in swarm mode? Cause I think that might be your problem… I’ve never had this issue myself running Docker in non-swarm mode.

No, I don’t. I only have my NAS running one instance of Docker, and all my applications are on a bridge network (My_bridge).
In fact, when I used the Synology embedded reverse proxy to forward the subdomain docker.mydomain.com to Caddy (on the My_bridge network), I got the correct remote IP. But since I started redirecting the mydomain.com traffic directly to Caddy (still on the My_bridge network), I’ve lost the remote IP…

Do you use Caddy on the Docker host or on a bridge network?

I use Caddy as a container on a docker-compose bootstrapped bridge network, but I don’t use Synology or anything like that. Just Docker installed right on Ubuntu.

Ok, I will give the PROXY plugin a try, because it seems that Docker is compatible with the PROXY protocol.
Now I need to translate my Caddyfile into a JSON config file. I’ve seen in the Caddy docs that it can be done with

caddy adapt --config /path/to/Caddyfile --pretty

And after that, I just add the PROXY wrapper before the tls one and it’s done? Or does the translation to JSON need more adaptation to be usable?
Thanks again for your help 🙂

Yeah, that’s about it. You can even use jq (a CLI tool, look it up) to make the modification to the adapted JSON in an automated way:

caddy adapt --config /path/to/Caddyfile | jq '.apps.http.servers.srv0 += {"listener_wrappers":[{"wrapper":"proxy_protocol","timeout":"5s","allow":["10.0.0.0/16"]},{"wrapper":"tls"}]}' > caddy.json

Then you’d run Caddy with caddy.json instead (just override the command used to run the Caddy container).
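
After the jq step, the relevant part of caddy.json should look roughly like this (the allow range must match the network that whatever is in front of Caddy connects from; the tls wrapper placeholder has to come after proxy_protocol so the PROXY header is stripped before TLS is terminated):

"srv0": {
  "listener_wrappers": [
    { "wrapper": "proxy_protocol", "timeout": "5s", "allow": ["10.0.0.0/16"] },
    { "wrapper": "tls" }
  ],
  ...
}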

This would just be temporary, though; v2.3.0 will likely come with support for configuring listener wrappers in the Caddyfile:

https://github.com/caddyserver/caddy/pull/3836

That link is about Docker in swarm mode btw… I’m still not convinced the issue is where you think it is.