RemoteIP returns tunnel's IP despite trusted_proxies

1. The problem I’m having:

First off here’s my setup:

  • I've got a Docker container running Caddy on my Unraid server.
  • The home server itself sits behind a router with a dynamic IP, but:
  • An SSH tunnel with TCP passthrough runs from the same Docker network up to a VPS, which has a static IP.

This works: I can connect to my home server and my minimal test page is displayed.

What doesn’t work is differentiating between inside traffic and outside traffic.

My test page handles internal IPs before external ones, but it always reports the same IP: 172.18.0.3, the SSH tunnel's address on the internal Docker network.
Despite the trusted_proxies global option, all traffic is treated as internal.

Why am I not seeing the client's "real" external IP?
What am I misunderstanding about this setup?

2. Error messages and/or full log output:

There are no errors:

{"level":"info","ts":1701436697.6766531,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
{"level":"info","ts":1701436697.6783764,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1701436697.6785798,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1701436697.678593,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1701436697.6786585,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0003a3b00"}
{"level":"info","ts":1701436697.678951,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1701436697.6792183,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1701436697.6802673,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1701436697.6807241,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1701436697.6807952,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["example.com"]}
{"level":"info","ts":1701436697.6858954,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1701436697.685914,"msg":"serving initial configuration"}
{"level":"info","ts":1701436697.7054136,"logger":"tls","msg":"finished cleaning storage units"}

3. Caddy version: v2.7.5

4. How I installed and ran Caddy:

Unraid's App Store, namely the "official" CaddyV2 image.
I made a separate Docker network for it and the autossh container.

a. System environment:

OS: unraid 6.12.4
Environment: Docker

b. Command:

The autossh container runs the following script:


```bash
autossh \
  -M 0 -N \
  -o StrictHostKeyChecking=no \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -i /id_rsa \
  -R :80:caddy:80 \
  -R :443:caddy:443 \
  -p 24720 \
  ${USER}@${SERVER}
```
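One note on the `-R :80:caddy:80` / `-R :443:caddy:443` remote forwards: for sshd on the VPS to bind them on its public interface rather than only on loopback, OpenSSH's GatewayPorts option must be enabled. Since the tunnel already works here this is presumably in place, but for anyone reproducing the setup, the relevant sshd setting looks like this:

```bash
# /etc/ssh/sshd_config on the VPS (OpenSSH)
# Allow clients to bind remote forwards (-R) on non-loopback addresses;
# the default ("no") restricts remote forwards to 127.0.0.1.
GatewayPorts clientspecified
```

Restart sshd after changing it.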

c. Service/unit/compose file:

Unraid doesn't seem to provide a way to export the Docker template…

d. My complete Caddy config:

```
{
	email <email>
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
	servers {
		trusted_proxies static private_ranges
	}
}

example.com {
	templates
	@internal {
		client_ip private_ranges
	}
	handle @internal {
		respond "Hello Inner World: {{.RemoteIP}}"
	}
	respond "Hello Outer World: {{.RemoteIP}}"
}
```

If the tunnel is essentially creating a new connection, then the source IP address on the TCP packets will be the tunnel’s instead of the original client’s.

This isn’t something Caddy can resolve on its own.

If you’re effectively doing TCP proxying, the typical solution is to prepend the TCP traffic with PROXY protocol header bytes, which the upstream server can parse to read the original client IP.
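On the receiving side, Caddy can parse PROXY protocol headers via the `proxy_protocol` listener wrapper. A minimal sketch of the global options, assuming the tunnel's Docker subnet 172.18.0.0/16 is the only trusted sender (note: the sending side must actually prepend the header, which a plain `ssh -R` forward does not do):

```
{
	servers {
		listener_wrappers {
			# parse the PROXY protocol header before the TLS handshake
			proxy_protocol {
				timeout 2s
				allow 172.18.0.0/16
			}
			tls
		}
	}
}
```

With this in place, `{client_ip}` and the `client_ip` matcher would see the original client address carried in the header.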

Okay, thanks for clearing that up…
I hadn't realized the tunnel would completely replace the source address; I'd thought there was a sort of history attached to it, from which the trusted_proxies entries could be filtered out.

Just to be clear: HAProxy would need to be installed on the VPS for this to work?


Edit: I've been bashing my head against this for too long and missed the obvious solution: since all external traffic will always carry the tunnel's IP, I can simply avoid `private_ranges` and define matcher ranges for my internal net that don't include that specific IP! *headdesk*
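As a sketch of that workaround, assuming the LAN uses 192.168.0.0/16 (an assumption; substitute your actual subnets): the tunnel's 172.18.0.3 falls in 172.16.0.0/12, so listing only the real LAN ranges in the matcher excludes it:

```
@internal {
	# assumption: the LAN is 192.168.0.0/16; the tunnel's
	# 172.18.0.3 (inside 172.16.0.0/12) is deliberately NOT listed,
	# so tunneled external traffic falls through to the outer handler
	client_ip 192.168.0.0/16
}
```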

To clarify, trusted_proxies is only useful when you have a trusted downstream HTTP proxy that manipulates the X-Forwarded-For header. It's not useful when you have a TCP proxy or tunnel in front.

No, HAProxy is not relevant, it’s just that they invented the PROXY protocol spec.

Keep in mind that the X-Forwarded-For sent to your app by reverse_proxy will still be “wrong” because it’ll just be the IP address on the TCP packets that Caddy received.

True, but at least I'll be able to differentiate between inside and outside requests until I've actually wrapped my head around getting the PROXY protocol to run (maybe even without installing another proxy server?).

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.