Caddy + Cloudflare Tunnel not getting correct remote_ip and client_ip

1. The problem I’m having:

Hello, I’m wondering if there’s a way to get the actual client IP in my setup. I’m running Caddy and Cloudflare Tunnel (cloudflared) in LXC containers on Proxmox (a fresh install after moving Caddy away from Unraid). Each container has its own IP.

I want to restrict certain domains to local-only access (or Tailscale-only, if possible) while leaving others accessible from any IP range. Currently, when accessing sites, the remote_ip and client_ip fields in my logs show my cloudflared LXC container’s IP.

Cloudflared LXC IP: 192.168.86.13

I’ve searched the forums and Google and tried multiple things, but I was never able to solve this. I gave up, and now I’m here asking for help because it’s bugging me.

2. Error messages and/or full log output:

It just confirms what I’m seeing:

2024/11/07 17:21:35.812	INFO	http.log.access.log2	handled request	{"request": {"remote_ip": "192.168.86.13", "remote_port": "52486", "client_ip": "192.168.86.13", "proto": "HTTP/1.1", "method": "GET", "host": "test.domain.com", "uri": "/", "headers": {"Connection": ["keep-alive"], "Dnt": ["1"], "Priority": ["u=0, i"], "Upgrade-Insecure-Requests": ["1"], "Accept-Language": ["en-US,en;q=0.9"], "Cf-Ipcountry": ["US"], "Cf-Visitor": ["{\"scheme\":\"https\"}"], "Sec-Fetch-Mode": ["navigate"], "Sec-Fetch-User": ["?1"], "Cache-Control": ["max-age=0"], "Cf-Warp-Tag-Id": ["f984e815-8ffe-4030-b167-bffba16c5ccd"], "Sec-Ch-Ua": ["\"Chromium\";v=\"130\", \"Google Chrome\";v=\"130\", \"Not?A_Brand\";v=\"99\""], "Cf-Ray": ["8def00251ba03b8c-IAD"], "Sec-Ch-Ua-Platform": ["\"macOS\""], "X-Forwarded-Proto": ["https"], "User-Agent": ["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36"], "Accept": ["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8"], "Sec-Ch-Ua-Mobile": ["?0"], "Sec-Fetch-Dest": ["document"], "Sec-Fetch-Site": ["none"], "X-Forwarded-For": ["173.163.124.234"], "Accept-Encoding": ["gzip, br"], "Cdn-Loop": ["cloudflare; loops=1"], "Cf-Connecting-Ip": ["173.163.124.234"]}, "tls": {"resumed": false, "version": 772, "cipher_suite": 4865, "proto": "", "server_name": "test.domain.com"}}, "bytes_read": 0, "user_id": "", "duration": 0.043235828, "size": 5, "status": 200, "resp_headers": {"Content-Type": ["text/plain; charset=utf-8"], "Server": ["Caddy"], "Alt-Svc": ["h3=\":443\"; ma=2592000"], "Referrer-Policy": ["strict-origin"], "Strict-Transport-Security": ["max-age=31536000; includeSubDomains;"], "X-Content-Type-Options": ["nosniff"], "X-Frame-Options": ["SAMEORIGIN"], "X-Robots-Tag": ["noindex, nofollow, nosnippet, noarchive"]}}

3. Caddy version:

Running Caddy v2.8.4

4. How I installed and ran Caddy:

a. System environment:

LXC container running Debian 12.

b. Command:

Ran using the community-scripts ProxmoxVE script (ProxmoxVE/install/caddy-install.sh at main · community-scripts/ProxmoxVE on GitHub); the commands of interest are below:

curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' >/etc/apt/sources.list.d/caddy-stable.list
apt-get update
apt-get install -y caddy

c. Service/unit/compose file:

[Service]
Type=notify
User=caddy
Group=caddy
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force
TimeoutStopSec=5s
LimitNOFILE=1048576
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

d. My complete Caddy config:

{
	email caddy@domain.com
	servers {
		trusted_proxies cloudflare
		client_ip_headers CF-Connecting-IP
	}
}

(security_headers) {
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains;"
		X-Frame-Options "SAMEORIGIN"
		X-Content-Type-Options "nosniff"
		Referrer-Policy "strict-origin"
		X-Robots-Tag "noindex, nofollow, nosnippet, noarchive"
	}
}

(log_settings) {
	log {
		output file /var/log/caddy/caddylog.log {
			roll_size 10MiB
			roll_keep 2
			roll_keep_for 72h
		}
		level DEBUG
		format console
	}
}

(cloudflare) {
	tls {
		dns cloudflare <token>
	}
}

(simple_lb) {
	lb_policy first
	lb_try_duration 5s
	lb_try_interval 250ms

	fail_duration 10s
	max_fails 1
	unhealthy_status 5xx
	unhealthy_latency 5s
	unhealthy_request_count 1
}

:80 {
	import log_settings
	root * /usr/share/caddy
	file_server
}

*.domain.com {
	import cloudflare
	import log_settings
	import security_headers

	@test host test.domain.com
	handle @test {
		respond "Works."
	}

	handle {
		abort
	}
}

5. Links to relevant resources:

I searched all over for people with similar-ish problems (including the most recent help topic), but they rarely had the same setup; I tried their working solutions, but nothing has worked for me so far.

Is your problem that remote_ip is not actually from Cloudflare’s ranges because of your networking setup? Then you could just use trusted_proxies static private_ranges, if you know there’s no way for requests to reach your server other than through Cloudflare (or your local network, which shouldn’t be trying to spoof Cf-Connecting-Ip anyway).
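
For reference, a minimal sketch of what that global options change could look like (keeping the client_ip_headers line from your existing config; adjust as needed):

{
	servers {
		# trust any proxy with a private source address (the cloudflared
		# container and anything else on the LAN), so Caddy honors the
		# forwarded client IP headers it passes along
		trusted_proxies static private_ranges
		client_ip_headers CF-Connecting-IP
	}
}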


Ah, I see, okay, let me give this a go and see. That seems like such an easy solution (one that I thought I’d tried, but maybe not).

So it does work in getting client_ip to be correct. But remote_ip, if the request originates from outside the network, is the Cloudflare Tunnel container’s internal IP. So would I just set up local-only routes matching on client_ip? Just wondering if this is a “normal” config on my end.

I don’t understand the question. Can you elaborate?


Maybe poorly worded, but I think your suggestion worked: with this config I’m able to do local-only on the domains/paths I want, which was the end goal. I’m leaving the config below for anyone else who wants to do something similar, or in case you have any suggestions/improvements if I’m doing something wrong:

{
	email caddy@domain.com
	servers {
		trusted_proxies static private_ranges
	}
}

(security_headers) {
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains;"
		X-Frame-Options "SAMEORIGIN"
		X-Content-Type-Options "nosniff"
		Referrer-Policy "strict-origin"
		X-Robots-Tag "noindex, nofollow, nosnippet, noarchive"
	}
}

(log_settings) {
	log {
		output file /var/log/caddy/caddylog.log {
			roll_size 10MiB
			roll_keep 2
			roll_keep_for 72h
		}
		level INFO
		format console
	}
}

(cloudflare) {
	tls {
		dns cloudflare <api_token>
	}
}

(simple_lb) {
	lb_policy first
	lb_try_duration 5s
	lb_try_interval 250ms

	fail_duration 10s
	max_fails 1
	unhealthy_status 5xx
	unhealthy_latency 5s
	unhealthy_request_count 1
}

(local_only) {
	client_ip private_ranges
}

:80 {
	import log_settings
	root * /usr/share/caddy
	file_server
}

*.domain.com {
	import cloudflare
	import log_settings
	import security_headers

	@immich host photos.domain.com
	handle @immich {
		reverse_proxy 192.168.86.11:2283
	}

	@test {
		host test.domain.com
	}
	handle @test {
		respond "test"
	}

	@uptimekuma host up.domain.com
	handle @uptimekuma {
		@uptimenotblacklisted {
			not {
				path /dashboard* /manage*
			}
		}
		reverse_proxy @uptimenotblacklisted 192.168.86.11:3001
		@uptimelocal {
			import local_only
		}
		reverse_proxy @uptimelocal 192.168.86.11:3001
	}

	handle {
		abort
	}
}

I’d write this stuff like this:

	@uptimekuma host up.domain.com
	handle @uptimekuma {
		@uptimenotblacklisted not path /dashboard* /manage*
		handle @uptimenotblacklisted {
			reverse_proxy 192.168.86.11:3001
		}
		@uptimelocal client_ip private_ranges
		handle @uptimelocal {
			reverse_proxy 192.168.86.11:3001
		}
		handle {
			abort
		}
	}

The reason is that once you’re inside a handle, other handle blocks at the same nesting level won’t run, so your existing abort wasn’t getting run, and non-matched requests inside your @uptimekuma handle would go unhandled (empty 200 status response) instead of getting aborted or whatever. Using another set of handles inside lets you handle the fallback for this domain cleanly. You could do something other than abort if you prefer (an error page, maybe).
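
For instance, if you’d rather return an explicit error than drop the connection, the inner fallback could look something like this (the 403 status is just an example):

		handle {
			# return an explicit 403 instead of dropping the connection
			error 403
		}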


Ah, thank you for that. That explains why I was getting a CF error for unhandled domains rather than a blank page.

How would I make it so that all unhandled domains get aborted? I thought that’s what I was doing, but I guess not.


After marking the answer as solved :sweat_smile:, I ran into an issue and realized that the config only worked while AdGuard Home was running. I’ve turned AdGuard Home off, as it’s currently causing other issues within my network.

With the config I posted above and my current network conditions, my access logs show:

  • remote_ip is set to my Cloudflare Tunnel LXC container’s IP (192.168.86.13)
  • client_ip is set to my external IP address
    • Tested with an iPhone on cellular and it is indeed the external IP address, but it should be the local address when I’m within the network

So again I’m back to square one, unable to get either of these IPs to show my local client address (192.168.86.xx).

Any tips on what I should do to get client_ip to be the local address? Also, shouldn’t remote_ip be set to the actual remote IP address rather than my CF Tunnel container’s?

Yes, that is what you were doing with your handle { abort }. If CF sees the aborted connection, then yes, it would throw its own error page because it doesn’t know what else to do.

My suggestion above is only to adjust your up.domain.com, not any handling for other domains.
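
In other words, the overall shape stays roughly what you already have (sketch only, details elided):

*.domain.com {
	@uptimekuma host up.domain.com
	handle @uptimekuma {
		# per-path matchers, handles, and this domain's own fallback go here
	}

	# sibling fallback: any request to *.domain.com not claimed by a handle above is aborted
	handle {
		abort
	}
}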

Uh, that means connections from inside your network are still going through a TCP-layer proxy before they reach Caddy. It’s not a Caddy config issue; it’s a networking issue. I don’t really understand your setup that well, so I’m not sure what to suggest. I don’t use CF Tunnels or LXC containers.


Maybe I can try and explain my setup a little bit more if that can help with any suggestions.

  • I’m running Caddy and the CF tunnel in Proxmox LXC containers (pretty much Debian). I only mention LXC so you have the full picture; I don’t think it has any bearing on this issue.
    • Both get their own IPs on the network and are essentially directly connected to it.
  • No ports are open on my router. Everything runs through the CF tunnel. The routers are TP-Link Decos connected to a modem in bridge mode.
  • The CF tunnel is configured to send traffic to Caddy, which then forwards the request to wherever it’s supposed to go.
  • Previously I had AdGuard Home running as well, but that is offline for the time being.
  • I do have Tailscale running on Proxmox (I can try disabling it and see if that solves any issues).

Everything works fine. The only issue is that the IPs reported in the Caddy access logs are not the actual client/remote IPs; they’re my external IP and the IP of the CF tunnel container.

Which external IP do you mean? The server’s or the client’s?

@hmoffatt The client’s. Even when I’m within my local network, I see my own external IP address rather than the client’s actual local address.

If you’re accessing a service from inside your own network, it’s still going to go out to Cloudflare using your external IP and back via the tunnel, unless you are using split DNS to short-circuit this.
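
As a hypothetical illustration, with dnsmasq-style split DNS (AdGuard Home’s DNS rewrites can do the equivalent) you could answer local queries for the domain with Caddy’s LAN address instead of Cloudflare’s; the 192.168.86.12 below is just a placeholder for your Caddy container’s IP:

# Resolve domain.com and all its subdomains to Caddy's LAN IP for clients inside
# the network, so their traffic never leaves via the tunnel and keeps its
# 192.168.86.x source address.
address=/domain.com/192.168.86.12

Since the Caddyfile already obtains its wildcard certificate via the dns cloudflare challenge, local clients hitting Caddy directly would still get valid TLS.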


Oh god, duh! That makes a lot of sense. Thanks for the clarification. We can close this out.

The original solution by @francislavoie is accurate.

