I cannot make trusted_proxies work with client_ip, remote_ip, or X-Forwarded-For

1. The problem I’m having:

I am banning myself with my bouncer, because what gets reported to CrowdSec is my Docker virtual network gateway IP instead of the real client IP:

cscli alerts list
│ 1  │ Ip:172.50.0.1 │ crowdsecurity/http-bad-user-agent    │ US      │ 21928 T-MOBILE-AS21928 │ ban:1     │ 2024-03-09 07:27:09.912528318 +0000 UTC │
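While debugging, the self-ban can be lifted manually; a minimal sketch, assuming cscli runs inside the crowdsec container defined in the compose file below:

docker-compose exec crowdsec cscli decisions delete --ip 172.50.0.1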

I have a stack with the caddy-crowdsec-bouncer and a dashboard. The Caddy container is built with the github.com/WeidiDeng/caddy-cloudflare-ip module, and Caddy is configured with trusted_proxies cloudflare.

Cloudflare is delivering the real IP in the Cf-Connecting-Ip header, but I don’t know how to pass it through to CrowdSec correctly.

I am not an advanced user and I hope this isn’t a bad topic. I did try to search for this, but if there are suggestions other than using “trusted_proxies”, I can’t get very far with them without some kind of example.

client_ip":"172.50.0.1 - all of these are from docker virtual network gateway
remote_ip":"172.50.0.1
X-Forwarded-For":["172.50.0.1"]
Cf-Connecting-Ip":["5.5.5.5."]` - real ip

2. Error messages and/or full log output:

line: {"level":"debug","ts":1710064973.2757668,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"192.168.10.20:5055","duration":2.838332639,"request":{"remote_ip":"172.50.0.1","remote_port":"56586","client_ip":"172.50.0.1","proto":"HTTP/2.0","method":"GET","host":"media.mydomain,.com","uri":"/api/v1/movie/1044302","headers":{"Sec-Fetch-Site":["same-origin"],"X-Forwarded-For":["172.50.0.1"],"Cf-Iplongitude":["0"],"X-Forwarded-Proto":["https"],"Accept-Language":["en-US,en;q=0.5"],"Cf-Visitor":["{\"scheme\":\"https\"}"],"Cdn-Loop":["cloudflare"],"Accept-Encoding":["gzip, br"],"Cf-Timezone":["Europe/MYCITY"],"Sec-Fetch-Dest":["empty"],"Cf-Ray":["862278b0188f0bb4-AMS"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Cf-Iplatitude":["0"],"Referer":["https://media.mydomain.com/"],"Cf-Ipcontinent":["EU"],"Accept":["application/json, text/plain, */*"],"Cookie":[],"Cf-Region":["mycity"],"Cf-Postal-Code":["00000"],"X-Forwarded-Host":["media.mydomain.com"],"Cf-Region-Code":["MY"],"Cf-Ipcity":["MYCITY"],"Cf-Ipcountry":["MY"],"Cf-Connecting-Ip":["5.5.5.5"],"Sec-Fetch-Mode":["cors"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"media.mydomain.com"}},"headers":{"X-Powered-By":["Express"],"Content-Type":["application/json; charset=utf-8"],"Content-Length":["39408"],"Etag":["W/\"99f0-kpMttV9T+0t5WwlerQWxCFsFHwI\""],"Date":["Sun, 10 Mar 2024 10:02:53 GMT"],"Connection":["keep-alive"],"Keep-Alive":["timeout=5"]},"status":200}

The above is taken from Caddy’s access log, so I think I may have misconfigured something really badly, since everything I have read says that Caddy should receive the client_ip without problems.

3. Caddy version:

v2.7.6 h1:w0NymbG2m9PcvKWsrXO6EEkY9Ru4FJK8uQbYcev1p3A=

4. How I installed and ran Caddy:

docker-compose up -d (on the entire caddy, crowdsec-bouncer, dashboard stack). The DNS is configured with Cloudflare.

a. System environment:

Docker, Linux DS220 4.4.302+ #69057 SMP Fri Jan 12 17:02:28 CST 2024 x86_64

b. Command:


# Builder stage (assumed: the standard caddy:builder image for the xcaddy step)
FROM caddy:builder AS builder

RUN xcaddy build \
    # dns.providers.cloudflare wraps the provider implementation as a Caddy module.
    --with github.com/caddy-dns/cloudflare \
    # http.ip_sources.cloudflare provides a range of IP address prefixes (CIDRs) retrieved from cloudflare.
    --with github.com/WeidiDeng/caddy-cloudflare-ip \
    # CrowdSec Bouncer - HTTP handler and Layer4 matcher to decide if a request or connection is allowed or not.
    --with github.com/hslatman/caddy-crowdsec-bouncer/http

FROM caddy:latest

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
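To double-check that the custom modules actually made it into the binary, the module list can be inspected from the running container (a quick check, assuming the compose service names below):

docker-compose exec caddy caddy list-modules | grep -E 'crowdsec|cloudflare'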

c. Service/unit/compose file:

version: "3.9"

services:
  crowdsec:
    hostname: crowdsec
    image: crowdsecurity/crowdsec:latest
    environment:
      # https://hub.crowdsec.net/author/crowdsecurity/collections/caddy
      COLLECTIONS: "crowdsecurity/caddy"
      GID: "${PGID}"
      DISABLE_ONLINE_API: "true"
    depends_on:
      - 'caddy'
    ports:
      - 8080:8080 # exposes a REST API for bouncers, cscli and communication between crowdsec agent and local api
      - 6060:6060 #exposes prometheus metrics on /metrics and pprof debugging metrics on /debug
    volumes:
      - ./crowdsec/acquis.yaml:/etc/crowdsec/acquis.yaml # log acquisition config (see the sketch after this compose file)
      # Named volumes for the CrowdSec database and configuration
      - crowdsec-db:/var/lib/crowdsec/data/
      - crowdsec-config:/etc/crowdsec/
      # Caddy log directory mounted so CrowdSec can read the access log
      - ./caddy/logs:/var/log/caddy
    networks:
      caddy_crowdsec:
        ipv4_address: 172.50.0.5

  # Reverse-proxy-waf definition. Service definition based on docker compose: https://hub.docker.com/_/caddy
  # Tutorial by Juan Pablo Tosso: https://medium.com/@jptosso/oss-waf-stack-using-coraza-caddy-and-elastic-3a715dcbf2f2
  caddy:
    hostname: caddy
    build:
      context: ./caddy
      dockerfile: Dockerfile #Custom build
    ports:
      #  - "1801:80"
      - "1443:443"
    environment:
      - $CLOUDFLARE_API_TOKEN
      - CROWDSEC_LOCAL_API_KEY=${CROWDSEC_LOCAL_API_KEY}
    volumes:
      # Caddy configuration.
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile # Required. Needs to be an extension-less file NOT a directory
      - ./caddy/caddy_data:/data # Optional, house for certs. Caddy adds its own /caddy/ directory
      - ./caddy/caddy_config:/config # Caddy adds its own /caddy/ directory
      - ./caddy/logs:/var/log/caddy # check for /access.log
      - ./docker/config.json:/etc/caddy/config.json
    networks:
      caddy_crowdsec:
        ipv4_address: 172.50.0.6

  dashboard:
    # we're using a custom Dockerfile so that Metabase starts with pre-configured dashboards
    build: ./crowdsec/dashboard
    ports:
      - 3333:3000
    environment:
      MB_DB_FILE: /data/metabase.db
      MGID: "${PGID}"
    depends_on:
      - 'crowdsec'
    volumes:
      # bind crowdsec-db for metabase dashboard
      - crowdsec-db:/metabase-data/
    networks:
      caddy_crowdsec:
        ipv4_address: 172.50.0.9

volumes:
  crowdsec-db:
  crowdsec-config:


networks:
  caddy_crowdsec:
    ipam:
      driver: default
      config:
        - subnet: 172.50.0.0/24
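For reference, the ./crowdsec/acquis.yaml mounted above is what tells CrowdSec which logs to read. A minimal sketch of that acquisition file, assuming the parsers from the crowdsecurity/caddy collection (the path matches the /var/log/caddy mount):

# acquis.yaml (sketch): read Caddy's access logs and parse them as type caddy
filenames:
  - /var/log/caddy/*.log
labels:
  type: caddy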

d. My complete Caddy config:

{
	debug
	order crowdsec first
	crowdsec {
		api_url http://crowdsec:8080 # the URL where your CrowdSec LAPI can be reached, somewhere on your network/system
		api_key {env.CROWDSEC_LOCAL_API_KEY} # the secret API key for the bouncer to authenticate against LAPI
	}
	log {
		level DEBUG
		output file /var/log/caddy/access.log {
			roll_size 5mb
			roll_keep 10
			roll_keep_for 720h
		}
	}
	servers {
		# all traffic comes from the Cloudflare CDN
		trusted_proxies cloudflare
	}
	acme_dns cloudflare {env.CLOUDFLARE_API_TOKEN}
}

localhost {
    route {
        crowdsec
        respond "Allowed by CrowdSec!"
    }
}

*.mydomain.com {
	@plex host plex.mydomain.com
	handle @plex {
		crowdsec
		reverse_proxy 192.168.10.20:32400
	}

	@media host media.mydomain.com
	handle @media {
		crowdsec
		reverse_proxy 192.168.10.20:5055
	}

	@photos host photos.mydomain.com
	handle @photos {
		crowdsec
		reverse_proxy 192.168.10.20:2342
	}

	# Fallback for otherwise unhandled domains
	handle {
		abort
	}
}
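For context, {env.CROWDSEC_LOCAL_API_KEY} is the API key the bouncer uses to authenticate against the CrowdSec LAPI. Assuming the compose service names above, it can be generated with something like:

docker-compose exec crowdsec cscli bouncers add caddy-bouncer

The printed key then goes wherever ${CROWDSEC_LOCAL_API_KEY} is set (e.g. the .env file read by the compose stack).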

5. Links to relevant resources:

I tried adding transform rules, but didn’t see any difference. I am sorry in advance that I have redacted the domain and IP addresses.

The CrowdSec bouncer required me to bind-mount config.json; I have pasted it below.

{
  "logging": {
    "logs": {
      "default": {
        "level": "DEBUG",
        "writer": {
          "output": "stderr"
        }
      },
      "access": {
        "level": "DEBUG",
        "writer": {
          "output": "file",
          "filename": "/var/log/caddy/access.log"
        },
        "encoder": {
          "format": "formatted",
          "template": "{common_log} \"{request>headers>Referer>[0]}\" \"{request>headers>User-Agent>[0]}\""
        },
        "include": ["http.log.access.access"]
      }
    }
  },
  "apps": {
    "crowdsec": {
      "api_key": "api_key",
      "api_url": "http://crowdsec:8080/",
      "ticker_interval": "10s",
      "enable_streaming": true
    },
    "layer4": {
      "servers": {
        "https_proxy": {
          "listen": ["caddy:8443"],
          "routes": [
            {
              "match": [
                {
                  "crowdsec": {},
                  "tls": {}
                }
              ],
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [
                    {
                      "dial": ["caddy:1443"]
                    }
                  ]
                }
              ]
            }
          ]
        }
      }
    },
    "http": {
      "http_port": 1801,
      "https_port": 1443,
      "servers": {
        "server1": {
          "listen": ["0.0.0.0:1443"],
          "routes": [
            {
              "group": "temp-example-group",
              "match": [
                {
                  "path": ["/*"]
                }
              ],
              "handle": [
                {
                  "handler": "crowdsec"
                },
                {
                  "handler": "static_response",
                  "status_code": "200",
                  "body": "Hello World!"
                },
                {
                  "handler": "headers",
                  "response": {
                    "set": {
                      "Server": ["caddy-cs-bouncer-example-server"]
                    }
                  }
                }
              ]
            }
          ],
          "logs": {
            "default_logger_name": "access"
          }
        }
      }
    },
    "tls": {
      "automation": {
        "policies": [
          {
            "subjects": ["caddy", "localhost"],
            "issuer": {
              "module": "internal"
            },
            "on_demand": true
          }
        ]
      }
    }
  }
}
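If it helps, that JSON can be sanity-checked before mounting it; a quick check, assuming the referenced modules (layer4, the crowdsec handler, and the formatted log encoder) are compiled into the binary:

docker-compose exec caddy caddy validate --config /etc/caddy/config.json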

Docker sometimes has a userland proxy enabled, which sits at the TCP layer in front of the containers and causes all connections to look as if they came from the Docker gateway itself.

The problem isn’t with your Caddy config, it’s with Docker.
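A minimal sketch of turning it off, assuming a standard Docker Engine setup where the daemon config lives at /etc/docker/daemon.json (the path and restart step may differ on Synology DSM):

{
  "userland-proxy": false
}

Then restart the Docker daemon (e.g. systemctl restart docker) and recreate the containers.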


I really appreciate your swift response; it was spot on. I have disabled the userland proxy and also had to re-enable prerouting, as nothing would work after the userland proxy had been disabled.

I have made it work; I now have a functional Caddy bouncer with CrowdSec. Thank you very much!
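In case it helps anyone else, a quick way to confirm the fix (assuming the same compose setup) is to list the alerts again:

docker-compose exec crowdsec cscli alerts list

The entries should now show the real client IPs instead of 172.50.0.1.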

