Wildcard Multiple Reverse Proxies

1. Caddy version (caddy version):

v2.1.0-beta.1 => /src/caddy (built 2.1.0-beta.1 with Cloudflare DNS support)

2. How I run Caddy:

a. System environment:

Docker on Debian Buster

b. Command:

docker-compose up

c. Service/unit/compose file:

version: "3.2"

networks:
  default:
    external:
      name: proxy

services:
  caddy:
    container_name: caddy
    image: caddy:2.1.0-beta.1
    build:
      context: .
      dockerfile: Dockerfile-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ../volumes/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ../volumes/caddy/cloudflare-origin-pull-ca.pem:/etc/caddy/cloudflare-origin-pull-ca.pem:ro
      - ../volumes/caddy/config:/config
      - ../volumes/caddy/data:/data
      - ../volumes/caddy/logs:/logs
      # php-fpm roots
      - ../volumes/caddy/sites:/sites
      - ../volumes/nextcloud/html:/php-fpm-root/nextcloud
      - ../volumes/nextcloud/apps:/php-fpm-root/nextcloud/custom_apps
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"

Dockerfile-caddy:

FROM caddy:2.1.0-beta.1-builder AS builder

RUN caddy-builder \
    github.com/caddy-dns/cloudflare

FROM caddy:2.1.0-beta.1

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

d. My complete Caddyfile or JSON config:

# /etc/caddy/Caddyfile

# Global config
{
    # Let's Encrypt
    email email@domain.com
    
    # No admin
    admin off
}

(webconf) {
    # Add zstd and gzip compression to requests
    encode zstd gzip
    
    # Remove headers (leading "-")
    header {
        -x-powered-by
    }
    
    # Use Cloudflare DNS for Let's Encrypt
    tls {
        dns cloudflare supersecretcloudflareapikey
    }
}

(external_matcher) {
    @external {
        not remote_ip 192.168.0.0/16
    }
}

# Only allow internal (LAN) connections
(internal_only) {
    import external_matcher
    respond @external "Access denied. This is an internal website." 403 {
        close
    }
}

# Access log
(access_log) {
    log {
        output file /logs/{args.0}.access.log
        format single_field common_log
    }
}

# A basic reverse proxy internal website
(website_internal_reverse_proxy) {
    @{args.0} host {args.0}.internal.example.com
    reverse_proxy @{args.0} {args.1}
}

*.internal.example.com {
    import internal_only
    import webconf
    import access_log internal.example.com

    import website_internal_reverse_proxy pass bitwarden:80
    import website_internal_reverse_proxy reader miniflux:80
}
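
For reference, with the snippet arguments substituted, the two import lines at the end expand to roughly this (hand-expanded sketch, not Caddy's actual output):

```
@pass host pass.internal.example.com
reverse_proxy @pass bitwarden:80

@reader host reader.internal.example.com
reverse_proxy @reader miniflux:80
```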

3. The problem I’m having:

The wildcard DNS certificate is generated correctly, and I can visit one of the two internal sites. For this example I'll visit pass.internal.example.com first, but you could start with reader and get the same behavior. Visiting pass.internal.example.com loads the page just fine, with HTTP status 200 for all resources. Navigating to reader.internal.example.com, however, returns a 403 for both of its resources (the root / and the favicon).

Here’s the weird part. When you ctrl+shift+r reload the page (a hard refresh that bypasses the cache, I think?), reader.internal.example.com loads just fine, with all resources returning 200. You can refresh it normally as often as you want and it will keep working. BUT pass.internal.example.com will now start getting 403 errors (either from a normal refresh, or from JavaScript doing background updates). To get pass.internal.example.com working again, just hard refresh it; but then reader.internal.example.com stops working, and so on and so forth.

It’s almost like Caddy locks the user to the same matched reverse proxy until a new session happens or something? Not sure.

4. Error messages and/or full log output:

There are no errors normally, but I did run in debug mode and got this:

{
  "level": "error",
  "ts": 1592665352.2328944,
  "logger": "http.log.access.log15",
  "msg": "handled request",
  "request": {
    "method": "GET",
    "uri": "/",
    "proto": "HTTP/2.0",
    "remote_addr": "192.168.2.221:37634",
    "host": "reader.internal.example.com",
    "headers": {
      "User-Agent": [
        "Mozilla/5.0 (X11; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
      ],
      "Accept": [
        "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
      ],
      "Cache-Control": [
        "no-cache"
      ],
      "Te": [
        "trailers"
      ],
      "Accept-Language": [
        "en-US,en;q=0.5"
      ],
      "Accept-Encoding": [
        "gzip, deflate, br"
      ],
      "Dnt": [
        "1"
      ],
      "Cookie": [
        "Auth-Type=http; Auth-Token=REDACTED"
      ],
      "Upgrade-Insecure-Requests": [
        "1"
      ],
      "Pragma": [
        "no-cache"
      ]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "ciphersuite": 4865,
      "proto": "h2",
      "proto_mutual": true,
      "server_name": "reader.internal.example.com"
    }
  },
  "common_log": "192.168.2.221 - - [20/Jun/2020:15:02:32 +0000] \"GET / HTTP/2.0\" 403 0",
  "duration": 2.5682e-05,
  "size": 0,
  "status": 403,
  "resp_headers": {
    "Server": [
      "Caddy"
    ]
  }
}
{
  "level": "error",
  "ts": 1592665352.3344145,
  "logger": "http.log.access.log15",
  "msg": "handled request",
  "request": {
    "method": "GET",
    "uri": "/favicon.ico",
    "proto": "HTTP/2.0",
    "remote_addr": "192.168.2.221:37634",
    "host": "reader.internal.example.com",
    "headers": {
      "Cache-Control": [
        "no-cache"
      ],
      "Te": [
        "trailers"
      ],
      "Accept": [
        "image/webp,*/*"
      ],
      "Accept-Language": [
        "en-US,en;q=0.5"
      ],
      "Dnt": [
        "1"
      ],
      "Cookie": [
        "Auth-Type=http; Auth-Token=REDACTED"
      ],
      "Pragma": [
        "no-cache"
      ],
      "User-Agent": [
        "Mozilla/5.0 (X11; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
      ],
      "Accept-Encoding": [
        "gzip, deflate, br"
      ]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "ciphersuite": 4865,
      "proto": "h2",
      "proto_mutual": true,
      "server_name": "reader.internal.example.com"
    }
  },
  "common_log": "192.168.2.221 - - [20/Jun/2020:15:02:32 +0000] \"GET /favicon.ico HTTP/2.0\" 403 0",
  "duration": 2.3007e-05,
  "size": 0,
  "status": 403,
  "resp_headers": {
    "Server": [
      "Caddy"
    ]
  }
}

5. What I already tried:

I have tried this in both Firefox and Chrome; they behave the same way. I’ve also tried it in multiple tabs, and tried simply changing the URL from one site to the other; the result is the same.

I’ve also tried making another endpoint, pass2.internal.example.com, that points to the same reverse proxy backend as pass (bitwarden:80), and it fails the same way reader does.

I tried using handlers as well (wrapped around the reverse_proxy directive), but that didn’t work out either. I’ve also been looking for any other way to do multiple reverse proxies under one wildcard certificate, but haven’t found any other methods of doing it.

6. Links to relevant resources:

Figured I was doing exactly this, maybe something changed in those couple of months?

Hi @PhasecoreX,

This might be some kind of bug.

Can you break it down to a really, really simple Caddyfile - remove all imports, have just the matchers and reverse proxies hard-coded in - and tell us if you can reproduce the issue with that? The simpler the better.

While making it as simple as possible, I ended up figuring out what was causing the issue (but not why, as it makes no sense to me):

# /etc/caddy/Caddyfile

# External site
site1.example.com {
    tls {
        dns cloudflare supersecretcloudflareapikey
        
        ######################## Here is the issue ########################
        client_auth {
            mode require_and_verify
            trusted_ca_cert_file /etc/caddy/cloudflare-origin-pull-ca.pem
        }
        ###################################################################
    }
    
    respond "Hello, world!"
}

# Internal sites
*.internal.example.com {
    tls {
        dns cloudflare supersecretcloudflareapikey
    }
    
    @external {
        not remote_ip 192.168.0.0/16
    }
    respond @external "Access denied. This is an internal website." 403 {
        close
    }
    
    @pass host pass.internal.example.com
    reverse_proxy @pass bitwarden:80
    
    @reader host reader.internal.example.com
    reverse_proxy @reader miniflux:80
}

If you run the above config, all of what I said in the original post holds true:

  • Going to either of the internal sites first works, then the other gets 403 errors.
  • Doing a hard refresh on the non-working internal site will make it work, but make the first have 403 errors.

If you comment out the client_auth section in the unrelated external site, both internal sites work as expected. You can go to either, no more 403 errors.

I have no idea why this is the case.

I suspect I know what is happening, but I want to be sure.

What happens if you remove the respond @external directive? Do you get the exact same results (including same logs)?

Removing

    @external {
        not remote_ip 192.168.0.0/16
    }
    respond @external "Access denied. This is an internal website." 403 {
        close
    }

still results in the same thing happening.

Here’s the new config, still broken:

# /etc/caddy/Caddyfile

# External site
site1.example.com {
    tls {
        dns cloudflare supersecretcloudflareapikey
        
        ######################## Here is the issue ########################
        client_auth {
            mode require_and_verify
            trusted_ca_cert_file /etc/caddy/cloudflare-origin-pull-ca.pem
        }
        ###################################################################
    }
    
    respond "Hello, world!"
}

# Internal sites
*.internal.example.com {
    tls {
        dns cloudflare supersecretcloudflareapikey
    }
    
    @pass host pass.internal.example.com
    reverse_proxy @pass bitwarden:80
    
    @reader host reader.internal.example.com
    reverse_proxy @reader miniflux:80
}

Great - so I bet your 403 is coming from here:

Make sure that your TLS ServerName and HTTP Host header are the same. If you make a connection to one site, then you can’t reuse it to make a request to another site because it could result in authentication bypass vulnerability.

But you only pasted access logs so it’s impossible to know without the rest of the logs.
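
The comparison described above can be sketched roughly like this in Go. This is not Caddy's actual code, and sniMatchesHost is a hypothetical helper name; it only illustrates why a connection negotiated for one hostname is rejected when reused for another:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// sniMatchesHost reports whether a TLS ServerName (SNI) matches the HTTP
// Host header, ignoring case and any port in the Host value. A simplified
// sketch of the kind of check described above; the real enforcement lives
// inside Caddy's HTTP server.
func sniMatchesHost(serverName, hostHeader string) bool {
	host := hostHeader
	// The Host header may carry a port (e.g. "example.com:443"); strip it.
	if h, _, err := net.SplitHostPort(hostHeader); err == nil {
		host = h
	}
	return strings.EqualFold(serverName, host)
}

func main() {
	// Reusing a connection negotiated for reader.* to request pass.* fails:
	fmt.Println(sniMatchesHost("reader.internal.example.com", "pass.internal.example.com")) // prints: false
	// A fresh connection where SNI and Host agree succeeds:
	fmt.Println(sniMatchesHost("pass.internal.example.com", "pass.internal.example.com:443")) // prints: true
}
```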

How do I get the rest of the logs? I am running Caddy with

{
    admin off
    debug
}

at the top. But when I do the refresh on the broken site, no logs show up at all in docker logs -f caddy. Are there more logs elsewhere?

Ah okay, I see it now in the logs. Not sure why they weren’t showing up before, but the host is pass.internal.example.com, whereas the TLS server_name is reader.internal.example.com. I guess that raises more questions for me, then.

  • Why does adding this client_auth to site1.example.com affect the wildcard (internal) sites?

  • I can have site2.example.com defined (with or without client_auth in it) and it seems to work correctly. Why doesn’t client_auth affect that site?

  • Is there a better way to have multiple sites use one auto-updating Let’s Encrypt wildcard certificate?

The more I dig into this, the more I realize I don’t know much about HTTPS/TLS. Thanks for bearing with me!

That explains the 403 then. Since they differ, the request is rejected.

I think you can force-disable this check if you want (which can be useful, for example, when doing domain fronting), as long as there’s nothing sensitive on any other client-auth-protected sites. In other words, you have to be OK with a visitor making a TLS connection and authenticating under the guise of “I’m accessing site A,” and then successfully accessing site B over that same connection.
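
For reference, the knob for this in the JSON config is the strict_sni_host server option. A sketch only (srv0 is an assumed server name; verify the option against your Caddy version's docs):

```json
{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":443"],
          "strict_sni_host": false
        }
      }
    }
  }
}
```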

TLS client auth is not a good solution for authenticating at the HTTP/application layer.

This topic was automatically closed after 30 days. New replies are no longer allowed.