Using a chain of reverse_proxy handlers

Hi,
I’m migrating an existing nginx server configuration to Caddy and encountered a problem. My use case is quite unorthodox: Caddy’s purpose is to serve as a transparent “fallback server”. It first tries to proxy all requests to a local upstream server. If that server returns any erroneous status code (5xx/4xx), I then reverse-proxy to the original server, based on an x-original-host header that already exists in the original traffic. In nginx this worked by setting a location with proxy_pass that has error_page pointing at a second location.
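Conceptually, the behavior I’m after maps to something like this in a Caddyfile (hostnames are placeholders; this is just a sketch of the intent, not my actual config):

```caddyfile
:443 {
    # First, try the local upstream
    reverse_proxy local-upstream:443 {
        # If it answers with any 4xx/5xx, fall back to the
        # host named in the x-original-host request header
        @error status 4xx 5xx
        handle_response @error {
            reverse_proxy {header.x-original-host}:443
        }
    }
}
```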
In Caddyfile I tried multiple options but I can’t seem to get it right.

1. The problem I’m having:

When falling back through the inner reverse-proxy I get a 400. Debug logs show that this 400 appears to come directly from the second upstream. I believe this might have to do with copy_response copying the wrong response, or something similar. But generally, I would really appreciate it if someone could review my configuration and see if it even makes sense. Some forum reading tells me that reverse_proxy in Caddy is a terminating handler, so my use case might not be entirely supported?

2. Error messages and/or full log output:

Seen here: falling back, but always returning 400

{"level":"debug","ts":1763411953.2119687,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"chatgpt.com:443","total_upstreams":1}
{"level":"debug","ts":1763411953.219413,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"{http.request.header.x-original-host}:443","duration":0.007207787,"request":{"remote_ip":"192.168.194.71","remote_port":"45724","client_ip":"192.168.194.71","proto":"HTTP/2.0","method":"POST","host":"chatgpt.com","uri":"/ces/statsc/flush","headers":{"Cookie":["REDACTED"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Sec-Ch-Ua-Model":["\"\""],"Oai-Language":["en-US"],"X-Forwarded-Host":["aim-public-apps-proxy-fallback"],"X-Real-Ip":["192.168.194.71"],"Sec-Ch-Ua-Full-Version-List":["\"Chromium\";v=\"142.0.7444.162\", \"Google Chrome\";v=\"142.0.7444.162\", \"Not_A Brand\";v=\"99.0.0.0\""],"Sec-Ch-Ua":["\"Chromium\";v=\"142\", \"Google Chrome\";v=\"142\", \"Not_A Brand\";v=\"99\""],"X-Aim-Plugin-Installed":["true"],"Content-Type":["application/json"],"Via":["2.0 Caddy"],"Oai-Device-Id":["7fb4d345-dc4e-4f52-80f3-530ae758be92"],"Sec-Fetch-Site":["same-origin"],"Sec-Ch-Ua-Mobile":["?0"],"Sec-Ch-Ua-Full-Version":["\"142.0.7444.162\""],"X-Forwarded-Proto":["https"],"Sec-Ch-Ua-Bitness":["\"64\""],"Accept":["*/*"],"Sec-Ch-Ua-Platform-Version":["\"15.7.2\""],"Referer":["https://chatgpt.com/"],"Content-Length":["188"],"Accept-Language":["en-US,en;q=0.9"],"Upgrade":[""],"Connection":[""],"Oai-Client-Version":["prod-7b4ad770564e1a05033d9481af8a607f0f63bc7b"],"Sec-Ch-Ua-Arch":["\"arm\""],"Sec-Ch-Ua-Platform":["\"macOS\""],"Sec-Fetch-Dest":["empty"],"Sec-Fetch-Mode":["cors"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36"],"X-Forwarded-For":["192.168.194.71"],"Priority":["u=1, i"],"Origin":["https://chatgpt.com"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"aim-public-apps-proxy-fallback"}},"headers":{"Server":["cloudflare"],"Date":["Mon, 17 Nov 2025 20:39:13 
GMT"],"Content-Type":["text/html"],"Content-Length":["557"],"Cf-Ray":["9a0209441f87f9c6-TLV"]},"status":400}

3. Caddy version:

v2.10.2

4. How I installed and ran Caddy:

Caddy is running in a k8s deployment with the latest image.

a. System environment:

The Alpine image from the official Docker Hub: 2.10.2-alpine

b. Command:

/usr/bin/caddy run --config /etc/caddy/Caddyfile

d. My complete Caddy config:

I tried many things; a couple are listed here:

  1. Nested reverse_proxy directives (first config below).
  2. Invoke-based (second config below).
    :443 {
        tls /etc/certificates/cert.crt /etc/certificates/cert.key

        handle {
           reverse_proxy http://local-reverse-proxy.service.local:443 {
                header_up Upgrade {header.Upgrade}
                header_up Connection {header.Connection}
                header_up Host {host}
                header_up X-Real-IP {remote_host}

                # Streaming based options https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#streaming
                stream_timeout 24h
                stream_close_delay 5s

                # Intercept error responses from upstream
                @error status 400 401 402 403 404 405 406 408 409 410 411 412 413 414 415 416 421 429 500 501 502 503 504 505 507
                handle_response @error {
                    log_append * "fallback_status" "fallback-to-upstream"

                    # Check if x-original-host header exists
                    @has_original_host header x-original-host *

                    handle @has_original_host {
                        reverse_proxy {header.x-original-host}:443 {
                            transport http {
                                tls
                                tls_server_name {header.x-original-host}
                            }

                            header_up Host {header.x-original-host}
                            header_up Upgrade {header.Upgrade}
                            header_up Connection {header.Connection}
                            header_up X-Real-IP {remote_host}
                        }
                    }

                    handle {
                        # No x-original-host header, just pass through the error from the reverse-proxy (probably 400)
                        copy_response
                    }
                }
            }
        }
    }

    &(app-proxy) {
        reverse_proxy {http.request.header.x-original-host}:443 {
            transport http {
                tls
                tls_server_name {http.request.header.x-original-host}
            }

            header_up Host {http.request.header.x-original-host}
            header_up Upgrade {header.Upgrade}
            header_up Connection {header.Connection}

            handle_response {
                copy_response
            }
        }
    }

    :443 {
        tls /etc/certificates/cert.crt /etc/certificates/cert.key

        log {
            output stdout
            format json
            level debug
        }

        route {
            log_append "fallback_status" "before-reverse-proxy"

            reverse_proxy http://local-reverse-proxy.service.local:443 {
                header_up Host {host}
                header_up Upgrade {header.Upgrade}
                header_up Connection {header.Connection}
                header_up X-Real-IP {remote_host}

                # Intercept error responses from upstream
                @error status 400 401 402 403 404 405 406 408 409 410 411 412 413 414 415 416 421 429 500 501 502 503 504 505 507
                handle_response @error {
                    # Check if x-original-host header exists
                    @has_original_host header x-original-host *

                    # Instead of reverse_proxy here, invoke the fallback route
                    invoke @has_original_host app-proxy

                    # No x-original-host, pass through error
                    copy_response
                }
            }
        }
    }

Not sure if I fully understand what you’re trying to do, but here’s a quick example:

## Main site

example.com {
	tls internal

	reverse_proxy 127.0.0.1:8080 {
		header_up Host local-backend
	
		@4xx-5xx status 4xx 5xx
		handle_response @4xx-5xx {

			@has_original_host header x-original-host *
			handle @has_original_host {
				reverse_proxy 127.0.0.1:8080 {
					header_up Host {header.x-original-host}
				}
			}
			handle {
				respond 502
			}
		}
	}
}

## Backends

:8080 {
	@local host local-backend
	handle @local {
		respond "Backend: local-backend"
	}

	@orig1 host orig-host1
	handle @orig1 {
		respond "Backend: orig-host1"
	}

	@orig2 host orig-host2
	handle @orig2 {
		respond "Backend: orig-host2"
	}

	handle {
		respond 502
	}
}

example.com is the main site. Since I’m running this on my laptop, all the “backends” are on port 8080. In other words, port 8080 in this setup is just simulating your backend servers.

Test:

$ curl https://example.com
Backend: local-backend

Now I’m going to simulate a situation where your local backend has some issues and starts returning HTTP 503:

	@local host local-backend
	handle @local {
		respond "Backend: local-backend" 503
	}

Test:

$ curl https://example.com -i
HTTP/2 502
alt-svc: h3=":443"; ma=2592000
server: Caddy
content-length: 0
date: Tue, 18 Nov 2025 16:35:24 GMT
$ curl https://example.com -H 'x-original-host: orig-host1'
Backend: orig-host1
$ curl https://example.com -H 'x-original-host: orig-host2'
Backend: orig-host2
$ curl https://example.com -H 'x-original-host: foo' -i
HTTP/2 502
alt-svc: h3=":443"; ma=2592000
date: Tue, 18 Nov 2025 16:37:57 GMT
server: Caddy
via: 1.1 Caddy
content-length: 0

Is this what you were going for?
