Unrecognized placeholder when using reverse_proxy

1. The problem I’m having:

I have a really great Caddyfile. I’m using a forward_auth directive to authenticate a request. The auth upstream returns the address of another upstream, and I then reverse_proxy the original request to that address. It looks like this:

{
	auto_https off
}

http://localhost:2000 {
	forward_auth localhost:8080 {
		uri /init
		copy_headers X-Upstream-Address
	}

	reverse_proxy {header.X-Upstream-Address}
}

This works like a charm. Truly impressed. Now I want to add a health check to this reverse_proxy upstream. Seems simple enough:

{
	auto_https off
}

http://localhost:2000 {
	forward_auth localhost:8080 {
		uri /init
		copy_headers X-Upstream-Address
	}

	reverse_proxy {header.X-Upstream-Address} {
		health_uri /ping
	}
}

This doesn’t work. When I run Caddy, it throws an error complaining about an unrecognized placeholder. It doesn’t matter if I move the placeholder into a to subdirective inside the reverse_proxy block.
Any ideas?

2. Error messages and/or full log output:

❯ caddy run
2023/04/07 19:44:28.780 INFO    using adjacent Caddyfile
2023/04/07 19:44:28.782 INFO    admin   admin endpoint started  {"address": "localhost:2019", "enforce_origin": false, "origins": ["//[::1]:2019", "//127.0.0.1:2019", "//localhost:2019"]}
2023/04/07 19:44:28.782 WARN    http    automatic HTTPS is completely disabled for server       {"server_name": "srv0"}
2023/04/07 19:44:28.782 INFO    tls.cache.maintenance   started background certificate maintenance {"cache": "0xc000398f50"}
2023/04/07 19:44:28.782 INFO    http.log        server running  {"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2023/04/07 19:44:28.782 ERROR   http.handlers.reverse_proxy.health_checker.active       invalid use of placeholders in dial address for active health checks        {"address": "", "error": "unrecognized placeholder {http.request.header.X-Upstream-Address}"}
2023/04/07 19:44:28.782 INFO    tls     cleaning storage unit   {"description": "FileStorage:/home/harm/.local/share/caddy"}
2023/04/07 19:44:28.782 INFO    tls     finished cleaning storage units
2023/04/07 19:44:28.782 INFO    autosaved config (load with --resume flag)      {"file": "/home/harm/.local/share/caddy/autosave.json"}
2023/04/07 19:44:28.782 INFO    serving initial configuration

3. Caddy version:

❯ caddy version
v2.6.4

4. How I installed and ran Caddy:

Installed with pacman.

a. System environment:

❯ uname -av
Linux mjolnir 5.19.17-2-MANJARO #1 SMP PREEMPT_DYNAMIC Sun Nov 6 00:08:27 UTC 2022 x86_64 GNU/Linux

b. Command:

caddy run

c. Service/unit/compose file:

n/a

d. My complete Caddy config:

see beginning of message

5. Links to relevant resources:

The original Caddyfile came to be after some excellent help: Programmatically update config from a Go program - #16 by haarts

It’s not possible to use active health checks when the upstream address is dynamic.

The problem is that active health checks happen in the background, but the upstream address is only known during a request. So that means the background task has no idea what upstream to dial.
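
For reference, active health checks are fine when the upstream address is static and known at config time. A minimal sketch (localhost:9000 and /ping are just placeholder values here):

http://localhost:2000 {
	reverse_proxy localhost:9000 {
		# Active checks: Caddy probes this URI on a timer, in the background,
		# which only works because it knows the address to dial up front.
		health_uri /ping
		health_interval 10s
	}
}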

You can do passive health checks though.
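
A minimal sketch of what that could look like with your dynamic upstream (the thresholds are just illustrative; see the passive health check options of reverse_proxy for the full list):

http://localhost:2000 {
	forward_auth localhost:8080 {
		uri /init
		copy_headers X-Upstream-Address
	}

	reverse_proxy {header.X-Upstream-Address} {
		# Passive checks: Caddy marks the upstream unhealthy based on the
		# requests it actually proxies, so no background dialing is needed.
		fail_duration 30s
		max_fails 3
		unhealthy_status 5xx
	}
}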

I’d just recommend that your auth app does the health checking itself, and returns an error if it knows the upstream isn’t available, instead of handing it off to Caddy. Essentially, the key question is “what part of the system actually knows the real upstream addresses?”

That makes total sense. Now it clicks why the docs say: “Active health checks perform health checking in the background on a timer”.

Thank you.
