How to use handle_path and handle_errors in tandem?

1. The problem I’m having:

As a preface: I understand handle_errors must be a top-level directive for a site; my goal here is to figure out how I can rework this logic so that it still works as intended without needing handle_errors nested inside a route.

My issue is that I have quite large single-site configurations (say, 2000+ lines long) in order to serve something like:

https://rpc.lavenderfive.com:443/osmosis
https://rpc.lavenderfive.com:443/cosmoshub
https://rpc.lavenderfive.com:443/axelar
...

For 100+ different endpoints, which results in a configuration like:

rpc.lavenderfive.com {
    log
    import blocked
    import rpc-header

    handle_path /osmosis* {
        reverse_proxy 5.0.4.3:12517 {
            import lb-config 12517
            @old_block status 400 404 503 502
            handle_response @old_block {
                reverse_proxy 5.0.4.2:12517 5.0.4.8:12517 5.0.4.9:12517 {
                    import lb-config 12517
                }
            }
        }
    }
}

(Note that this is quite simplified for the sake of brevity).

Now, if 5.0.4.3:12517 goes down, all of the upstreams are considered down, despite there being four in total. What I’d like to do in the ideal case is the following:

rpc.lavenderfive.com {
    log
    import blocked
    import rpc-header

    handle_path /osmosis* {
        reverse_proxy 5.0.4.3:12517 {
            import lb-config 12517
            @old_block status 400 404 503 502
            handle_response @old_block {
                reverse_proxy 5.0.4.2:12517 5.0.4.8:12517 5.0.4.9:12517 {
                    import lb-config 12517
                }
            }
        }
        handle_errors {
            reverse_proxy 5.0.4.2:12517 5.0.4.8:12517 5.0.4.9:12517 {
                import lb-config 12517
            }
        }
    }
}

However, this cannot be done. Instead, I understand the following does work:

rpc.lavenderfive.com {
    log
    import blocked
    import rpc-header

    handle_path /osmosis* {
        reverse_proxy 5.0.4.3:12517 {
            import lb-config 12517
            @old_block status 400 404 503 502
            handle_response @old_block {
                reverse_proxy 5.0.4.2:12517 5.0.4.8:12517 5.0.4.9:12517 {
                    import lb-config 12517
                }
            }
        }
    }

    handle_errors {
        handle_path /osmosis* {
            reverse_proxy 5.0.4.2:12517 5.0.4.8:12517 5.0.4.9:12517 {
                import lb-config 12517
            }
        }
    }
}

Is there a way to achieve something closer to the “ideal” case?

You can just use uri strip_prefix /osmosis instead of handle_path; it’s shorter if you don’t need a handle. The handle part is only necessary if you have more than one route.
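For a single route, that looks roughly like this (a sketch, borrowing one of your upstream addresses):

rpc.lavenderfive.com {
    # Strips /osmosis from the front of the path; it's a no-op for
    # requests that don't have that prefix, so this only works when
    # every request goes to the same reverse_proxy anyway.
    uri strip_prefix /osmosis
    reverse_proxy 5.0.4.3:12517
}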

Also, you could use named routes + invoke to deduplicate your reverse_proxy stuff.
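Roughly, the named route pattern looks like this (the route name here is made up):

# Declared once, outside any site block.
&(osmosis-rp) {
    reverse_proxy 5.0.4.2:12517 5.0.4.8:12517 5.0.4.9:12517 {
        import lb-config 12517
    }
}

rpc.lavenderfive.com {
    handle_path /osmosis* {
        # Runs the shared named route declared above.
        invoke osmosis-rp
    }
}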


Hey @francislavoie, as always, thank you for your help.

uri strip_prefix doesn’t really work here for the reason you outlined; there are many routes. 100+.

As far as named routes are concerned - what’s the difference between that and import? Here’s what the config actually looks like right now so you can see what I’m doing. Again, some things are stripped out but the concept remains.

rpc.lavenderfive.com {
    log
    import blocked
    import rpc-header

    import /etc/caddy/rpc/*.caddy
}

rpc/*.caddy has 100+ files, but here’s Osmosis to pair with the config above:

handle_path /osmosis* {
    handle @sla {
        import osmosis-rpc-rp
    }

    handle {
        import rate-limiter # you can ignore this
        import osmosis-rpc-rp
    }

    # ideally I'd stick `handle_errors` right here
}

(osmosis-rpc-rp) {
    reverse_proxy 5.0.3.8:12557 {
        import lb-config 12517
        @old_block status 400 404
        handle_response @old_block {
            reverse_proxy 5.0.3.9:12557 5.0.3.10:12557 {
                import lb-config 12517
            }
        }
    }
}

So basically I unravelled this and stripped out the excess to simplify.

What I’m understanding now is that I can use invoke rather than import for the reverse_proxies (why?), but there isn’t a better way to do handle_errors other than duplicating everything. Is that correct?


This may fall outside the scope of this conversation, but my understanding was that Caddy maintained an internal cache of upstream health statuses for load-balancing purposes. For example:

one.example.com {
    reverse_proxy 0.1.2.3:4444 1.2.3.4:5555 {
        lb_policy least_conn
        health_uri /healthz
        health_interval 5s
    }
}

two.example.com {
    reverse_proxy 0.1.2.3:4444 1.2.3.4:5555 {
        lb_policy least_conn
        health_uri /healthz
        health_interval 5s
    }
}

three.example.com {
    reverse_proxy 0.1.2.3:4444 1.2.3.4:5555 {
        lb_policy least_conn
        health_uri /healthz
        health_interval 5s
    }
}

This endpoint config is reused 3 times, therefore you might expect both endpoints to be hit 3 times every 5 seconds. Is invoke the glue that avoids that, and otherwise Caddy DOES hit those 2 endpoints 3x per 5 seconds?

Each reverse_proxy instance will spawn its own active health checker. If you use invoke, there’s only one, because it’s a single reverse_proxy instance in the named route being invoked/shared between all the sites.
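Applied to your three-site example, a sketch of that (route name made up):

# One reverse_proxy instance means one active health checker,
# no matter how many sites invoke it.
&(shared-upstreams) {
    reverse_proxy 0.1.2.3:4444 1.2.3.4:5555 {
        lb_policy least_conn
        health_uri /healthz
        health_interval 5s
    }
}

one.example.com {
    invoke shared-upstreams
}

two.example.com {
    invoke shared-upstreams
}

three.example.com {
    invoke shared-upstreams
}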

Error routes are in a separate part of the config. Run caddy adapt -p, or look at JSON Config Structure - Caddy Documentation, and you’ll see that error routes are separate from the routes of your main site. But if you declare named routes, those can be invoked from both your regular routes and your error routes if you need to deduplicate anything.
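Combined with your Osmosis config, the deduplicated shape would be something like this (route name illustrative):

&(osmosis-backends) {
    reverse_proxy 5.0.3.9:12557 5.0.3.10:12557 {
        import lb-config 12517
    }
}

rpc.lavenderfive.com {
    handle_path /osmosis* {
        invoke osmosis-backends
    }

    handle_errors {
        handle_path /osmosis* {
            # Same named route, reused in the error routes.
            invoke osmosis-backends
        }
    }
}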


That’s… absolutely horrifying. I must be utterly slamming my endpoints, as they’re reused dozens of times through imports.

Thanks again, Francis. Always more to learn!
