The order is described on a separate page, but I think you’ve seen it already.
My understanding is: if you order rate_limit before rewrite, rate_limit is placed between method and rewrite. Requests are then handled by rate_limit first, then by rewrite; responses are handled by rewrite first, then by rate_limit, since responses go back up the handler chain (though I imagine rate-limiting responses is nonsensical; perhaps in this configuration it isn't even run).
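A minimal sketch of what I mean (the site address, backend, and zone values are placeholders; the rate_limit zone syntax is the one from the plugin used later in this thread):

```
{
	# Splice rate_limit into the standard directive order,
	# just before rewrite, so it runs earlier in the chain.
	order rate_limit before rewrite
}

example.com {
	rate_limit {
		zone demo_zone {
			key {remote_ip}
			events 100
			window 1m
		}
	}
	rewrite /old /new
	reverse_proxy localhost:8080
}
```

With this ordering, a request would pass through rate_limit, then rewrite, then reverse_proxy.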
In the case of handle, the most specific matchers are evaluated first, followed by less specific ones, with matcherless/general blocks last. So if there is a handle [matcher] block, that block is handled first, which means, in my understanding, the configured order applies inside that block if rate_limit is defined there.
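For example, assuming my understanding of the sorting is right (the paths and responses here are made up):

```
example.com {
	handle /api/v1/* {
		# longer, more specific path matcher: sorted first
		respond "api v1"
	}
	handle /api/* {
		respond "api"
	}
	handle {
		# matcherless handle: acts as the general fallback
		respond "everything else"
	}
}
```

Since handle blocks are mutually exclusive, only the first matching block runs for a given request.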
Did you mean, “rate_limit is placed between method and rewrite”? If so, that would be correct.
Handler order is also elaborated on in the JSON docs:
The list of handlers for this route. Upon matching a request, they are chained together in a middleware fashion: requests flow from the first handler to the last (top of the list to the bottom), with the possibility that any handler could stop the chain and/or return an error. Responses flow back through the chain (bottom of the list to the top) as they are written out to the client.
Not all handlers call the next handler in the chain. For example, the reverse_proxy handler always sends a request upstream or returns an error. Thus, configuring handlers after reverse_proxy in the same route is illogical, since they would never be executed. You will want to put handlers which originate the response at the very end of your route(s). The documentation for a module should state whether it invokes the next handler, but sometimes it is common sense.
Some handlers manipulate the response. Remember that requests flow down the list, and responses flow up the list.
For example, if you wanted to use both templates and encode handlers, you would need to put templates after encode in your route, because responses flow up. Thus, templates will be able to parse and execute the plain-text response as a template, and then return it up to the encode handler which will then compress it into a binary format.
If templates came before encode, then encode would write a compressed, binary-encoded response to templates which would not be able to parse the response properly.
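Concretely, the order the docs describe would look something like this (hypothetical site block; file_server stands in for whatever originates the plain-text response):

```
example.com {
	# Requests flow top to bottom, responses bottom to top:
	# file_server produces the plain-text body, templates executes it,
	# then encode compresses the final output on the way back up.
	encode gzip
	templates
	file_server
}
```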
```
{
	order rate_limit before rewrite # case 1
	# order rate_limit before reverse_proxy # case 2
}

localhost, 127.0.0.1, 192.168.1.104 {
	# outer
	rate_limit {
		zone global_zone {
			key {remote_ip}
			events 3000
			window 1m
		}
	}

	@api_routes {
		path /api/* /health /swagger/*
	}
	handle @api_routes {
		# inner
		rate_limit {
			zone api_zone {
				key {remote_ip}
				events 120
				window 1m
			}
		}
		reverse_proxy 192.168.1.201:10081
	}
}
```
In case 1 (before rewrite), both rate_limit directives take effect.
In case 2 (before reverse_proxy), only the inner rate_limit takes effect; the outer one is skipped in my test.
Hence the 2 rules in the original question.
The current case 1 config fits my use case; I'm just curious how it actually works. Anyway, I'll try caddy adapt --pretty later.
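For reference, the flag takes two dashes; something along these lines:

```shell
# Adapt the Caddyfile in the current directory to its JSON form,
# pretty-printed, so the computed handler order becomes visible.
caddy adapt --config ./Caddyfile --pretty
```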
handle is ordered well after rewrite, but before reverse_proxy, so in case 1 above, the outer rate limit gets applied before going into the handle @api_routes block.
In case 2 above, handle goes first, applying the inner rate limit only, and then the route is terminated (“handled”) by the reverse_proxy.
If you read the article I linked you to above, it explains this, and it might be best to not mix the two methods of composing routes.
You might also use the route directive if you want that much control over the order of things.
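Something like this sketch (reusing the inner zone from the config above): inside a route block, Caddy keeps the literal order you write instead of applying the standard directive order:

```
example.com {
	route {
		# Inside route, directives are NOT re-sorted: rate_limit runs
		# before reverse_proxy simply because it is written first.
		rate_limit {
			zone api_zone {
				key {remote_ip}
				events 120
				window 1m
			}
		}
		reverse_proxy 192.168.1.201:10081
	}
}
```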