Forza
(Forza)
November 14, 2023, 11:44am
1
Hi. I am looking for a way to limit concurrent upstream php-fpm requests. I found Limit number of concurrent requests to reverse_proxy backend?, which mentioned max_requests. I tried the following, but it doesn’t work:
php_fastcgi unix//var/run/php-fpm/fpm-wiki.socket {
	max_requests 10
}
file_server {
	precompressed br zstd gzip
}
caddy validate complains that max_requests isn’t valid. The docs at php_fastcgi (Caddyfile directive) — Caddy Documentation do mention that reverse_proxy subdirectives should work, though.
Error: adapting config using caddyfile: parsing caddyfile tokens for 'php_fastcgi': unrecognized subdirective max_requests, at /etc/caddy/vhosts/wiki.tnonline.net.caddy:54 import chain ['Caddyfile:40 (import)']
❯ caddy version
v2.7.5 => /usr/src/caddy/git/caddy@(devel)
Please fill out the help topic template, as per the forum rules.
max_requests isn’t a thing. I don’t know where you got that from. It doesn’t appear anywhere in reverse_proxy (Caddyfile directive) — Caddy Documentation.
You should probably be configuring your php-fpm worker pool instead.
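For example, concurrency is usually capped in the php-fpm pool config rather than in Caddy; a minimal sketch, assuming a standard pool file (the path and values are illustrative):

```ini
; e.g. /etc/php/fpm.d/www.conf — location varies by distro
[www]
pm = dynamic
; hard cap on concurrently running PHP workers;
; excess requests queue in the socket backlog
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4
```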
Forza
(Forza)
November 14, 2023, 12:50pm
3
I read it in the linked thread. There is also a reference to it here: JSON Config Structure - Caddy Documentation
Forza
(Forza)
November 14, 2023, 3:45pm
4
1. The problem I’m having:
I want to limit concurrent requests to upstream servers, both when running as reverse_proxy and php_fastcgi.
I found a previous thread that indicated this might work using max_requests, but I don’t know how to use that directive: Limit number of concurrent requests to reverse_proxy backend?
Could it be that upstreams/max_requests only works with JSON configuration?
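In the JSON structure, max_requests appears to be a per-upstream property of the reverse_proxy handler; this is the kind of fragment I mean (the dial address is just an example):

```json
{
  "handler": "reverse_proxy",
  "upstreams": [
    {
      "dial": "192.168.0.1:9000",
      "max_requests": 10
    }
  ]
}
```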
2. Error messages and/or full log output:
❯ caddy validate
2023/11/14 15:26:22.165 INFO using adjacent Caddyfile
2023/11/14 15:26:22.165 INFO using provided configuration {"config_file": "Caddyfile", "config_adapter": ""}
Error: adapting config using caddyfile: parsing caddyfile tokens for 'php_fastcgi': unrecognized subdirective max_requests, at /etc/caddy/vhosts/gist.tnonline.net.caddy:13 import chain ['Caddyfile:40 (import)']
and
❯ caddy validate
2023/11/14 15:38:26.144 INFO using adjacent Caddyfile
2023/11/14 15:38:26.144 INFO using provided configuration {"config_file": "Caddyfile", "config_adapter": ""}
Error: adapting config using caddyfile: parsing caddyfile tokens for 'reverse_proxy': unrecognized subdirective max_requests, at /etc/caddy/vhosts/git.buddhism-chat.org.caddy:8 import chain ['Caddyfile:40 (import)']
3. Caddy version:
v2.7.5 => /usr/src/caddy/git/caddy@(devel)
4. How I installed and ran Caddy:
Compiled from git sources.
#!/bin/sh
export XCADDY_SETCAP=1
export GOARCH="amd64"
export GOAMD64="v3"
export CGO_ENABLED=1
/root/go/bin/xcaddy build \
--with github.com/caddyserver/caddy/v2=/usr/src/caddy/git/caddy \
--with github.com/ueffel/caddy-brotli \
--with github.com/caddyserver/transform-encoder \
--with github.com/caddyserver/cache-handler \
--with github.com/kirsch33/realip \
--with github.com/git001/caddyv2-upload
a. System environment:
Gentoo Linux with kernel 6.5.8
b. Command:
caddy validate
d. My complete Caddy config:
❯ caddy fmt vhosts/gist.tnonline.net.caddy
gist.tnonline.net:443 {
	import main gist.tnonline.net
	@php {
		not path /public/files/* /rain/* /install/* /includes/*
	}
	root * /var/www/domains/gist.tnonline.net/
	reverse_proxy 192.168.0.1:9000 {
	}
	uri replace /files/ /public/files/
	php_fastcgi @php unix//var/run/php-fpm/fpm-www.socket {
		max_requests 10
	}
	file_server
}

gist.tnonline.net:80 {
	import main80 gist.tnonline.net
	root * /var/www/domains/gist.tnonline.net/
	file_server
	@https not path /.well-known/*
	redir @https https://gist.tnonline.net/ permanent
}
❯ caddy fmt vhosts/git.buddhism-chat.org.caddy
git.buddhism-chat.org:443 {
	import main git.buddhism-chat.org
	root * /var/www/domains/git.buddhism-chat.org/htdocs
	file_server
	@https not path /.well-known/*
	reverse_proxy @https 10.1.1.3:3000 {
		max_requests 10
	}
}

git.buddhism-chat.org:80 {
	import main80 git.buddhism-chat.org
	root * /var/www/domains/git.buddhism-chat.org/htdocs
	file_server
	@https not path /.well-known/*
	redir @https https://wiki.buddhism-chat.org/ permanent
}
5. Links to relevant resources:
Is it possible to limit the number of ongoing requests to a server backend defined in reverse_proxy?
My app server sadly does not include this feature, and can become clobbered / overloaded.
The user experience ends up pretty poor: User is forced to wait a long time (20+ seconds as the app server is clobbered), then finally get a 500 error.
Would rather have a hard limit inside caddy itself to either get a 500 right away when the limit is reached and/or redirect the traffic to another server …
max_requests
The maximum number of simultaneous requests to allow to this upstream. If set, overrides the global passive health check UnhealthyRequestCount value.
Yes. But if you look at the end of that thread, the equivalent in the Caddyfile is unhealthy_request_count.
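In a Caddyfile it’s a subdirective of reverse_proxy; a minimal sketch (the address and limit are illustrative):

```caddyfile
reverse_proxy 10.1.1.3:3000 {
	# passive health check: treat this upstream as unhealthy
	# once this many requests are in flight
	unhealthy_request_count 10
}
```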
But are you sure you need that? Why can’t you just update your php-fpm config? It’s better to solve your problem closer to the source.
Forza
(Forza)
November 14, 2023, 9:20pm
6
For fpm I have adjusted pm.max_children, but PHP likes to emit a warning when it reaches the limit. Probably not a huge issue, I think. I read that the fpm process manager queues up requests until the server has available slots.
The reverse_proxy situation is different as those apps don’t have built-in support for this.
Seems that Caddy sends a status 503 when the limit is reached. This is of course technically correct, but not exactly what I had hoped for.
Is there a way that Caddy can pool requests and only issue them upstream within a maximum concurrency limit?
I’m not sure what you want to happen instead then.
Do you mean you want Caddy to wait until a server is available? Then you need to enable retries with lb_try_duration and/or lb_retries.
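For example (the durations are illustrative):

```caddyfile
reverse_proxy 10.1.1.3:3000 {
	# keep retrying to select an available upstream for up to 30s
	lb_try_duration 30s
	# wait this long between selection attempts
	lb_try_interval 250ms
}
```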
Forza
(Forza)
November 14, 2023, 11:11pm
8
Yes, I wanted Caddy to accept the connections (up to an upper limit) and then send them upstream as slots become available.
As an example, take client A, who issues 20 requests to Caddy. Caddy accepts all of them but holds 15, because the upstream is limited to 5 concurrent requests. Once the first request is handled, another request is taken from the pool and sent upstream.
Thanks for pointing to the load balancing features. I think it should do exactly what I wanted.
Load balancing is typically used to split traffic between multiple upstreams. By enabling retries, it can also be used with one or more upstreams, to hold requests until a healthy upstream can be selected (e.g. to wait and mitigate errors while rebooting or redeploying an upstream).
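Combining unhealthy_request_count with retries gives the queue-like behaviour described above; a sketch (the numbers are illustrative):

```caddyfile
reverse_proxy 10.1.1.3:3000 {
	# consider the upstream unavailable at 5 in-flight requests
	unhealthy_request_count 5
	# hold further requests, retrying for up to 30s until a slot frees
	lb_try_duration 30s
	lb_try_interval 250ms
}
```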