I need Caddy to send all traffic from the same client to the same backend. In other words, if a client (web browser) connects to the backend at port 4002 (this is just an example), it should keep connecting only to that backend.
Attached is an image where you can see that the content of a single webpage is distributed across 2 backends, creating 2 requests. Is it possible to configure Caddy to create only one request per client?
The default policy is random, which is why each subsequent request can hit different backends.
There are other options there; notably the cookie policy, perhaps with ip_hash or client_ip_hash as a fallback, sounds like it will suit your needs much better.
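A minimal sketch of what that could look like (the site address and upstream ports are placeholders, and the fallback sub-option assumes a reasonably recent Caddy release):

```
example.com {
	reverse_proxy localhost:4000 localhost:4001 localhost:4002 {
		# Pin each client to one upstream via a cookie; clients that
		# don't have the cookie yet are assigned via client_ip_hash.
		lb_policy cookie {
			fallback client_ip_hash
		}
	}
}
```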
That’s 100% normal. The browser is making a request for /favicon.ico because your HTML page didn’t otherwise declare one in the <head>. That’s standard browser behaviour. It’ll make as many requests as it needs to load all the assets it requires to render the page.
What does that even mean? Each client (browser) will make as many requests as it needs. First for the HTML, then more for any assets like CSS, JS, images etc. “One request per client” doesn’t really make sense as a statement in the context of web browsers.
This feels like an XY problem. You’re asking the wrong question. Why do you think you need to change the load balancing policy? What actual problem are you trying to solve by doing that? What’s the problem with spreading all requests to all your upstreams?
and now Caddy sends all files to one backend (see image below) - that's great.
The problem is when another client comes from the same IP address: Caddy sends it to the same backend, which is wrong for my case. What I need is for each new client (even one with the same IP address and browser) to go to the next backend in line, not to the same one as the previous client. I don't know how to configure that.
Unfortunately not. Just in case, I changed the name of the cookie and cleared the cache, and a third browser was sent to the same backend again (from the same IP address), even though one backend wasn't being touched at all.
Actually, I named the new cookie XXXXX1, and when I look at the headers, it is the same in Edge and Chrome (using the same IP):
XXXXX1=e354d5f7458e9b113774300ff92cfeefbce641fba7c096fac010403a1d78495f
A third browser has a different IP address, and there it works perfectly.
Basically, it looks like a bug to me, because the same cookie value was assigned to different browsers (using the same IP address).
I have 3 backend servers listening on ports 4000, 4001, and 4002.
Caddy listens on port 88.
When 5 clients access https://someurl.com:88, they need to be distributed like this:
Client 1 to 4000
Client 2 to 4001
Client 3 to 4002
Client 4 to 4000
Client 5 to 4001
The order doesn't have to be exactly like this; it can start with any port. What's important is that each new client is sent to the backend that has no connections, or the fewest.
Currently, they are distributed like this (this is not a fixed rule; it can happen differently every time):
Client 1 to 4002
Client 2 to 4001
Client 3 to 4001
Client 4 to 4001
Client 5 to 4002
and nobody is sent to 4000, and so on.
I tested this with my own backend software, which I have been using for years, and also with other server software like Rebex Tiny Web Server (free) - Rebex.NET.
Each time I tested, I cleared all caches.
This is the Caddyfile I used:
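It was roughly this (from memory; the relevant part is the lb_policy line):

```
someurl.com:88 {
	reverse_proxy localhost:4000 localhost:4001 localhost:4002 {
		# Send each request to the upstream with the fewest active requests
		lb_policy least_conn
	}
}
```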
I’m going to have to point out the documentation to you again:
least_conn: choose the upstream with the fewest current requests; if more than one host has the least number of requests, then one of those hosts is chosen at random
Connections don’t last forever. With a small number of clients making requests, all of your backends are going to have very low (or zero) connections, so it’s going to be random.
When you get more clients, and each upstream actually has a notable number of requests in flight at once, Caddy will start to distribute them more evenly.
Maybe you want different behaviour than least_conn. Maybe you don't actually care which backend has the fewest connections, and what you actually want is for each client to get the next backend in sequence, which matches the desired behaviour you gave as an example. In which case:
round_robin: iterates each upstream in turn
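Switching is a one-line change in the reverse_proxy block (a sketch using the ports from your example):

```
someurl.com:88 {
	reverse_proxy localhost:4000 localhost:4001 localhost:4002 {
		# Hand each new request to the next upstream in order:
		# 4000, 4001, 4002, 4000, ...
		lb_policy round_robin
	}
}
```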
But you can’t have both. You either get a random pick of the backends with the lowest connections (effectively random when the load is low), or you get round-robin, which may result in uneven distribution under higher loads.