Same client to same backend

1. The problem I’m having:

I need Caddy to route all traffic from the same client to the same backend. In other words, if a client (web browser) connects to the backend at port 4002 (just an example), it should keep its connection to that backend only.
Attached is an image where you can see that the content of a single webpage is distributed to 2 backends, creating 2 requests. Is it possible to configure Caddy to create only one request per client?

server1

2. Error messages and/or full log output:

3. Caddy version:

v2.8.4

4. How I installed and ran Caddy:

a. System environment:

b. Command:

caddy run --config Caddyfile

c. Service/unit/compose file:

d. My complete Caddy config:

:88 {
    reverse_proxy localhost:4000 localhost:4001 {
    }
}

5. Links to relevant resources:

Howdy @krosoftware, welcome to the Caddy community.

You’ll wanna have a look at the documentation for reverse_proxy, specifically the Load Balancing section: reverse_proxy (Caddyfile directive) — Caddy Documentation

The default policy is random, which is why each subsequent request can hit different backends.

There are other options there, notably cookie (perhaps with ip_hash or client_ip_hash as a fallback policy), which sound like they'll suit your needs much better.

1 Like

Thank you @Whitestrake
Yes, using the Caddyfile below, it keeps all requests on the same backend:

mydomain.com:88 {
    reverse_proxy localhost:4000 localhost:4001 localhost:4002 {
        trusted_proxies 127.0.0.1
        lb_policy cookie b2b1 {
            fallback client_ip_hash
        }
    }
}

But requests from the same IP go to the same backend. I don't know how to configure the Caddyfile properly; any help would be great!

That’s 100% normal. The browser is making a request for /favicon.ico because your HTML page didn’t otherwise declare one in the <head>. That’s standard browser behaviour. It’ll make as many requests as it needs to load all the assets it requires to render the page.
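For illustration, one way to stop that extra request is to declare the icon explicitly in the page's head; this is a generic HTML sketch (the filename favicon.png is hypothetical, not from this thread):

```
<head>
  <!-- Explicitly declare the icon so the browser doesn't probe /favicon.ico -->
  <link rel="icon" type="image/png" href="/favicon.png">
</head>
```

Without such a declaration, browsers fall back to requesting /favicon.ico automatically.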

What does that even mean? Each client (browser) will make as many requests as it needs. First for the HTML, then more for any assets like CSS, JS, images etc. “One request per client” doesn’t really make sense as a statement in the context of web browsers.

This feels like an XY problem. You’re asking the wrong question. Why do you think you need to change the load balancing policy? What actual problem are you trying to solve by doing that? What’s the problem with spreading all requests to all your upstreams?

1 Like

@francislavoie I’m sorry for the confusion and lack of explanation.
This is the current configuration:

mydomain.com:88 {
  reverse_proxy localhost:4000 localhost:4001 localhost:4002 {
    trusted_proxies 127.0.0.1
    lb_policy cookie b2b1 {
      fallback client_ip_hash
    }
  }
}

and now Caddy routes all requests to one backend (see image below) - that's great.
The problem is when another client comes from the same IP address: Caddy routes it to the same backend, which is wrong. What I need is for each new client (even one with the same IP address and browser) to be sent to the next backend server, not to the same one as the previous client. I don't know how to configure that.
server2

Hmm. Maybe least_conn might work better as a fallback, then?

1 Like

Unfortunately not. Just in case, I changed the name of the cookie and cleared the cache, and a third browser accessed the same backend again (from the same IP address), even though one backend was untouched.
Actually, I named the new cookie like this: XXXXX1
and when I look at the header, it is the same on Edge and Chrome (using the same IP):
XXXXX1=e354d5f7458e9b113774300ff92cfeefbce641fba7c096fac010403a1d78495f
A third browser has a different IP address, and there it works perfectly.
Basically, it looks like a bug to me, because it assigned the same cookie to different browsers (using the same IP address).

Remind me again, sorry, what’s the actual desired behaviour exactly?

1 Like

I have 3 backend servers listening on ports 4000, 4001, and 4002.
Caddy listens on port 88.
When 5 clients access https://someurl.com:88, they need to be distributed like this:
Client 1 to 4000
Client 2 to 4001
Client 3 to 4002
Client 4 to 4000
Client 5 to 4001
The order doesn't have to be exactly this; it can start with any port. What's important is that each new client is sent to the backend that has no connections, or the fewest.
Currently, they are distributed like this (this is not a fixed pattern; it can happen differently each time):
Client 1 to 4002
Client 2 to 4001
Client 3 to 4001
Client 4 to 4001
Client 5 to 4002
and nobody is distributed to 4000, etc.
I tested this with my own backend, which I have been using for years, and also with other server software like Rebex Tiny Web Server (free) - Rebex.NET.
Each time I tested, I cleared all caches.
This is the Caddyfile I used:

:88 {
    reverse_proxy localhost:4000 localhost:4001 localhost:4002 {
        lb_policy cookie b2b1 {
            fallback least_conn
        }
    }
    tls ..\pem.pem ..\key.key 
}

Thank you for your consideration.

I’m going to have to point out the documentation to you again:

  • least_conn choose upstream with fewest number of current requests; if more than one host has the least number of requests, then one of those hosts is chosen at random

Connections don’t last forever. With a small number of clients making requests, all of your backends are going to have very low (or zero) connections, so it’s going to be random.

When you get more clients, and each upstream has a notable number of connections happening at once, Caddy will start to distribute them more evenly.

Maybe you want different behaviour than least_conn. Maybe you don't actually care which backend has the fewest connections, and what you actually want is for each client to get the next backend in sequence, which matches the desired behaviour in your example. In which case:

  • round_robin iterates each upstream in turn

But you can’t have both. You either get a random pick of the backends with the lowest connections (effectively random when the load is low), or you get round-robin, which may result in uneven distribution under higher loads.

Either way, those are your options. The list at reverse_proxy (Caddyfile directive) — Caddy Documentation is what Caddy has to offer.
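To make that concrete, here is a minimal sketch of the Caddyfile from earlier in this thread with round_robin as the cookie policy's fallback (ports, site address, and cookie name taken from the examples above):

```
:88 {
    reverse_proxy localhost:4000 localhost:4001 localhost:4002 {
        lb_policy cookie b2b1 {
            # New clients (no cookie yet) get the next upstream in rotation;
            # returning clients stick to the backend stored in their cookie.
            fallback round_robin
        }
    }
}
```

With this setup, each first-time client is handed a different upstream in turn, and the cookie then pins all of that client's subsequent requests to it.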

2 Likes

OK, thank you for the great explanation. Makes sense. Thanks.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.