Caddy load balancer multiple servers

1. Caddy version (caddy version):

caddy version 2.3.0

2. How I run Caddy:

{
	on_demand_tls {
		ask https://lave.live/domain/verify
		interval 2m
		burst 5
	}
}

www.lave.live {
	redir https://lave.live{uri}
}

lave.live {
	root * /home/forge/lave.live/public
	encode zstd gzip
	file_server
	php_fastcgi unix//var/run/php/php8.1-fpm.sock
}

:443 {
	tls {
		on_demand
	}

	root * /home/forge/lave.live/public
	encode zstd gzip
	file_server
	php_fastcgi unix//var/run/php/php8.1-fpm.sock
}

a. System environment:

Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-109-generic x86_64)


3. The problem I’m having:

So I heard about Caddy and moved over from Nginx. I created a test droplet on DigitalOcean and everything worked fine; I was able to use custom domain names with SSL certificates issued automatically.

I decided to move this to my main application.
My application is a multi-tenant application that allows custom domains. It has two application servers and one database server, all behind a load balancer. My issue is how to get the root to point to my application servers.

With my test server everything worked perfectly, because it was a single server and I could point to the project's root directory (it's a Laravel project).
However, on my main application, the entry point for my domains hits the load balancer rather than an application server, so I get HTTP ERROR 404.

I'm pretty new to this, so any help would be appreciated. Thank you.

4. Error messages and/or full log output:

May 18 12:33:09 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877189.333417,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"www.lave.live","error":"no information found to solve ch>
May 18 12:33:09 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877189.334122,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"www.lave.live","error":"no information found to solve ch>
May 18 12:33:09 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877189.5672505,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"lave.live","error":"no information found to solve chall>
May 18 12:33:09 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877189.5677433,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"lave.live","error":"no information found to solve chall>
May 18 12:40:31 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877631.5532804,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"www.lave.live","error":"no information found to solve c>
May 18 12:40:31 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877631.5533555,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"www.lave.live","error":"no information found to solve c>
May 18 12:40:31 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877631.7869983,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"www.lave.live","error":"no information found to solve c>
May 18 12:40:31 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877631.7876177,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"www.lave.live","error":"no information found to solve c>
May 18 12:40:32 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877632.0230482,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"lave.live","error":"no information found to solve chall>
May 18 12:40:32 lave-load-balancer caddy[3189]: {"level":"error","ts":1652877632.02355,"logger":"tls.issuance.acme","msg":"looking up info for HTTP challenge","host":"lave.live","error":"no information found to solve challen>

5. What I already tried:

6. Links to relevant resources:

That’s a pretty old version. Please upgrade to v2.5.1!
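
If you installed Caddy from the official apt repository (an assumption, since your install method isn't mentioned), upgrading would look something like:

	sudo apt update
	sudo apt install --only-upgrade caddy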

Your logs are truncated (notice the > at the end of each line). Please use the command from the docs to read your logs:
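
Assuming Caddy is running as the stock systemd service (adjust the unit name if yours differs), it should be something like:

	journalctl -u caddy --no-pager | less +G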

Also, please make sure to read the template you filled out – you should use backticks to wrap your logs and config, otherwise formatting gets broken.

I don’t understand what you mean. Is Caddy your load balancer here, or do you have something in front of Caddy? Are you running more than one instance of Caddy? Can you explain in more detail what your setup is?


Thanks for the response, Francis.
I have updated to the latest version (2.5.1).

My current architecture has a load balancer, which is handled by nginx.

I want to use Caddy.

Please correct me if I am wrong, but this is how I did it:

I installed Caddy on my application servers and stopped nginx there. However, nginx still shows up as the server name in my response headers (I'm guessing because the load balancing is still done by nginx).

So my thinking is to install Caddy on my load balancer and let Caddy balance the traffic.

This is my load balancer config:

lave.live {
	reverse_proxy privateip:80 privateip:80 {
		lb_policy round_robin
		lb_try_duration 100ms
		lb_try_interval 250ms
	}
}

And this is the Caddy config on my application servers:
{
	on_demand_tls {
		ask https://lave.live/domain/verify
		interval 2m
		burst 5
	}
}

www.lave.live {
	redir https://lave.live{uri}
}

:80 {
	root * /home/forge/lave.live/public
	encode zstd gzip
	file_server
	php_fastcgi unix//var/run/php/php8.1-fpm.sock
}

:443 {
	tls {
		on_demand
	}

	root * /home/forge/lave.live/public
	encode zstd gzip
	file_server
	php_fastcgi unix//var/run/php/php8.1-fpm.sock
}

My issue now is that Caddy only works when I manually enter the domain name in the load balancer config.
Because my application accepts custom domains, how do I make my load balancer config accept all domain names as well as subdomains? PS: all custom domains point to my load balancer's IP.

Like I said earlier, please use backticks to wrap your configs when you post them on the forums. The formatting gets broken otherwise, and it's difficult to read.

You'd have to make your front instance of Caddy terminate TLS (i.e. do the on-demand stuff) and then proxy over HTTP to your two upstreams. Use http:// as the site address on your upstreams so that they don't try to match based on hostname.

Front:

{
	on_demand_tls {
		ask https://lave.live/domain/verify
	}
}

(proxy) {
	reverse_proxy backend-1:80 backend-2:80 {
		# whatever load balancing config
	}
}

lave.live {
	import proxy
}

https:// {
	tls {
		on_demand
	}

	import proxy
}

www.lave.live {
	redir https://lave.live{uri}
}

Backend:

http:// {
	root * /home/forge/lave.live/public
	encode zstd gzip
	php_fastcgi unix//var/run/php/php8.1-fpm.sock {
		# so that X-Forwarded-* headers are trusted
		trusted_proxies private_ranges
	}
	file_server
}

For the frontend, I’m using a snippet for the proxy and two separate site blocks, because otherwise there’d be a chicken-and-egg issue with the lave.live domain – the ask endpoint goes through lave.live, but since that’s being served by this same server, how can it ask if it can issue a cert for that domain? So you need to explicitly list that one as a non-on_demand domain so that it will work. Not super elegant but it should be fine.

You could avoid this issue by moving the domain verification endpoint into a simple script or service that runs on the front instance, so the ask request doesn't need to be proxied to one of your backends (which may or may not be available). Something like ask http://localhost:8080/domain/verify instead.

Note that technically, this approach using a snippet will actually cause there to be two reverse_proxy handlers, and they will have their own state. If you change ask, then you can simplify it back down to just one https:// site block.
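
For example, assuming the verification endpoint really is moved to a local service on the front instance (localhost:8080 here is just a placeholder address), the front config could collapse to something like:

{
	on_demand_tls {
		ask http://localhost:8080/domain/verify
	}
}

https:// {
	tls {
		on_demand
	}

	reverse_proxy backend-1:80 backend-2:80 {
		# whatever load balancing config
	}
}

www.lave.live {
	redir https://lave.live{uri}
}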


Thanks so much, Francis.
Your solution works like a charm.
Thanks

About Caddy load balancing:

This is my config:

reverse_proxy privateip:80 privateip:80 {
	lb_policy round_robin
	lb_try_duration 100ms
	lb_try_interval 250ms
}

I turned off one of my servers to see if Caddy would route traffic to the available server, but it sometimes still sends traffic to the down server.

I was assuming Caddy would only route to the available server.

Can you help me figure out what I'm doing wrong? Thanks.

A lb_try_duration of 100ms is way too short. This is a timer that starts when reverse_proxy first gets the request. As long as it’s been less time than this duration, Caddy will attempt a retry. Caddy’s default dial timeout for the HTTP transport is 3 seconds, so it might take up to 3 seconds before failing… which is well past 100ms.

You could try setting this to 5s or 10s.

Also, you need to enable passive or active health checking for your upstreams to be marked as unhealthy, otherwise Caddy will continue to try connecting to all the backends.
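
As a rough sketch (backend-1/backend-2 and the / health check path are placeholders for your actual upstreams and whatever cheap endpoint you want to probe), the proxy block could look something like:

reverse_proxy backend-1:80 backend-2:80 {
	lb_policy round_robin
	lb_try_duration 5s
	lb_try_interval 250ms

	# passive health checks: after a failed attempt, skip this upstream for a while
	fail_duration 30s
	max_fails 1

	# active health checks: probe each upstream on a schedule
	health_uri /
	health_interval 10s
	health_timeout 2s
}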

This topic was automatically closed after 30 days. New replies are no longer allowed.