1. Caddy version (caddy version):
v2.5.1 h1:bAWwslD1jNeCzDa+jDCNwb8M3UJ2tPa8UZFFzPVmGKs=
2. How I run Caddy:
a. System environment:
Ubuntu 20.04, installed from the apt package (copy-paste installation from the documentation)
b. Command:
vim /etc/caddy/Caddyfile (Caddy itself runs via the apt package's systemd service)
c. Service/unit/compose file:
d. My complete Caddyfile or JSON config:
{
	# Enable debug mode
	# debug

	# Disable admin API
	admin off
}

application1.domain {
	header {
		# Hide "Server: Caddy"
		-Server
	}

	# https://caddyserver.com/docs/caddyfile/directives/log
	log {
		output file /var/log/caddy/app1.log
		format console
	}

	# https://caddyserver.com/docs/caddyfile/directives/reverse_proxy
	tls internal
	reverse_proxy * {
		to 10.1.0.10:8080 10.1.0.11:8080
		lb_policy cookie
		lb_try_duration 1s
		lb_try_interval 250ms
		health_uri /status   # backend health check path
		health_interval 10s
		health_timeout 2s
		health_status 200
	}
}

application2.domain {
	header {
		# Hide "Server: Caddy"
		-Server
	}

	# https://caddyserver.com/docs/caddyfile/directives/log
	log {
		output file /var/log/caddy/app2.log
		format console
	}

	# https://caddyserver.com/docs/caddyfile/directives/reverse_proxy
	tls internal
	reverse_proxy * {
		to 10.1.0.7:8090 10.1.0.8:8090
		lb_policy cookie
		lb_try_duration 1s
		lb_try_interval 250ms
		health_uri /status   # backend health check path
		health_interval 10s
		health_timeout 2s
		health_status 200
	}
}
3. The problem I’m having:
Hi everyone,
I have more of a best-practice question.
We are using Caddy for reverse proxying / load balancing / sticky sessions across redundant backend systems (the same application running in parallel on multiple servers as a failover).
Before Caddy, we used Apache for this.
With Apache, we set a cookie whose value was a routing ID for a specific backend,
e.g. routeid=backend01 or routeid=backend02.
This was practical, since we have several use cases where we need to circumvent the active load balancing and reach a specific node directly, without changing the Caddy config (active users are working on each backend instance):
First example: we use Ansible to update single nodes, and before putting a node back into rotation we check directly whether the application / web frontend is fully available again. That is only possible if we are not directed to the other instance.
Second example: debugging an application issue on one specific node (backend01 runs fine, backend02 is unstable and we need to check it through the frontend).
So, the question: how could I create the same option with Caddy as the web server, i.e. force the routing to a specific node on demand?
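To illustrate the kind of override I mean: in Caddyfile terms, I imagine it could look something like a named request matcher on a custom header that bypasses the load balancer. This is only a sketch for application1.domain; the X-Backend header name and its values are made up for illustration, not something we use today:

```caddyfile
application1.domain {
	# Hypothetical on-demand override: requests carrying this header
	# skip the load balancer and hit one node directly
	@force_b01 header X-Backend backend01
	@force_b02 header X-Backend backend02

	handle @force_b01 {
		reverse_proxy 10.1.0.10:8080
	}
	handle @force_b02 {
		reverse_proxy 10.1.0.11:8080
	}

	# Everyone else gets the normal sticky load balancing
	handle {
		reverse_proxy 10.1.0.10:8080 10.1.0.11:8080 {
			lb_policy cookie
		}
	}
}
```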
If i can provide further information, let me know.
Thanks in advance and best regards
Alcesh
4. Error messages and/or full log output:
None…
5. What I already tried:
Using lb_policy cookie and experimenting with different cookie contents.
Reading the documentation on all the load-balancing options and on possible alternatives such as header manipulation.
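On the cookie angle: as far as I can tell from Caddy's source, the value the cookie policy stores is the hex-encoded HMAC-SHA256 of the upstream's dial address, keyed with the configured secret (empty when no secret is set). If that is correct, the cookie value for a specific backend could be precomputed and set in the browser manually. A sketch under that assumption:

```python
import hashlib
import hmac

def caddy_lb_cookie_value(dial_addr: str, secret: str = "") -> str:
    """Reproduce the value Caddy's lb_policy cookie appears to store
    for an upstream: hex(HMAC-SHA256(secret, dial_addr)).
    This assumes the behavior observed in Caddy's source; verify
    against your running version before relying on it."""
    return hmac.new(secret.encode(), dial_addr.encode(), hashlib.sha256).hexdigest()

# Setting the cookie (default name "lb") to this value in the browser
# should then pin requests to 10.1.0.10:8080.
print(caddy_lb_cookie_value("10.1.0.10:8080"))
```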