1. The problem I’m having:
Hi all. Thanks to the help here I got my Caddy server up and running well as a reverse proxy. I'm now running into an issue with my LibreSpeedRS server: I'm seeing a 50%+ hit on throughput when going through Caddy. Hitting the site directly via the IP gets me full (2.5G) throughput. I figured HTTPS would have some overhead, but this seems like a bit much.
2. Error messages and/or full log output:
No errors and I’m not sure where to pull logs from.
3. Caddy version:
v2.10.2
4. How I installed and ran Caddy:
a. System environment:
Debian LXC
d. My complete Caddy config:
{
	acme_dns cloudflare {env.CF_API_TOKEN}
}

https://pvep.7263377.xyz {
	reverse_proxy https://192.168.31.10:8006 {
		transport http {
			tls_insecure_skip_verify
		}
	}
}

librespeed-pves.7263377.xyz {
	reverse_proxy 192.168.31.15:80
}
I updated my config to use http instead of https and speeds improved, but they are still well off what they should be. Using the IP (bypassing Caddy) gets me about 2500 Mb/s up and down. HTTP via Caddy gets me about 1800 Mb/s up and down, and HTTPS via Caddy about 1700 Mb/s down and 1000 Mb/s up.
Those speeds seem great tbh. Caddy is not optimized for throughput. We don't have kTLS (kernel TLS) yet because Go doesn't support it, so TLS overhead is quite a bit higher than it could be. On top of that, having a handler chain and no kTLS means we can't use the sendfile() syscall, which would be way faster.
You could test with Go's stock reverse proxy (i.e. httputil.NewSingleHostReverseProxy) and see how that looks. We're doing basically the same as that, but with a bit more overhead because of the modularity of Caddy.
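That baseline takes only a few lines. This is a self-contained sketch, with a throwaway httptest backend standing in for the real LibreSpeed server; to benchmark for real you would point the proxy at http://192.168.31.15 and fetch a large payload through it.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// fetchViaStockProxy spins up a throwaway backend that serves body,
// fronts it with Go's stock single-host reverse proxy, and returns
// what a client sees through the proxy.
func fetchViaStockProxy(body string) (string, error) {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, body)
	}))
	defer backend.Close()

	target, err := url.Parse(backend.URL)
	if err != nil {
		return "", err
	}
	// The stock proxy: one bare handler, no middleware chain.
	proxy := httputil.NewSingleHostReverseProxy(target)

	front := httptest.NewServer(proxy)
	defer front.Close()

	resp, err := http.Get(front.URL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	out, err := fetchViaStockProxy("hello from backend")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints "hello from backend"
}
```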
Hi @francislavoie. Thanks for the reply. Just so I can get a better understanding (I'm very much an internet protocol noob): is a 60% decrease expected, or is it that speeds this high are just too much for Caddy and/or HTTPS?
Caddy is just not optimized for that task. Other servers can reach higher peak throughput because they're written in lower-level languages that allow access to kTLS, sendfile, and other fancy tricks. Caddy is built with Go, using Go's HTTP and TLS stack, which is both a good thing and a bad thing: good because we get a lot of incredibly good functionality "for free" with memory safety guarantees, and it lets us write a convenient, easy-to-use modular config layer, TLS automation, etc.; bad because it means somewhat less control at the low level to reach peak throughput.
Caddy has really good concurrency (like handling a lot of separate requests simultaneously), but single requests with high bandwidth are not its strongest point.