I’m configuring Caddy to act as a public-facing proxy server for my websites. The architecture is like this:
Client → Caddy (acting as proxy and SSL provider) → Varnish caching → Load balancer for my Kubernetes cluster
Given that every request will hit Caddy, what’s an optimal server specification (vCPU/RAM) for Caddy to handle a large number of requests without slowing down?
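For reference, a minimal sketch of the front of that chain in Caddy v1 syntax (the era this thread dates from) might look like the following. The hostname and the Varnish listen address are placeholders, not part of the original question:

```
# Hypothetical Caddyfile: Caddy terminates TLS for example.com and
# proxies to a local Varnish instance, which fronts the k8s load balancer.
example.com {
    proxy / localhost:6081 {
        # "transparent" passes Host, X-Real-IP, X-Forwarded-For and
        # X-Forwarded-Proto upstream so Varnish sees the real client.
        transparent
    }
}
```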
Gosh, this is an incredibly nuanced question with a huge number of variables to consider.
Caddy is pretty fast. We’ve had some favourable benchmarks posted here over the years. It’s nowhere near as popular as the big two, though, so we don’t really benefit from huge numbers of people testing and determining the answer to this kind of question in a variety of environments. If someone has, I can’t recall them posting it.
Do you have a target load (i.e. what do you consider “a large number” of requests)? Your best bet to get useful information is to grab a load tester like h2load, throw Caddy on a resource-scalable VM, and start ramping up to your target load to see how it performs. Scale it up if it starts slowing down until you find a good equilibrium.
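To make the ramp-up idea concrete, here is a hedged sketch that only prints the h2load invocations you might run at each step (the URL, request count, and client counts are placeholders, not recommendations):

```shell
# Print a ramp-up plan: double the number of concurrent clients each step.
# -n = total requests, -c = concurrent clients, -m = max streams per client.
for c in 25 50 100 200 400; do
  cmd="h2load -n 100000 -c $c -m 10 https://example.com/"
  echo "$cmd"   # run each step for real once your test target is reachable
done
```

Watch requests/sec and tail latency at each step; the point where they stop scaling is where you size the VM up.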
Thanks for the suggestion, Matthew.
I was asking because I was planning to purchase some reserved instances on AWS for my Caddy deployments.
I suppose the safest thing right now is to use an on-demand instance to find the server specs where Caddy works best for my use case, as you suggested.
Yeah, you’ll just have to see what your individual needs require.
Be sure to test Varnish Cache with both HTTP/1.1 and HTTP/2 to see whether it still suffers from the HTTP/2 thread starvation bug (Caddy 0.10.9 + Varnish Cache 5.2 HTTP/2 thread starvation bug) and whether it actually improves your performance, etc.
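One way to run that comparison is to repeat the same h2load run twice, once over HTTP/2 (the default) and once forced to HTTP/1.1 with `--h1`. A sketch with placeholder URL and counts:

```shell
# Same load, two protocols, so the results can be compared side by side.
h2_cmd="h2load -n 50000 -c 100 https://example.com/"       # HTTP/2 (default)
h1_cmd="h2load -n 50000 -c 100 --h1 https://example.com/"  # forced HTTP/1.1
echo "$h2_cmd"
echo "$h1_cmd"
```

If HTTP/2 throughput collapses while HTTP/1.1 holds up under the same load, that points at the starvation behaviour described in the linked topic.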