I’m running several DigitalOcean droplets for different projects, and since I’m a poor student, I have to keep costs down.
FWIW, all of Caddy’s infrastructure runs on low-powered DigitalOcean droplets.
Unless you’re operating at massive scale, like a major CDN (even personal/private CDNs won’t have a problem), Caddy will probably run just fine for you on commodity or virtualized hardware.
Performance testing is complex and nuanced, and whole books have been written on the subject. Web servers are especially difficult because there are so many layers, abstractions, protocols, and hidden variables, which you don’t find in domains where performance testing is truly crucial, like optimizing crypto routines at the processor-instruction level.
Frankly, I advise against sinking time into any sort of benchmarking unless you know what you’re doing and have very specific requirements and goals. Even then, avoid drawing conclusions from any single benchmark until proper, thorough profiling is completed.
I almost never see web server performance testing done properly. Pages of tables and percentages don’t compensate for an incomplete understanding of the stack or for unaccounted hidden variables, even though the results may look impressive and thorough. I always finish reading performance tests of web servers asking, “What about this or that?” I never really find them satisfying.
What is satisfying, though, is setting up the software I’m trialing, configuring it, and starting to use it – in a test or staging environment if needed – and then, if something runs too slowly, doing some true profiling to find the bottleneck and see how to fix it.
I did it this week, in fact, testing Caddy 2’s new brotli encoder (spoiler: it’s really slow). Through proper debugging and profiling, it became very clear that encoding responses using brotli in real-time on a single-core machine with this particular pure-Go library (that was directly converted from C, so is very inefficient as a Go program, but is at least pure Go) is not going to work for my case.
And then the key is to avoid extrapolating beyond that result, or applying it to someone else’s situation and drawing a conclusion like, “Therefore, we see that _ is slower than _,” because too much gets lost out of context. The author of that brotli library is apparently quite happy with its performance, since he uses it in his own projects. (And of course, brotli’s compression algorithm is slow/complex in theory too—which is another point to consider sometimes—but I wanted to hook it up anyway and see what happens.) From my test, I can’t tell you that brotli is slow and you shouldn’t use it. I can’t make recommendations, because the number of variables is too large.
Anyway, Caddy is fast enough for most (read: really really most, like, probably 99%+ of) users, and especially as a student, I suspect almost any web server would be fast enough for your needs. No DigitalOcean droplet is that constrained on memory unless you’re operating some in-memory cache, running a busy database, or doing some other heavy lifting on each request. In which case: you don’t have a web server problem, you have an architecture/design/configuration problem.
So to answer your question:
There aren’t any reliable sources.
Try it out! I dare you to switch.