Caddy is written in Go. It will perform just as well as any other Go web server.
Google, Netflix, Cloudflare, Fastly, and other large network companies deploy Go on their edge. You can too.
Here are a couple of observations:
Side anecdote: did some benchmarking a week or two ago with Nginx; ~1k req/s pegged an 8-core machine at 100%.
Out of the box, Caddy did 1k req/s with ~5% CPU load. We were even questioning whether the test was running.
We ran 20,000 clients per second (over 15 seconds) against our reverse proxy. It touched about 20% CPU, and the bottleneck was cold starts on our serverless infrastructure, haha. Caddy did just fine. More than good enough for us. We're not quite doing 72,000,000 page views per hour.
In 2020, your web server will not be your bottleneck. Heck, Python's SimpleHTTPServer is probably fast enough for you. Your real bottlenecks are going to be network, storage/disk, (lack of) hardware acceleration for crypto instructions, and memory/CPU as the number of sites and certificates grows with a growing customer base – those kinds of things.
Benchmarks don't generalize well. Only rigorous profiling is helpful in making performance optimizations (see how the Go team improves standard library performance in their CLs). All performance results depend on hardware, virtualization, system tuning, OS, software updates, power supply, ambient temperature, monitors plugged in, and butterflies near the South Pole. There are thousands – millions – of parameters that affect benchmarks, which makes them largely futile. (Web servers are complicated.) It's not like benchmarking a Fibonacci tail call or something.
Deploy Caddy and see how it fares for you. Chances are it will perform as well as or better than nginx or Apache. (Plus, you get the memory safety of a Go program rather than a fragile C program.)
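If you want to try this yourself, a minimal Caddyfile is all it takes to put Caddy in front of an app as a reverse proxy (the hostname and upstream port below are placeholders for your own setup):

```caddyfile
example.com {
	# Forward requests to your backend; Caddy handles
	# HTTPS certificates for example.com automatically.
	reverse_proxy localhost:8080
}
```

Run `caddy run` in the same directory and measure under your own traffic – that tells you more than any published benchmark.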