Caddy v2 HTTP/2 & HTTP/3 Benchmarks

This is my first attempt at using the newer Caddy v2 server, so I thought I’d run some quick HTTP/2 & HTTP/3 benchmarks against my Nginx HTTP/2 & HTTP/3 Cloudflare Quiche-patched servers to see where performance is at. The full write-up and system/config details are at GitHub - centminmod/centminmod-caddy-v2

Test Parameters

Using the h2load tester:

  • h2load HTTP/2 HTTPS load tests at 150, 500 and 1,000 user concurrency at different number of requests and max concurrent stream parameters
  • h2load HTTP/3 HTTPS load tests at 150, 500 and 1,000 user concurrency at different number of requests and max concurrent stream parameters
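The bullet points above map onto h2load's flags (t = threads, c = clients, n = total requests, m = max concurrent streams per client). The exact invocations aren't shown in the post, so this is a hedged reconstruction with a placeholder URL and parameter sets inferred from the result labels in the tables below:

```shell
# Hedged reconstruction of the h2load invocations; the target URL is a
# placeholder and the parameter sets are inferred from the result labels.
URL="https://caddy.domain.com/caddy-index.html"
LAST_CMD=""
for params in "-t1 -c150 -n1000 -m50" "-t1 -c500 -n2000 -m100" "-t1 -c1000 -n10000 -m100"; do
  LAST_CMD="h2load $params $URL"
  echo "$LAST_CMD"
done
```

The HTTP/3 runs would use the same parameter sets with the separately built `h2load-http3` binary shown later in the post.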

Caddy v2 keeled over at the 1,000 user concurrency mark for both the h2load HTTP/2 and HTTP/3 load tests, while Nginx handled them fine, both on the same VirtualBox CentOS 7.8 guest OS environment.

Just the tabulated results are below:

HTTP/2 HTTPS Benchmarks

| server | h2load params | requests/s | ttfb min | ttfb avg | ttfb max | cipher | protocol | successful req | failed req |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| caddy v2 | t1 c150 n1000 m50 | 959.57 | 213.30ms | 696.74ms | 1.03s | ECDHE-ECDSA-AES256-GCM-SHA384 | h2 TLSv1.2 | 100% | 0% |
| caddy v2 | t1 c500 n2000 m100 | 990.03 | 711.60ms | 1.36s | 1.98s | ECDHE-ECDSA-AES256-GCM-SHA384 | h2 TLSv1.2 | 100% | 0% |
| caddy v2 | t1 c1000 n10000 m100 | 1049.00 | 965.65ms | 3.34s | 6.53s | ECDHE-ECDSA-AES256-GCM-SHA384 | h2 TLSv1.2 | 68.89% | 31.11% |
| nginx 1.17.10 | t1 c150 n1000 m50 | 2224.74 | 158.04ms | 300.22ms | 440.22ms | ECDHE-ECDSA-AES128-GCM-SHA256 | h2 TLSv1.2 | 100% | 0% |
| nginx 1.17.10 | t1 c500 n2000 m100 | 1600.52 | 583.80ms | 861.70ms | 1.23s | ECDHE-ECDSA-AES128-GCM-SHA256 | h2 TLSv1.2 | 100% | 0% |
| nginx 1.17.10 | t1 c1000 n10000 m100 | 1912.05 | 949.61ms | 2.98s | 5.16s | ECDHE-ECDSA-AES128-GCM-SHA256 | h2 TLSv1.2 | 100% | 0% |
HTTP/3 HTTPS Benchmarks

| server | h2load params | requests/s | ttfb min | ttfb avg | ttfb max | cipher | protocol | successful req | failed req |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| caddy v2 | t1 c150 n1000 m50 | 594.60 | 230.02ms | 590.87ms | 1.65s | TLS_AES_128_GCM_SHA256 | h3-27 TLSv1.3 | 100% | 0% |
| caddy v2 | t1 c500 n2000 m100 | 333.49 | 353.84ms | 1.89s | 5.98s | TLS_AES_128_GCM_SHA256 | h3-27 TLSv1.3 | 100% | 0% |
| caddy v2 | t1 c1000 n10000 m100 | failed | failed | failed | failed | - | - | 0% | 100% |
| nginx 1.16.1 patch | t1 c150 n1000 m50 | 1856.67 | 213.84ms | 333.80ms | 510.07ms | TLS_AES_128_GCM_SHA256 | h3-27 TLSv1.3 | 100% | 0% |
| nginx 1.16.1 patch | t1 c500 n2000 m100 | 842.13 | 325.78ms | 834.50ms | 1.59s | TLS_AES_128_GCM_SHA256 | h3-27 TLSv1.3 | 100% | 0% |
| nginx 1.16.1 patch | t1 c1000 n10000 m100 | 847.89 | 657.03ms | 3.58s | 9.09s | TLS_AES_128_GCM_SHA256 | h3-27 TLSv1.3 | 100% | 0% |
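As a quick sanity check on the size of the gap, the c150 requests/s figures from the two tables work out to roughly a 2.3x (HTTP/2) and 3.1x (HTTP/3) throughput advantage for Nginx on this setup; the arithmetic:

```shell
# Nginx-to-Caddy requests/s ratios at c150, using the tabulated figures above.
h2_ratio=$(awk 'BEGIN { printf "%.2f", 2224.74 / 959.57 }')   # HTTP/2 row pair
h3_ratio=$(awk 'BEGIN { printf "%.2f", 1856.67 / 594.60 }')   # HTTP/3 row pair
echo "HTTP/2 c150: nginx is ${h2_ratio}x caddy"
echo "HTTP/3 c150: nginx is ${h3_ratio}x caddy"
```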

Testing the ngx.domain.com site over curl built with HTTP/3 support using the Cloudflare Quiche library:

curl-http3 --http3 -skD - -H "Accept-Encoding: gzip" https://ngx.domain.com/caddy-index.html -o /dev/null          
HTTP/3 200
date: Sat, 09 May 2020 15:01:22 GMT
content-type: text/html; charset=utf-8
last-modified: Wed, 06 May 2020 18:44:09 GMT
vary: Accept-Encoding
etag: W/"5eb30579-2fc2"
server: nginx centminmod
x-powered-by: centminmod
alt-svc: h3-27=":443"; ma=86400
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
content-encoding: gzip
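The alt-svc header in the response above is what advertises HTTP/3 support to clients (h3-27 is QUIC/HTTP-3 draft 27; ma is the advertisement lifetime in seconds). A minimal parse of that header value, copied verbatim from the output rather than fetched live:

```shell
# Parse the alt-svc value from the nginx response above.
altsvc='h3-27=":443"; ma=86400'
proto=${altsvc%%=*}                                              # advertised protocol id
port=$(printf '%s' "$altsvc" | sed 's/.*":\([0-9]*\)".*/\1/')    # advertised UDP port
ma=$(printf '%s' "$altsvc" | sed 's/.*ma=\([0-9]*\).*/\1/')      # max-age in seconds
echo "protocol=$proto port=$port max-age=$ma"
```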

Caddy v2 HTTP/3 with experimental_http3 enabled

curl-http3 --http3 -skD - -H "Accept-Encoding: gzip" https://caddy.domain.com:4444/caddy-index.html -o /dev/null
HTTP/3 200
x-xss-protection: 1; mode=block
etag: "q9xapl9fm"
content-type: text/html; charset=utf-8
last-modified: Wed, 06 May 2020 18:44:09 GMT
content-encoding: gzip
x-powered-by: caddy centminmod
alt-svc: h3-27=":4444"; ma=2592000
x-content-type-options: nosniff
vary: Accept-Encoding
server: Caddy

For the h2load HTTP/3 tests:

h2load-http3 --version
h2load nghttp2/1.41.0-DEV

For curl:

curl-http3 -V
curl 7.71.0-DEV (x86_64-pc-linux-gnu) libcurl/7.71.0-DEV BoringSSL zlib/1.2.11 brotli/1.0.7 libidn2/2.0.5 libpsl/0.20.2 (+libidn2/2.0.5) libssh2/1.8.0 nghttp2/1.36.0 quiche/0.3.0
Release-Date: [unreleased]
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp 
Features: alt-svc AsynchDNS brotli HTTP2 HTTP3 HTTPS-proxy IDN IPv6 Largefile libz NTLM NTLM_WB PSL SSL UnixSockets
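Before using --http3, it's worth confirming the curl build actually lists HTTP3 among its features. A minimal check against the feature line shown above (the string is copied from the output rather than queried live, since the custom curl-http3 binary isn't assumed to be installed):

```shell
# Feature line copied from the `curl-http3 -V` output above.
features="alt-svc AsynchDNS brotli HTTP2 HTTP3 HTTPS-proxy IDN IPv6 Largefile libz NTLM NTLM_WB PSL SSL UnixSockets"
case " $features " in
  *" HTTP3 "*) result="HTTP3 supported" ;;
  *)           result="HTTP3 missing" ;;
esac
echo "$result"
```

A live check would pipe `curl-http3 -V` through the same test instead of the hard-coded string.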

A chart without CPU load and memory usage is not a useful measurement; I’d like to see those added.

I’ll do that later when I move testing to a proper server instead of VirtualBox on my laptop. I’m just using VirtualBox to learn the ropes with the new Caddy v2 first.

I didn’t really intend to do resource usage tests until I moved off the VirtualBox/laptop setup to a proper server, but I did a quick HTTP/3 HTTPS test anyway. I had to change the h2load test parameters from request-based to duration-based tests to more accurately measure resource usage over time.
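For the duration-based runs mentioned above, h2load supports -D/--duration (run for N seconds instead of a fixed -n request count) together with --warm-up-time. A hedged sketch of such an invocation; the URL and durations are placeholders, not taken from the post:

```shell
# Hedged sketch: duration-based h2load run instead of a fixed request count.
# URL and timing values are illustrative placeholders.
CMD="h2load -t1 -c500 -m100 --duration=60 --warm-up-time=5 https://caddy.domain.com/caddy-index.html"
echo "$CMD"
```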

Full results and resource usage charts at:


Great results!

Since eva2000 has yet to benchmark against default Nginx settings, the conclusion is that Centminmod (having to compile source code takes time?) is like a car that has been modified with a nitro booster on board, which will obviously be faster than Caddy with default settings.

As the comparison isn’t fair except between default setups: Caddy is fast with a much simpler setup, could be even faster (lower network latency) with a Go HTTP web server or FastHTTP, and still wins on being less complex. In my opinion, Go is both simple and efficient.

You’re free to do your own default Nginx tests. I’ve never run any web server with its defaults - my eventual aim is always to run with better-than-default settings for any web server I end up using. It’s best to test and evaluate web servers based on your specific usage requirements. That’s why I am testing Caddy v2 and playing with its settings to see what can be tuned beyond the defaults.

I’m evaluating Caddy for integration into my LEMP stack along with other web servers, so it’s a valid test for my usage - where I need a web server to scale and be performant under high-concurrency workloads. I will also compile my own custom Caddy binaries, though for v2, custom binaries don’t get much of a boost in performance compared to custom Caddy 0.10/0.11 and v1.x binaries.

Just curious George, would you ever be interested in contributing optimizations to the code base?

Unfortunately I’m not a Golang coder so I wouldn’t be able to help - I’m just a high-level consumer/user :smiley:
