Brotli on the fly

Caddy is a very good HTTP server, and it also works well as a proxy server.
Is there any plan to support brotli on-the-fly compression, like gzip?

Not that I know of. How concerned are you about performance?

Thank you for your reply.
I’m using Caddy as a reverse proxy in front of Apache.
Caddy with gzip (on the fly) gives better response times than mod_deflate for content dynamically generated by PHP. (In exchange, Content-Length is not set in the header.)
I think Caddy may achieve better response times with brotli as well, because Caddy can compress the stream in parallel with the PHP processes that are still generating output, whereas mod_brotli (and mod_deflate) appear to compress only after all content has been generated.
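Roughly the shape of what I mean, as a sketch only (not Caddy’s actual code), using gzip from the Go standard library: the proxy copies the upstream body through a compressor as it arrives, so compression overlaps with content generation and no Content-Length can be set up front.

```go
package main

import (
	"compress/gzip"
	"io"
	"net/http"
)

// proxyCompressed compresses the upstream body chunk by chunk as it arrives,
// so compression overlaps with content generation on the backend. Because the
// final compressed size is unknown, no Content-Length is set and the response
// is streamed. (A real proxy would also copy upstream headers and status;
// omitted for brevity. The backend address below is an assumption.)
func proxyCompressed(w http.ResponseWriter, r *http.Request) {
	upstream, err := http.Get("http://127.0.0.1:8080" + r.URL.Path)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer upstream.Body.Close()

	w.Header().Set("Content-Encoding", "gzip")
	gz := gzip.NewWriter(w)
	defer gz.Close()

	// io.Copy reads from the upstream as it produces output and writes
	// compressed bytes immediately, rather than buffering the whole response.
	io.Copy(gz, upstream.Body)
}

func main() {
	http.HandleFunc("/", proxyCompressed)
	http.ListenAndServe(":9090", nil)
}
```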

The performance boost could be huge!

I would definitely like to see Caddy support brotli compression. On newer CPUs like Intel Skylake, brotli-based HTTP/2 HTTPS performance really gets a boost compared to gzip, at least in my Nginx tests.

h2load HTTP/2 HTTPS benchmarks with Nginx: gzip vs brotli, with different compilers and RSA 2048-bit vs ECDSA 256-bit SSL certs

Last action on the Brotli implementation front was this issue, if I’m not mistaken: https://github.com/mholt/caddy/issues/525

Could be worth revisiting, but I don’t know if there’s a pure Go brotli encoder available at the moment.

What are your benchmarks even measuring? Caddy does serve up brotli-compressed content, just not on-the-fly like gzip, because brotli compression is slower.
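For context, serving precompressed brotli looks roughly like this (a simplified sketch, not Caddy’s actual code): if the client advertises br support and a .br sibling of the requested file exists, it is sent as-is, with no compression done at request time.

```go
package main

import (
	"io"
	"mime"
	"net/http"
	"os"
	"path/filepath"
	"strings"
)

// serveMaybeBrotli serves a precompressed sibling file (e.g. index.html.br)
// with Content-Encoding: br when the client accepts it, otherwise falls back
// to the plain file. Nothing is compressed at request time.
func serveMaybeBrotli(w http.ResponseWriter, r *http.Request, path string) {
	if strings.Contains(r.Header.Get("Accept-Encoding"), "br") {
		if f, err := os.Open(path + ".br"); err == nil {
			defer f.Close()
			w.Header().Set("Content-Encoding", "br")
			if ct := mime.TypeByExtension(filepath.Ext(path)); ct != "" {
				w.Header().Set("Content-Type", ct)
			}
			io.Copy(w, f)
			return
		}
	}
	http.ServeFile(w, r, path)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		serveMaybeBrotli(w, r, "./site/index.html") // assumed file layout
	})
	http.ListenAndServe(":8080", nil)
}
```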

It was asserted otherwise in the issue thread: “Implement brotli compression data-format for faster loading of websites” (caddyserver/caddy#525 on GitHub).

So this might not be the case, assuming we go with the sane middle ground for the default compression level.

@eva2000: That graph does seem to indicate brotli as capable of serving more requests per second than gzip, but to clarify, is that doing the compression on the fly for both gzip and brotli? Can you give us more information, like what gzip and brotli’s settings were? There’s a fair bit missing to draw conclusions.

OK, the above tests were comparing on-the-fly gzip vs on-the-fly brotli compression, where Nginx 1.15.8 was used with:

  • the Cloudflare zlib performance fork, which is already up to 40% faster than the standard zlib library for gzip-encoded HTTP/HTTPS requests. It was set to the level 5 compression default for Nginx, with gzip_static enabled so that both on-the-fly and static precompressed gzip are supported; and
  • brotli 1.0.7, built via the ngx_brotli Nginx module, specifically the more up-to-date fork of Google’s original ngx_brotli module that is maintained and developed by one of the original Google Brotli authors, Eugene Kliuchnikov, at eustas/ngx_brotli. That fork uses brotli 1.0.4, but I updated it to the latest brotli 1.0.7 in my builds. ngx_brotli was built as a dynamic Nginx module with level 5 compression defaults for Nginx and brotli_static enabled, to support both on-the-fly and static precompressed brotli compression.

The reason I specifically mentioned new CPUs like Intel Skylake is that in the past, my gzip-vs-brotli on-the-fly HTTP/2 HTTPS h2load benchmarks showed brotli being slower in tests like the above. Those older tests were done on an older Intel Core i7 4790K Haswell-based CPU at https://community.centminmod.com/threads/nginx-with-cloudflare-zlib-fork-vs-nxg_brotli-compression-level-tests.13820/#post-63601, where brotli on-the-fly (dynamic) performance was roughly 1/5th of the on-the-fly (dynamic) gzip performance using the Cloudflare performance-forked zlib library. However, precompressed brotli was still faster than precompressed gzip in those tests on that specific CPU.

Old CPU tests (cf fork = Cloudflare zlib library):

| Config | Compressed size | Req/s | Avg latency | Max latency |
| --- | --- | --- | --- | --- |
| Centmin Mod Nginx zlib level 5 pre-compress static (cf fork) | 3.79KB | 72443 | 3.27ms | 32.38ms |
| Centmin Mod Nginx zlib level 6 pre-compress static (cf fork) | 3.79KB | 71905 | 3.36ms | 50.43ms |
| Centmin Mod Nginx brotli level 5 pre-compress static | 3.38KB | 84643 | 2.76ms | 39.79ms |
| Centmin Mod Nginx brotli level 6 pre-compress static | 3.38KB | 84975 | 2.96ms | 87.54ms |
| Centmin Mod Nginx zlib level 5 dynamic (cf fork) | 4.12KB | 21906 | 9.07ms | 74.32ms |
| Centmin Mod Nginx zlib level 6 dynamic (cf fork) | 4.00KB | 16997 | 11.67ms | 49.06ms |
| Centmin Mod Nginx brotli level 5 dynamic | 3.93KB | 5060 | 38.61ms | 167.43ms |
| Centmin Mod Nginx brotli level 6 dynamic | 3.66KB | 4875 | 40.12ms | 119.51ms |

Now, the above charted tests show a different picture. I have repeated the tests on several Intel Skylake CPUs, and all of them show brotli faster than gzip for on-the-fly compressed HTTP/2 HTTPS h2load benchmarks as well. So it could be that Intel Skylake CPUs have optimisations allowing brotli to perform better, or it could be because of the newer Nginx 1.15.8 and the newer brotli 1.0.7 library that ngx_brotli was built against in the above charted tests. For now, in my own Centmin Mod Nginx builds, I automatically enable ngx_brotli support when Intel Skylake CPUs are detected, and leave the choice of enabling ngx_brotli to end users when other CPUs are detected.

Just food for thought if you do decide to add brotli on-the-fly support to Caddy as well in the future :slight_smile:


@matt @Whitestrake my bad, I retested and it seems the brotli boost wasn’t due to the CPU after all, but to an ngx_brotli setting I had set incorrectly. The assumption that Intel Skylake CPUs accelerate brotli compression was flawed; in fact it was an incorrect ngx_brotli setting in the Centmin Mod Nginx default configuration, as explained here. Basically, I had incorrectly configured ngx_brotli not to compress responses under 65536 bytes :sweat_smile: Correcting this shows that brotli is indeed still slower than gzip for on-the-fly compression, for Nginx at least, heh. How much slower it is in the latest versions of Caddy is another matter that probably should be tested.
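To make the mistake concrete, the effect of such a minimum-length setting is roughly this (a sketch; the constant and function names are mine, not ngx_brotli’s actual directives):

```go
package main

import "fmt"

// Responses shorter than the threshold skip compression entirely, which is
// why a 65536-byte threshold made the ~4 KB benchmark responses look much
// faster "with brotli": brotli simply never ran on them.
const brotliMinLength = 65536 // bytes; a sane default would be far smaller

func shouldCompress(contentLength int) bool {
	return contentLength >= brotliMinLength
}

func main() {
	fmt.Println(shouldCompress(4096))  // false: a ~4 KB page is sent uncompressed
	fmt.Println(shouldCompress(70000)) // true
}
```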

Just gotta find a pure Go brotli implementation to import, or write a new one, for Caddy :+1:
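For what it’s worth, a pure-Go port does exist at github.com/andybalholm/brotli. A minimal sketch of what on-the-fly brotli middleware could look like (a sketch only, not Caddy’s actual implementation; the middleware name and the compression level are assumptions):

```go
package main

import (
	"io"
	"net/http"
	"strings"

	"github.com/andybalholm/brotli" // pure-Go brotli port
)

// brotliResponseWriter funnels the response body through a brotli encoder.
type brotliResponseWriter struct {
	http.ResponseWriter
	bw *brotli.Writer
}

func (b *brotliResponseWriter) Write(p []byte) (int, error) { return b.bw.Write(p) }

// withBrotli is a hypothetical middleware: it compresses on the fly only when
// the client advertises "br" in Accept-Encoding.
func withBrotli(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "br") {
			next.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "br")
		w.Header().Del("Content-Length") // compressed length is unknown up front
		bw := brotli.NewWriterLevel(w, 5) // level 5, roughly the middle ground discussed above
		defer bw.Close()
		next.ServeHTTP(&brotliResponseWriter{ResponseWriter: w, bw: bw}, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, strings.Repeat("hello, brotli! ", 200))
	})
	http.ListenAndServe(":8080", withBrotli(mux))
}
```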


Thanks, everyone.
I understand now that brotli is slower than the optimized gzip.
I still expect that Caddy with brotli in front of Apache would do better than Apache with brotli (mod_brotli) alone.

If Caddy could compress the content stream, the compression in Caddy would run (partially) in parallel, in another thread, with the plain output coming from Apache, whereas mod_brotli seems to compress only after all content has been output. (The fact that a Caddy+gzip response doesn’t contain a Content-Length header suggests Caddy already streams this way.)

Anyway, brotli on the fly may not be a good idea if the compression is that slow, but it could still be useful in a proxy sitting in front of some heavy CMSs.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.