Gzip performance; precompressed content

Hi Caddy people!

Is there any data on Caddy’s on-the-fly gzip performance? From the source it looks like it’s using the standard library’s compress/gzip package, which seems ripe for optimization according to these results.

The other side is that I’d love to have the native ability to serve precompressed files in Caddy, like the gzip static module for nginx. I don’t like making the server do anything that could’ve been done offline, and done better. I can get significantly better gzip compression using zopfli or the new libdeflate than I can typically get on a server.

I can test the gzip performance on my own if there’s nothing out there – I just thought this might have been done already by more talented people than myself. Granted, it might not be a big issue, but I just hate wasting CPU and energy.

This functionality sounds somewhat specialized, perhaps it should be a plugin?

This could be as simple as the gzip middleware looking for a file ending in .gz and, if one exists, rewriting the URL to it and adding the Content-Encoding header. (We’d probably want to re-arrange the rewrite, redir, and header directives so that they get executed before gzip…)

FYI, it looks like at least one of Klaus’ gzip optimizations is included in Go 1.6+:

CRC32 CPU instruction: https://go-review.googlesource.com/#/c/14080/

So Caddy automatically gets that speedup, but I don’t know about his other optimizations, like the SSE 4.2 string-comparison instructions.

Hi Matt – Why would we need the rewrite part? Does a gzip content-encoding necessitate a rewrite? I guess another way of asking this is: when Caddy gzips a file on the fly, does it rewrite the URL?

Caddy 0.9.1 on Go 1.7 will be nice: Go 1.7 Release Notes - The Go Programming Language

As noted above, there are significant performance optimizations throughout the package. Decompression speed is improved by about 10%, while compression speed for DefaultCompression is roughly doubled.

In addition to those general improvements, the BestSpeed compressor has been replaced entirely and uses an algorithm similar to Snappy, resulting in about a 2.5X speed increase, although the output can be 5-10% larger than with the previous algorithm.

There is also a new compression level HuffmanOnly that applies Huffman but not Lempel-Ziv encoding. Forgoing Lempel-Ziv encoding means that HuffmanOnly runs about 3X faster than the new BestSpeed but at the cost of producing compressed outputs that are 20-40% larger than those generated by the new BestSpeed.

It is important to note that both BestSpeed and HuffmanOnly produce a compressed output that is RFC 1951 compliant. In other words, any valid DEFLATE decompressor will continue to be able to decompress these outputs.

Lastly, there is a minor change to the decompressor’s implementation of io.Reader. In previous versions, the decompressor deferred reporting io.EOF until exactly no more bytes could be read. Now, it reports io.EOF more eagerly when reading the last set of bytes.

No, the rewrite I’m talking about just tells Caddy to send the static .gz file down the wire instead of compressing the response to the original URL on the fly.

Hi George – That looks excellent. They might have integrated more of Klaus Post’s work to help with this.

I looked up DefaultCompression in the library, and I’m puzzled. It’s defined as -1, which selects the default compression level, but I don’t see how or where it gets the actual default. It has to be a number between 1 and 9 – realistically 2 to 8 – and zlib usually uses level 6. But looking through the library, I don’t see it: there’s no mention of 6 or anything else. Do you know where it gets the default level?
