It’s more a configuration question… I’ve been trying out a NextJS app with pm2. The process is very easy so far:
nextjs.example.com {
	encode zstd gzip

	# Asset serving (js, css, ...)
	handle /_next/static/* {
		# here we need to strip the _next prefix; it does not exist on disk
		uri strip_prefix /_next
		file_server {
			root /home/xyz/nextjs-built-project/.next
		}
	}

	handle {
		reverse_proxy 127.0.0.1:3001
	}
}
It’s very clean, but I want to take advantage of serving pre-compressed static assets (either gzip, brotli, or both).
So I’ve built all compatible assets in a pre-compressed format (suffixed .js.br, .js.gz, .css.br, …).
Nowadays in webpack-based frontend projects, it’s very easy to pre-compress assets.
But AFAIK no webserver makes it easy to serve them (Apache is tricky but possible; nginx in its open-source version, pfff… you need to compile it yourself to get the gzip_static module, and they don’t even have a static brotli one. The paid version is really better.)
Using pre-compressed versions will make a real difference in speed, especially on low-end servers (cheap droplets…).
I know Cloudflare, CDNs, etc. give a wider and more global solution (I use them a lot).
But sometimes I feel this could be one more very nice reason to move to Caddy, especially for small / personal projects…
This thread shows an example of how to do it. It’s a bit complicated because you need to do rewrites and set headers based on whether a pre-compressed file exists on disk.
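For reference, the pattern from that thread looks roughly like this, a sketch only: it combines Caddy's `file` request matcher with a manual rewrite and header, and the Content-Type handling (shown here hardcoded for JS) would need a separate matcher per file extension in practice, which is exactly what makes it complicated:

```
handle /_next/static/* {
	uri strip_prefix /_next
	root * /home/xyz/nextjs-built-project/.next

	# If the client accepts brotli and a .br sibling exists on disk,
	# rewrite to it and set the encoding header manually
	@brotli {
		header Accept-Encoding *br*
		file {path}.br
	}
	handle @brotli {
		header Content-Encoding br
		# file_server would otherwise infer the type from the .br extension
		header Content-Type application/javascript
		rewrite {http.matchers.file.relative}
	}

	file_server
}
```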
We’re tracking support for content negotiation in the following github issue. There’s a lot of design work and consideration that needs to be done to implement it correctly. It’s definitely something we want to get in eventually.
Finally, just as a quick note: you can replace handle + uri strip_prefix with just handle_path, which has path-prefix stripping logic built in.
Also, I recommend using the root directive rather than the root subdirective of file_server; that way, the root will be set for all directives in your handle (this will be necessary, since the file matcher also needs to know the root to work).
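Applying both suggestions to the config from earlier in this thread would give something like this (a sketch; note that handle_path strips the entire matched prefix, /_next/static, so the root has to point at the corresponding static subfolder on disk):

```
nextjs.example.com {
	encode zstd gzip

	handle_path /_next/static/* {
		# handle_path strips /_next/static from the request path,
		# so serve from the matching folder on disk
		root * /home/xyz/nextjs-built-project/.next/static
		file_server
	}

	handle {
		reverse_proxy 127.0.0.1:3001
	}
}
```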
I saw that you added another Cache-Control block. The path extensions you have in there overlap with the snippet’s. I’d suggest either updating the overlapping ones in the snippet with the max-age you want, or removing the overlapping extensions from the path matcher.
Also you have this:
tls admin@example.com {
	protocols tls1.2 tls1.3
}
Setting protocols here is not useful, because those are already Caddy’s defaults. I recommend you remove that (and the braces) so that when you later upgrade Caddy, if some TLS 1.4 spec is released and support for it is added to Caddy, for example, it would just work.
To clarify, you can leave in tls <email> (it’s good to have an email set if you can, but it’s not necessary). I was just saying the protocol bit isn’t needed.
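In other words, the whole tls block above can collapse to a single line:

```
tls admin@example.com
```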
Hi,
I noticed that when I enable encode gzip, the response seems to be encoded (as expected), but the Content-Type is given as text/html. Nevertheless, Chrome seems to figure out that it’s a binary stream and not plain text that it’s getting, and successfully shows the webpage.
Is that fine, or do we need to manually add a Content-Encoding: gzip header?
That seems fine then, since HEAD never has any content. I think it’s because of this condition, rw.buf.Len() >= rw.config.MinLength, i.e. the response has too few bytes to be worth encoding.
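If that threshold matters for your responses, the encode directive exposes it as the minimum_length subdirective. A sketch (512 is an arbitrary value chosen for illustration):

```
encode {
	zstd
	gzip
	# responses smaller than this many bytes are sent unencoded
	minimum_length 512
}
```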
A response to a HEAD method should not have a body. If it has one anyway, that body must be ignored: any entity headers that might describe the erroneous body are instead assumed to describe the response which a similar GET request would have received.
For whoever finds this in the future: this is now possible without the funky snippets since v2.4.0, using the file_server directive’s precompressed option.
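With that option, the matcher-and-rewrite approach above reduces to something like this (a sketch using the paths from earlier in this thread):

```
handle_path /_next/static/* {
	root * /home/xyz/nextjs-built-project/.next/static
	# Serves foo.js.br or foo.js.gz when it exists on disk and the
	# client accepts that encoding, falling back to foo.js otherwise
	file_server {
		precompressed br gzip
	}
}
```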