Hello,
I run Caddy as a cache for JavaScript and CSS to serve a cached Brotli version of them, but if a JavaScript file is too big and the Brotli compression takes longer than 10 seconds, the request gets cancelled.
I don’t think it’s Caddy cancelling the request. I think it’s either the client or the upstream doing it.
But it would be useful if you enable the debug global option and the log site directive to get more logs. It should show more detail about what's going on in regards to timing. And possibly also the verbose_logs option of reverse_proxy, which is new in v2.7.5 and emits super-verbose logs for stream read/write operations etc.
All that said, why do you need Brotli? It's known to be very slow, so I can't recommend using it. It's CPU-expensive, so overall it'll harm performance versus using no compression or gzip compression.
A better solution would be to precompress your files with Brotli. Caddy's file_server has support for serving precompressed sidecar files. (For example, when app.js is requested, the file server can also check whether app.js.br exists and, if so, serve that.)
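For example, a minimal sketch of that setup (the site address and root path are placeholders; this assumes you've already produced .br sidecar files next to the originals on disk):

	example.com {
		root * /srv/assets
		file_server {
			precompressed br gzip
		}
	}

With this, a request for app.js would be served from app.js.br (or app.js.gz) when the client's Accept-Encoding allows it, with no compression done at request time.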
I want Brotli because it is smaller than gzip, and since I will compress only once and then serve the precompressed file, I can wait the 30 seconds that one time.
I proxy Plex, so I have no clue what will get served, and I can't precompress the files since they are not on the same host.
But Brotli isn't so much smaller than gzip that it helps that much. It's way more effort to solve this problem with Brotli than it would be to simply use gzip.
Are you so bandwidth constrained that requests have to be as small as possible? You’re using Plex, you’re streaming video anyway. JS files are a drop in the bucket.
That is only somewhat true. I also use Plex to stream music at work, and the difference between 2 MB with Brotli and 8 MB with gzip is noticeable there.
Besides that, I don't see why this shouldn't work.
After enabling the debug options, I think the problem is:
2023/11/20 07:59:45.479 INFO http.handlers.cache Set backend timeout to 10s
2023/11/20 07:59:45.479 INFO http.handlers.cache Set cache timeout to 10s
Cool, glad you found it. I don’t use cache so I wasn’t aware it had a default timeout. I think that’s not good, there should be no timeout by default IMO.
I’m not surprised. The brotli implementation in Go is incredibly expensive and unoptimized. I really cannot recommend it.
That seems insane. Plex should do better. This isn’t on you to solve. They should make smaller bundle splits, and they should ship precompressed versions as well.
I can't really tell if the one in nginx is any faster, but since my goal is to do it only once and keep it cached until the next Plex update, I can live with it.
I just need to figure out how to do that; currently the cached version disappears from the cache even though the TTL and stale time are set to 7 days.
Yes, I agree, but Plex is closed source, so that is on me.
By the way, by default Plex doesn't support any encoding, and they set Cache-Control: no-cache, so every time you load the site or press F5, a sweet 12.6 MB of JavaScript loads.
My problem seems to be that X-Souin-Stored-Ttl:[2m0s] gets set, so my cache is gone after 2 minutes.
Google gives me 7 hits on that, and none was any help.
So now I really need @darkweak's help. How do I bump that to a much higher value and ignore the Cache-Control header from Plex, if that is what's causing problems?
My current config:
{
	order cache before rewrite
	cache
}

:80 {
	cache {
		ttl 7d
		stale 7d
		timeout {
			backend 1m
			cache 1m
		}
	}
	encode {
		br 11
	}
	reverse_proxy {
		to 10.0.5.2:32400
	}
}
Hello @Big, I made some improvements to Souin's memory footprint and decreased compute in the latest release, which could enhance the Caddy instance.
I just discovered that I don't support go-humanize for configuration parsing, so Xd is not recognized. I will write a patch for that ASAP. You can use XXh (e.g. 168h) until the patch is available. I apologize for the inconvenience.
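Until that patch lands, a cache block with the 7-day values spelled out in hours would look like this (a sketch using plain Go duration syntax, which the parser does accept):

	cache {
		ttl 168h
		stale 168h
	}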
OK, I still face problems and I'm not sure why.
But first: how do I stop the cache handler from storing the complete body in the log files?
Having 700 KB of binary data between the log lines makes searching very unpleasant.
(That is why I switched to a tiny CSS file for my test.)

http.handlers.cache Store the response {Status: StatusCode:200 Proto: ProtoMajor:0 ProtoMinor:0 Header:map[Cache-Control:[] Content-Encoding:[br] Content-Length:[7852] Content-Type:[text/css] Date:[Tue, 21 Nov 2023 22:18:56 GMT] Server:[Caddy] Vary:[Accept-Encoding] X-Plex-Protocol:[1.0] X-Souin-Stored-Length:[7852] X-Souin-Stored-Ttl:[168h0m0s]] Body: [a lot of binary garbage here]
Second problem:
A lot of requests simply don't get logged.
My current Caddyfile:
{
	debug
	cache
	order cache before rewrite
	log {
		output file /var/log/caddy/access.log
	}
}

:80 {
	cache {
		log_level info
		ttl 168h
		stale 168h
		timeout {
			backend 1m
			cache 1m
		}
	}
	encode {
		br 11
	}
	#header -Cache-Control
	reverse_proxy {
		verbose_logs
		#header_down -Cache-Control
		#header_up -Cache-Control
		to 10.0.5.2:32400
	}
}
According to the Caddy log, X-Souin-Stored-Ttl:[168h0m0s] gets set correctly, but in the curl output you can see that the TTL is all over the place. Sometimes it seems to be set to 2 minutes (cache-status: Souin; hit; ttl=114), and sometimes it uses the configured value (cache-status: Souin; hit; ttl=604763), but the file drops out of the cache anyway. I don't really understand; am I looking at the wrong values or expecting something wrong?
As you can see, I even tried deleting the Cache-Control header, in case that is what's screwing me over, but that made no difference.
I need the next hint; maybe I made another trivial error and my config includes something that breaks stuff?
Turn off debug logging if you don’t need it anymore.
If a request is served from cache, it won’t get logged by reverse_proxy, because it doesn’t reach there.
You didn't enable the log directive in your site block, which is what enables access logs.
The log global option configures runtime logs, which are not access logs (though just using log inside a site block will emit access logs to the default logger which is what you configured in global options).
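For example, a minimal sketch with log moved into the site block (the site address and proxy upstream are taken from your config; the log path is the one you already use):

	:80 {
		log {
			output file /var/log/caddy/access.log
		}
		reverse_proxy 10.0.5.2:32400
	}

With log inside the site block, every request to that site gets an access log entry, including ones answered before reaching reverse_proxy.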
With that config I still don't log all requests that are answered by the cache handler, and removing debug everywhere did not stop the cache handler from logging the binary data in the log file.
Maybe you could tell me in more detail what you meant and how I should do it?