How to increase timeout for a request beyond 10 seconds

Hello,
I run Caddy as a cache for JavaScript and CSS to have a cached Brotli version of them, but if a JavaScript file is too big and the Brotli compression takes longer than 10 seconds, the request gets cancelled.

How do I increase that limit?

2023/11/20 02:51:45.435 ERROR   http.handlers.reverse_proxy     aborting with incomplete response       {"upstream": "10.13.37.1:80", "duration": 0.002280413, "request": {"remote_ip": "10.0.5.1", "remote_port": "51556", "client_ip": "10.0.5.1", "proto": "HTTP/1.1", "method": "GET", "host": "plex.lan", "uri": "/web/js/main-179-69d37121295f158388fa-plex-4.108.0.32184-ffafb0d.js", "headers": {"User-Agent": ["curl/8.4.0"], "Accept": ["*/*"], "Accept-Encoding": ["br"], "X-Forwarded-Proto": ["http"], "X-Forwarded-Host": ["plex.lan"], "Date": ["Mon, 20 Nov 2023 02:51:35 UTC"], "X-Forwarded-For": ["10.0.5.1"]}}, "error": "reading: context canceled"}

caddy version v2.7.5 h1:HoysvZkLcN2xJExEepaFHK92Qgs7xAiCFydN5x5Hs6Q=

https://caddyserver.com/api/download?os=linux&arch=amd64&p=github.com%2Fueffel%2Fcaddy-brotli&p=github.com%2Fcaddyserver%2Fcache-handler&idempotency=45882532430207

Debian 12 Linux AMD64

service caddy start

{
        order cache before rewrite
        cache
}

:80 {
        cache {
                ttl 7d
        }
        encode {
                br 11
        }
        reverse_proxy {
                to 10.13.37.1:80
        }
}

Thanks for help

I don’t think it’s Caddy cancelling the request. I think it’s either the client or the upstream doing it.

But it would be useful if you enabled the debug global option and the log site directive to get more logs; they should show more detail about what’s going on with the timing. Possibly also enable the verbose_logs reverse_proxy option, new in v2.7.5, which emits super verbose logs for stream read/write operations etc.
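To illustrate, adapting the config from the original post (same upstream address), enabling all three would look something like this:

{
	debug
}

:80 {
	log
	reverse_proxy {
		verbose_logs
		to 10.13.37.1:80
	}
}

debug and verbose_logs only affect what gets logged, so this is safe to try and then remove again.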

All that said, why do you need Brotli? The Go implementation is known to be very slow and CPU-expensive, so I can’t recommend using it; overall it’ll harm performance versus using no compression or gzip.

A better solution would be to precompress your Brotli files. Caddy’s file_server has support for serving precompressed sidecar files (for example, when app.js is requested, the file server can also check whether app.js.br exists and, if so, serve that).
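As a sketch (the site address and root path here are made up), that looks like:

:8080 {
	root * /srv/assets
	file_server {
		precompressed br gzip
	}
}

With this, a request for /app.js with Accept-Encoding: br is served from /srv/assets/app.js.br if that file exists; the order after precompressed is the preference order when the client accepts several encodings.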


Here are the super verbose logs (I removed the nginx in the middle here to rule out a potential problem compared to the initial post).
I had to upload them somewhere else (Pastebin) because the log would bust the post-length limit of the forum.

I want Brotli because it is smaller than gzip, and since I will compress only once and then serve the precompressed file, I can wait the 30 seconds that one time.

I proxy Plex, so I have no clue what will get served, and I can’t precompress the files since they are not on the same host.

But Brotli isn’t so much smaller than gzip that it helps all that much. It’s way more effort to solve this problem with Brotli than it would be to simply use gzip.

Are you so bandwidth constrained that requests have to be as small as possible? You’re using Plex, you’re streaming video anyway. JS files are a drop in the bucket.

That is only somewhat true. I also use Plex to stream music at work, and the difference between 2 MB with Brotli and 8 MB with gzip is noticeable there.
Besides that, I don’t know why this shouldn’t work?

After enabling the debug stuff I think the problem is

2023/11/20 07:59:45.479 INFO    http.handlers.cache     Set backend timeout to 10s
2023/11/20 07:59:45.479 INFO    http.handlers.cache     Set cache timeout to 10s

I tried to increase it with this new config file

{
	debug
	order cache before rewrite
	cache
}

:80 {
	cache {
		ttl 7d
		timeout {
			backend 1m,
			cache 1m
		}
	}
	encode {
		br 11
	}
	reverse_proxy {
		verbose_logs
		to 10.0.5.2:32400
	}
}

But even with the new config file it still says that the timeout is 10 seconds

2023/11/20 08:02:20.350 INFO    http.handlers.cache     Set backend timeout to 10s
2023/11/20 08:02:20.350 INFO    http.handlers.cache     Set cache timeout to 10s

I don’t understand why?

You have a comma there, that’s invalid.
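For reference, the cache block without the comma would look like this (the Caddyfile format separates arguments by whitespace and newlines only):

	cache {
		ttl 7d
		timeout {
			backend 1m
			cache 1m
		}
	}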

But I’ll need to defer to @darkweak for questions about the cache module. I don’t work on that.


OK, removing the comma did the trick; now it works.

Thank you very much


Cool, glad you found it. I don’t use cache so I wasn’t aware it had a default timeout. I think that’s not good, there should be no timeout by default IMO.

I think I have a lot more googling ahead of me.
The LXC is dead now after I pressed F5 three times :sweat_smile:
Caddy runs there and nothing else.

I mostly used nginx plus a little Varnish, so this is all new to me; I first have to get used to Caddy, I guess.

I’m not surprised. The brotli implementation in Go is incredibly expensive and unoptimized. I really cannot recommend it.

That seems insane. Plex should do better. This isn’t on you to solve. They should make smaller bundle splits, and they should ship precompressed versions as well.

I can’t really tell if the one in nginx is any faster, but since my goal is to compress only once and keep it cached until the next Plex update, I can live with it.
I just need to figure out how to do that :laughing: currently the cached version disappears from the cache even though ttl and stale time are set to 7 days.

Yes, I agree, but Plex is closed source, so it is on me.
By the way, by default Plex doesn’t support any encoding and sets Cache-Control: no-cache, so a sweet 12.6 MB of JavaScript loads every time you open the site or press F5.

My problem seems to be that X-Souin-Stored-Ttl: [2m0s] gets set, so my cache is gone after 2 minutes.
Google gets me 7 hits on that, and none was any help.
So now I really need @darkweak’s help. How do I bump that to a much higher value and ignore the Cache-Control header from Plex, if that is what causes the problem?
my current config

{
	order cache before rewrite
	cache
}

:80 {
	cache {
		ttl 7d
		stale 7d
		timeout {
			backend 1m
			cache 1m
		}
	}
	encode {
		br 11
	}
	reverse_proxy {
		to 10.0.5.2:32400
	}
}

Thank you for help

You can probably use header_down in reverse_proxy to change/wipe any headers before it reaches cache on the way back out.
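For example (a sketch, untested against the cache module), a leading minus sign in header_down deletes that header from the upstream response before later handlers see it:

	reverse_proxy {
		header_down -Cache-Control
		to 10.0.5.2:32400
	}

header_up is the equivalent for modifying the request headers sent to the upstream.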

Thank you for the hint.

Sadly it still defaults to 2 minutes, no matter whether I remove the Cache-Control header with header_down or overwrite it with header_up.

By the way, is it OK to keep chasing the problem in this thread, or should I open another one, since it differs from the original?

Here is fine. It’s related.


Hello @Big, I made some improvements to Souin’s memory footprint and decreased compute in the latest release, which could help your Caddy instance.
I just discovered that I don’t support go-humanize in the configuration parsing, so Xd is not recognised; I will write a patch for that ASAP. You can use XXh (e.g. 168h) until the patch is available. I apologize for the inconvenience.


Thank you for answering.
Using h did indeed solve it. I will keep testing now and see if I hit any other roadblocks.
Thank you for helping me.


OK, I still face problems and I’m not sure why.
First thing: how do I stop cache-handler from storing the complete body in the log files?
Having 700 kB of binary between the log lines makes searching very unpleasant.
(That is why I switched to a tiny CSS file for my test.)
http.handlers.cache Store the response {Status: StatusCode:200 Proto: ProtoMajor:0 ProtoMinor:0 Header:map[Cache-Control:[] Content-Encoding:[br] Content-Length:[7852] Content-Type:[text/css] Date:[Tue, 21 Nov 2023 22:18:56 GMT] Server:[Caddy] Vary:[Accept-Encoding] X-Plex-Protocol:[1.0] X-Souin-Stored-Length:[7852] X-Souin-Stored-Ttl:[168h0m0s]] Body: here a lot of binary garbage

Second Problem
a lot of requests simply don’t get logged.
My current caddy file

{
	debug
	cache
	order cache before rewrite
	log {
		output file /var/log/caddy/access.log
	}
}

:80 {
	cache {
		log_level info
		ttl 168h
		stale 168h
		timeout {
			backend 1m
			cache 1m
		}
	}
	encode {
		br 11
	}
	#header -Cache-Control
	reverse_proxy {
		verbose_logs
		#header_down -Cache-Control
		#header_up -Cache-Control
		to 10.0.5.2:32400
	}
}

caddy log https://nopaste.net/Uddbb6qmoi
curl log https://nopaste.net/MNjDrmrXOr

According to the Caddy log, X-Souin-Stored-Ttl: [168h0m0s] gets set correctly, but in the curl output you can see that the TTL is all over the place. Sometimes it seems to be set to 2 minutes (cache-status: Souin; hit; ttl=114), and sometimes it uses the configured value (cache-status: Souin; hit; ttl=604763), but the file drops out of the cache anyway. I don’t really understand; am I looking at the wrong values or expecting something wrong?
As you can see, I even tried deleting the Cache-Control header, in case that is what screws me over, but it made no difference.
I would need the next hint; maybe I made another trivial error and my config includes something that breaks stuff?

Thank you for help.

Turn off debug logging if you don’t need it anymore.

If a request is served from cache, it won’t get logged by reverse_proxy, because it doesn’t reach there.

You didn’t enable the log directive in your site block which enables access logs.

The log global option configures runtime logs, which are not access logs (though just using log inside a site block will emit access logs to the default logger, which is what you configured in global options).
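In other words, to get access logs for that site written to a file, the log directive with the file output goes inside the site block, e.g.:

:80 {
	log {
		output file /var/log/caddy/access.log
	}
	# cache, encode, reverse_proxy as before
}

A bare log in global options without that only affects Caddy’s runtime logging.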

I’m not exactly sure how you mean I should do it.
I tried to understand it by reading the documentation.

{
	debug
	cache
	order cache before rewrite
	log {
		level debug
		output file /var/log/caddy/access.log
	}
}

:80 {
	log {
		level debug
		output file /var/log/caddy/access.log
	}
	cache {
		log_level debug
		ttl 168h
		stale 168h
		mode bypass
		default_cache_control public
		timeout {
			backend 1m
			cache 1m
		}
	}
	encode {
		br 11
	}
	#header -Cache-Control
	reverse_proxy {
		verbose_logs
	#	header_down -Cache-Control
	#	header_up -Cache-Control
		to 10.0.5.2:32400
	}
}

With that config I still don’t get log entries for all requests that are answered by cache-handler, and removing debug everywhere did not stop cache-handler from logging the binary data in the log file.

Maybe you could tell me in more detail what you meant?

Thank you