I’m working on a project where I use Caddy’s browse functionality to set up a simple file server as a webseed for cool things with WebTorrent (webtorrent.io).
The file server / webseed should only act as a backup, so it shouldn’t deliver content at the server’s maximum speed (to reduce traffic — the WebTorrent peers will send most of the file pieces).
I’ve seen there is the limits directive, but that seems to be aimed at upload requests and the like.
There is also the http.ratelimit plugin, which manages the maximum number of connections within a given time interval.
I’m trying to achieve the following:
For every file, there should be a maximum download speed of X kB/s (for example 100 kB/s).
Any ideas about how to achieve that, or a workaround for this use case, are appreciated.
Per-file speed limits would need to be implemented in caddy (the process) itself.
In the meantime you might want to use Linux’s built-in traffic shaping facilities. For example, define a default hierarchy for device ext0 (which could be eth0 in your case, or enp1s0 or similar):
# inspect the current queueing setup
tc qdisc show

# clear any existing root qdisc on ext0
tc qdisc del dev ext0 root

# install an HTB root qdisc; unclassified traffic falls into class 1:1
tc qdisc replace dev ext0 \
    root handle 1: htb default 1

# create the rate-limited class: 2 Mbit/s with a 5 KB burst
tc class add dev ext0 \
    parent 1: \
    classid 1:10 htb rate 2mbit burst 5k
Since 1:1 was just set as the default class (it didn’t even need to be created explicitly), no traffic will go through the rate-limited class 1:10 yet.
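You can confirm that with the per-class statistics — class 1:10 should show zero bytes sent at this point:

```shell
# show per-class byte/packet counters on ext0;
# the rate-limited class 1:10 should report 0 bytes so far
tc -s class show dev ext0
```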
Change that, assuming the caddy process doing the webseeding runs as uid 1020:
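One way to do that (a sketch, assuming iptables is available and ext0 is the shaped device) is to mark outgoing packets owned by that uid and steer the mark into class 1:10 with a fw filter:

```shell
# mark packets originating from uid 1020 (the assumed caddy user) with fwmark 10
iptables -t mangle -A OUTPUT -m owner --uid-owner 1020 -j MARK --set-mark 10

# send packets carrying fwmark 10 into the 2 Mbit/s class 1:10
tc filter add dev ext0 parent 1: protocol ip handle 10 fw flowid 1:10
```

With that in place, everything the caddy process sends out via ext0 is throttled to the 2 Mbit/s configured on class 1:10, while all other traffic keeps using the unshaped default class.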
For seeding it doesn’t matter much who gets how much of the allocated 2 Mbit/s. If you cared, you could create bucketed queueing and group peers by IPv4 /16 blocks or the like (→ PCQ-style queue, grouped by hash(dst_addr/16)), so that one peer in subnet A doesn’t hog all the bandwidth when there are other peers in subnet B.
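A rough approximation of that idea on plain Linux (my sketch, not part of the setup above) is to attach an SFQ leaf qdisc below the rate-limited class. It hashes per flow rather than per /16 block, but it still prevents a single connection from monopolizing the 2 Mbit/s:

```shell
# attach stochastic fairness queueing below class 1:10;
# flows are hashed into buckets and served round-robin,
# with the hash re-seeded every 10 seconds
tc qdisc add dev ext0 parent 1:10 handle 10: sfq perturb 10
```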