Rewrite performance with local ssd + block storage on Linode

(Jacob Hands) #1

Hi there,

I’m running an application serving custom Google Maps containing thousands of small images. In this use case, the user experience is quite latency-sensitive.

Right now I’m doing a rewrite that will check local SSD for cached content and fall back to upstream storage.
I wanted more cache space, so I added a block storage volume (on Linode). Will there be any noticeable performance impact from checking 1. local SSD, 2. block storage, 3. upstream, as opposed to 1. local SSD, 2. upstream storage?

Here’s my new Caddyfile:

    {
        root /var/caddy_files/
        proxy /storage_proxy {
            without /storage_proxy
        }
        rewrite / /cache-a{uri} /cache-b{uri} /storage_proxy{uri}
    }

Here’s the previous one:

    {
        root /var/caddy_files/
        proxy /storage_proxy {
            without /storage_proxy
        }
        rewrite / /cache-a{uri} /storage_proxy{uri}
    }


(Matthew Fay) #2

For each rewrite that fails to find something in /cache-a, the extra delay you’ll introduce is essentially:

  1. Minor string manipulation to determine the file path to check (root + /cache-b + the URI placeholder)
  2. The file access latency of the storage hardware (probably quite minor as well)

The code that checks if the rewrite target is valid is here:

If the file isn’t there, it’ll fail pretty fast (during fs.Open).

While I expect there won’t be any noticeable slow-down unless your block storage is sluggish, nothing short of benchmarking in an equivalent environment is likely to give you a true understanding of the impact.

(Jacob Hands) #3

Okay thanks. I’ll do some testing on Linode’s block storage.

Thinking about it more, if a request misses the first cache (local SSD), it should still complete quickly if there’s a hit on cache-b.

If it misses both caches, it hits the backend storage, which I find to be around 2–3x slower (the browser shows 600–900 ms vs. 300–400 ms off the SSD). Given such a large difference, I’d expect the impact to be unnoticeable as long as block storage never causes really long latencies.

Thanks for the info!

(Matthew Fay) #4

No worries.

Your final rewrite location (for the proxy) is definitely the slowest by far. Technically, Caddy still checks for a valid file even for the proxy location, because at this early stage of execution the proxy hasn’t been considered yet. Then we create a whole new HTTP request to the proxy upstream, which is significant overhead on any network when compared to direct file access.

Even if cache-b is a bit slower than you’d hope, it’ll be way faster than proxying for those assets.