Proxy DigitalOcean Spaces (S3 clone) with authorization

I want to proxy content from a DigitalOcean Spaces bucket. But since requesting an invalid file exposes the bucket name, I’d like to use authentication and non-public objects.

Current Caddyfile:

example.com {
    proxy / https://example.nyc3.digitaloceanspaces.com {
        header_upstream Host example.nyc3.digitaloceanspaces.com
    }
}

This lets me proxy public objects, but not private ones. S3 auth requires computing a request signature, which isn’t possible with the Caddyfile alone. Would this be doable with a plugin?

Here’s how S3 auth is done in Bash: S3 signed GET in plain bash (Requires openssl and curl) · GitHub

Sounds like a job for a closed issue: https://github.com/mholt/caddy/issues/378

What I mean, of course, is that there is definitely some interest in a plugin that does this! That is, IF the proxy directive really can’t do it (I wouldn’t know offhand; I don’t have a whole lot of experience with S3, and there are many use cases).

It’d be cool if there was a way to set up bounties for plugins.
I need this for my personal file host, but I’d throw $20 at anyone who could make this happen.
It’d save me a ton of trouble if someone ever decided to DoS my DO Spaces bucket.

You could probably just hire someone to build it. I’ve been hired to build Caddy plugins before, but any skilled Go developer could do it. Probably in the process of building it, it’ll become clear whether a whole separate middleware is needed or just some layer of auth or even just a small change/improvement to the existing proxy middleware. We’ll see!

How do I go about that? It’s only for sharing screenshots and stuff with friends (doesn’t make money), so I can’t offer more than around $20, which doesn’t go far for dev time.

I’m sure there are probably other people out there who would be happy to pay for it, which is why I mentioned a bounty system.

This is interesting to me. I’d also like some sort of authorization layer in front of this. So essentially only authorized users of my app could request a URL path inside my app’s domain and be transparently proxied to the content in a private DO Space.

Perhaps I would have my app send back an X-Accel-Redirect header and then Caddy would know how to proxy that to the DO Space? Or maybe try the JWT middleware?
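
For the JWT route, something like this is what I’m picturing (completely untested; jwt here is the third-party caddy-jwt plugin, and the /files path and upstream are made up):

example.com {
    jwt {
        path /files
    }
    proxy /files https://example.nyc3.digitaloceanspaces.com {
        header_upstream Host example.nyc3.digitaloceanspaces.com
        without /files
    }
}

That only covers who gets in, of course; the Space-side request signing would still need a plugin.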

Jacob, I noticed that an invalid file returns a 403 status code. Couldn’t Caddy just intercept that and display your own message using the errors directive?

I thought about that. But it’d still be security through obscurity. It’d be much cleaner to use proper authorization. Then you could protect files with basic auth in Caddy as well.
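
For example, once the objects themselves are no longer publicly readable, the Caddy side could be as simple as this (path and credentials made up):

basicauth /screenshots username password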

So is this not possible with the internal directive combined with some sort of auth?

Based on the gist I linked to, I don’t think so. S3’s API appears to require a special signature to be computed when making a request:

string="GET\n\n${contentType}\n\nx-amz-date:${date}\n${resource}"
signature=`echo -en $string | openssl sha1 -hmac "${secret}" -binary | base64` 
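
Fleshed out, the whole signed GET from that gist looks roughly like this (untested sketch; the Space name, object key, and credentials are placeholders):

# Placeholders -- substitute your own Space and credentials
accessKey="SPACES_ACCESS_KEY"
secret="SPACES_SECRET_KEY"
bucket="example"
file="path/to/file.png"

resource="/${bucket}/${file}"
contentType=""
date=$(date -u +"%a, %d %b %Y %H:%M:%S +0000")

# AWS v2-style signature: base64(HMAC-SHA1(secret, string-to-sign))
string="GET\n\n${contentType}\n\nx-amz-date:${date}\n${resource}"
signature=$(echo -en "${string}" | openssl sha1 -hmac "${secret}" -binary | base64)

curl -H "x-amz-date: ${date}" \
     -H "Authorization: AWS ${accessKey}:${signature}" \
     "https://${bucket}.nyc3.digitaloceanspaces.com/${file}"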

One thing a plugin might be able to fix is MIME types.
Right now, I have to set MIME types for each domain using the mime directive.
Otherwise, a text file named script.sh gets downloaded instead of displayed.
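
That is, something like this in every site block (the extensions here are just examples):

mime {
    .sh  text/plain
    .log text/plain
}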

I tried setting the errors directive to serve 404.html for 403 responses, but it didn’t work:

errors {
    403 404.html
}
