How to do strict whitelist matching?

I would like to ensure that only whitelisted URIs are accepted by Caddy on a static site.

Today, the fully open site is defined as

https://hello.example.com {
  root * /path/to/site/dir
  file_server
}

It works great: https://hello.example.com/afile.txt is retrieved correctly.

I now would like to make the following restrictions:

  • the content of the directory stays as is: afile.txt and anotherfile.txt
  • afile.txt is available through https://hello.example.com/asecretstringhardtoguess/afile.txt
  • anotherfile.txt and anything else is not available at all

In other words, you have to know asecretstringhardtoguess to build the URI for a specific file.

How can I approach that?

Version 2: if the setup above is too complicated, having a common asecretstringhardtoguess-based URI for all files would be OK too:

https://hello.example.com/asecretstringhardtoguess/afile.txt
https://hello.example.com/asecretstringhardtoguess/anotherfile.txt

are available, but not https://hello.example.com/afile.txt or https://hello.example.com/anotherfile.txt

I’m not sure what you’re trying to do here… But “security through obscurity” is not a good way to go. Why do you want to do this?

Realize that crawlers may still find the files if they ever get linked to from elsewhere.

Yes, I am really aware of that (as an infosec professional). I want to put a layer of protection on data that is quasi-public anyway.

But my question was not really about “should I do it” or “is it safe”, but rather “is it doable”? 🙂

Why not simply place the files in the directory /path/to/site/dir/asecretstringhardtoguess/ ?
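
That would need no Caddyfile changes at all: with the files moved into /path/to/site/dir/asecretstringhardtoguess/, the original config already serves them only under that prefix. And since file_server doesn’t produce directory listings unless browse is enabled, the directory itself gives nothing away. A sketch (the comment is mine, just to illustrate where the files end up):

https://hello.example.com {
  root * /path/to/site/dir
  file_server
  # afile.txt now lives at /path/to/site/dir/asecretstringhardtoguess/afile.txt,
  # so it is only reachable as /asecretstringhardtoguess/afile.txt
}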

Version 1:

example.com {
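  # exact allowlist: each entry pairs one secret string with exactly one file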
  @secretpaths {
    path /asecretstringhardtoguess/afile.txt
    path /adifferentstringentirely/anotherfile.txt
  }
  handle @secretpaths {
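    # strip whichever secret prefix matched; strip_prefix is a no-op when the prefix isn't present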
    uri strip_prefix /asecretstringhardtoguess
    uri strip_prefix /adifferentstringentirely
    root * /path/to/site/dir
    file_server
  }
}

We declare a strict (no globs or wildcards) list of paths that we allow in @secretpaths.

Then we handle all the paths we’ve allowed, first stripping all the possible secret strings. (Throwing the full set of URI strips at every request shouldn’t bother the CPU at all; these are simple string manipulations, and I don’t envision you’ll have many of them.)

The secret strings do get written out twice (once in the matcher, once in the strip), but by allowlisting entire paths we prevent any cross-secret-string requests: each allowlist item ties one secret to exactly one file, a strict 1:1 relationship.
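
One thing worth noting: requests outside the allowlist simply fall through unhandled, and Caddy then sends its default empty response. If you’d rather reject them explicitly, a catch-all handle block could be appended inside the same site block, after handle @secretpaths (my own sketch, not required for the allowlisting itself):

handle {
  respond 404
}

Since handle blocks are mutually exclusive and tried in order, this one only fires when @secretpaths didn’t match.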

Version 2:

example.com {
  handle_path /asecretstringhardtoguess/* {
    root * /path/to/site/dir
    file_server
  }
}

This version simply requires asecretstringhardtoguess as the first path element for a request to be routed to the file server, and strips it before the file lookup.
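
For the curious: handle_path is effectively shorthand for a handle block combined with a uri strip_prefix of the same prefix, so an equivalent spelled-out form of Version 2 would be:

example.com {
  handle /asecretstringhardtoguess/* {
    # handle_path does this strip for you automatically
    uri strip_prefix /asecretstringhardtoguess
    root * /path/to/site/dir
    file_server
  }
}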

Ahhhhh… this is how one misses obvious solutions by thinking out of the box 🙂 Thank you.

And thank you for the solutions in your other post as well!
