First, I want to say thanks to all the folks who make Caddy possible! Before running into Caddy I tried several other solutions, but I always ended up frustrated with fruitless results. I am still amazed at how fast and easy it was to set up Caddy!
Now that setting up a reverse proxy doesn’t take up all my time and brain cells any more, I’ve started to have some specific questions about reverse proxies in general.
When reading about reverse proxies and how to set them up, half of the time the endpoints behind the proxy (internal) use TLS, and the other half of the time they don’t (HTTP vs HTTPS). What are the pros and cons of both approaches? I guess that if my LAN were compromised, plain HTTP traffic could be at risk? But since the config for HTTPS is not much more work, why not always use HTTPS internally?
Are there any other good practices to enhance security with Caddy besides using it as reverse proxy with https?
I thought that I had to open and redirect port 80 in my firewall to Caddy. Due to an error I didn’t, but everything seems to be working. How can that be?
I’ve noticed some setups on the internet that cascade (Caddy) reverse proxies. What is the advantage of this?
Yeah, Caddy may use the TLS-ALPN challenge instead of the HTTP challenge for ACME validation, so your server can work without HTTP. But it’s typically recommended to also serve HTTP so you can serve redirects to your users, and solve the HTTP challenge as a fallback, should the ALPN challenge fail for whatever reason (best to have more than one option, for redundancy).
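If you really wanted to rely on TLS-ALPN only, the ACME issuer can be told to skip the HTTP challenge. A rough sketch (domain made up), shown purely as an illustration, not a recommendation:

```
example.com {
	tls {
		issuer acme {
			# rely on validation over port 443 (TLS-ALPN) only
			disable_http_challenge
		}
	}
	respond "hello"
}
```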
Well, it depends. You were vague on your actual setup, so it’s hard to really talk specifics without being hand-wavey about it all. Best for my time and yours if we talk about specifics here.
Again, this is pretty vague. What are your concerns? Every app has different requirements.
Again, it depends. I think you mean having one instance of Caddy in front of others? There are lots of reasons why that might be desired. It could be they do it to implement mTLS, i.e. trust between machines, using Caddy’s acme_server feature, or to load balance across multiple servers, where each server is just the app plus Caddy.
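For example, the load-balancing case might look roughly like this (IPs and domain are hypothetical): one front Caddy spreading traffic across two app servers that each run their own Caddy speaking HTTPS:

```
example.com {
	reverse_proxy 10.0.0.2:443 10.0.0.3:443 {
		lb_policy round_robin
		transport http {
			# upstreams speak HTTPS
			tls
		}
	}
}
```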
OK, thanks for clarifying that. I’ve opened port 80 permanently now.
Yeah, I wasn’t even vague; I basically didn’t mention any of it, since I was aiming for general Q&A, but I understand it would depend on the specific setup. So to be more specific: I have a Kerio mail (Exchange) server with several domains, plus Nextcloud with Collabora, that I’m setting up behind Caddy. The Caddyfile below seems to be working, although I’m not sure about Nextcloud/Collabora, as I broke that VM and still need to fix it.
I had to add tls_insecure_skip_verify because the internal certificates are self-signed. The documentation says this option should be avoided (on production systems), but I was wondering if I should just go plain HTTP instead. I just came across step-ca and wonder if this would be a solution, but I haven’t found much clarifying documentation about it.
Just general concerns about sleazy people trying to snoop around. I found one option, posted by @Whitestrake here, which is based on IP filtering. But I think doing this dynamically at the endpoint level with Fail2ban is more efficient.
Using tls_insecure_skip_verify skips all security offered by HTTPS. Anyone who wants to man-in-the-middle the connections could do so without either the client or server complaining (i.e. by decrypting, then re-encrypting the connection).
At that point you might as well use plain HTTP, because you’ll at least skip the overhead of the TLS handshake when proxying.
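For contrast, here’s a sketch of the two upstream options in a Caddyfile (hostnames and IPs are made up):

```
cloud.example.com {
	# Option A: plain HTTP to the backend.
	# No TLS handshake overhead, but traffic on the LAN is unencrypted.
	reverse_proxy 10.0.0.5:80
}

mail.example.com {
	# Option B: HTTPS to the backend, but skipping verification.
	# Encrypted on the wire, yet trivially MITM-able.
	reverse_proxy https://10.0.0.6:443 {
		transport http {
			tls_insecure_skip_verify
		}
	}
}
```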
The “easiest” way to establish trust between your internal servers would be to use Caddy’s acme_server on the one that’s publicly accessible, then run Caddy instances on each of your other servers, using the publicly accessible one as their ACME server to issue them certificates that are trusted. But even this isn’t that easy to manage.
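To make that concrete, a rough sketch (all hostnames are hypothetical, and details like the directory path may vary): the public instance runs acme_server, and each internal instance points its tls directive at it:

```
# Public Caddy (the one with a publicly trusted certificate)
ca.example.com {
	acme_server
}

# Internal Caddy on another machine
app.internal.example.com {
	tls {
		# acme_server's default CA id is "local"
		ca https://ca.example.com/acme/local/directory
	}
	reverse_proxy localhost:8080
}
```

The fiddly part is trust: each internal machine still has to install and trust the issuing Caddy’s root CA certificate, and your internal DNS has to resolve those names, which is where the management overhead comes in.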
You should instead focus on preventing bad actors from gaining access to your network, instead of worrying about what happens once they’re already in.
Tools like fail2ban don’t so much enhance security as much as prevent resource overuse by blocking the connections earlier in the pipeline.
Using fail2ban with Caddy is unfortunately kinda tricky though, because Caddy’s default of JSON logging isn’t supported by it. But you can follow this thread to get an idea of what might be involved:
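The gist is that fail2ban’s stock filters expect single-line text logs, while Caddy writes structured JSON. Enabling an access log to a file that fail2ban could watch looks roughly like this (domain and path are made up):

```
example.com {
	log {
		output file /var/log/caddy/access.log
		format json
	}
	reverse_proxy localhost:8080
}
```

A fail2ban failregex then has to pull the client IP and status out of those JSON lines, which is the tricky part.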
I just realised/decided that either Nextcloud or Bitwarden will be my password manager. I totally agree with you that the No. 1 security measure is keeping the bad people out of the local network, but I think it makes sense to keep local connections secured too in this case.
You mention that the “easiest” way is to use Caddy’s acme_server. What is the “not so easy” part of this approach? I’ve searched around for documentation, but there isn’t much yet, or maybe I am searching for the wrong stuff.
@matt posted this last week, but this is different from the acme_server, right?
There’s not a whole lot to document, that I know of, that isn’t mentioned here. You enable that handler and then Caddy can issue certificates. We need to add more documentation but I’m not sure what people need to know yet. Contributions welcomed.