Caddy on QNAP - set up reverse proxy

(mupet0000) #1

Hello!

I am completely new to Caddy and finding myself very confused. I think what I am trying to achieve is relatively simple for someone with the right knowledge, so hopefully someone out there can help me out.

I have a QNAP NAS and my goal is to access NZBGet, Radarr & Sonarr remotely and over SSL. From what I have read, using a reverse proxy is the easiest way to achieve this on QNAP. I am able to access my NAS home page remotely via SSL and it’s set up via DDNS.

So I can type mynas.myurl:443, for example, and it will resolve to my IP and display my NAS login. I can forward any port and substitute that port in the URL to reach that page. Anything non-SSL works fine, but to use SSL I would need to configure each app individually, and that looks like trouble on QNAP.

My understanding is that I can use a Caddyfile so that I could type mynas.myurl/nzbget and be directed there over SSL.

My first problem is understanding where to find or where to put the Caddyfile. My default installation of Caddy is in "/share/CACHEDEV1_DATA/.qpkg/Caddy" but there is no Caddyfile there. Any documentation I have seen states that the Caddyfile can go anywhere, but I don't quite understand how that can be possible.

Secondly, as you can probably tell, I am very new to this. Even after reading the Caddyfile tutorial, it just goes over my head and I find myself no closer to understanding anything beyond the first line of the Caddyfile.

I would appreciate any insight, thank you :slight_smile:

(Matthew Fay) #2

Hi @mupet0000, welcome to the Caddy community!

You said you’ve read over the Caddyfile tutorial, were you referring to this page?

https://caddyserver.com/tutorial/caddyfile

That’s really the best documentation we have for beginners. If you’re having trouble understanding it, I suggest taking it one part at a time - you can post questions you have about each part as you go.

To quote the Caddyfile Primer, here is why that works:

If the Caddyfile is in a different location or has a different name, tell Caddy where it is:

caddy -conf ../path/to/Caddyfile

Basically, you can put the Caddyfile wherever you like as long as you tell Caddy where to find it by specifying the file path using the -conf flag.
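
For example, using the package directory you mentioned (just a sketch; adjust the path to wherever you actually keep the file):

  caddy -conf /share/CACHEDEV1_DATA/.qpkg/Caddy/Caddyfile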

(mupet0000) #3

Thank you for your response. I believe I have understood how to do what I want, but I’ve run into some issues.

I created a simple Caddyfile as follows:
my_url_here_com
proxy /radarr localhost:1234

I placed the caddyfile in the directory of my choice and ran the command but this is the result:

[~] # caddy -conf …/share/CACHEDEV1_DATA/.qpkg/Caddy/Caddyfile
Activating privacy features… 2019/05/10 22:30:42 [INFO] [myurlhere] acme: Obtaining bundled SAN certificate
2019/05/10 22:30:43 [INFO] [myurlhere] AuthURL: letsencrypturlhere
2019/05/10 22:30:43 [INFO] [myurlhere] acme: use tls-alpn-01 solver
2019/05/10 22:30:43 [INFO] [myurlhere] acme: Trying to solve TLS-ALPN-01
2019/05/10 22:31:29 [INFO] Unable to deactivated authorizations: letsencrypturlhere
2019/05/10 22:31:29 [myurlhere] failed to obtain certificate: acme: Error -> One or more domains had a problem:
[myurlhere] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Cannot negotiate ALPN protocol "acme-tls/1" for tls-alpn-01 challenge,

I’m not sure what’s going wrong and I’m wondering if you could point me in the correct direction.

EDIT:
I think the above error was caused by my NAS already having port 443 bound. I disabled the NAS's built-in SSL, which allowed Caddy to complete validation and obtain the certificate.

This time the command ended in:

2019/05/10 23:39:51 [INFO] [myurlhere] The server validated our request
2019/05/10 23:39:51 [INFO] [myurlhere] acme: Validations succeeded; requesting certificates
2019/05/10 23:39:53 [INFO] [myurlhere] Server responded with a certificate.
done.
2019/05/10 23:39:53 Listen: listen tcp :80: bind: address already in use

When attempting to visit myurl/radarr I get "The requested URL /radarr was not found on this server."
When attempting to visit myurl I am not redirected to an SSL page.
When attempting to visit https://myurl I am given “ERR_CONNECTION_REFUSED”

So I changed the http-port to 85 and the port to 2019 and got this:

Serving HTTP on port 2019
http://:2019

WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with ulimit -n 8192.

I watched a video on YouTube where the Caddyfile was literally one line (just the URL) and in 30 seconds the guy had his SSL domain loaded. I cannot understand why I am facing so many errors when I'm trying to achieve the exact same thing :confused:

I don’t know where to go from here.

(Matthew Fay) #4

For Automatic HTTPS, Caddy needs to be able to bind ports 80 and 443.

This makes me think that Apache is running on port 80, not Caddy. Caddy would have redirected you to HTTPS; as you noticed, that redirect isn't happening.

Caddy isn't running at all, so you don't get a response from HTTPS on port 443; the OS just refuses the connection.
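
If you want to confirm what's actually listening on those ports, something along these lines should show it from an SSH session (assuming netstat with these flags is available on the QNAP firmware):

  # show which process is bound to ports 80 and 443
  netstat -tlnp | grep ':80 '
  netstat -tlnp | grep ':443 '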

How exactly did you go about changing those ports?

(mupet0000) #5

You were right, Apache had port 80. I reconfigured the NAS to use alternative ports, freeing up the defaults for Caddy, and it is now working. Reverse-proxied paths like /radarr are functioning correctly, with SSL too, so that's fantastic!

I am facing one last hurdle: when I go to myurl, instead of being presented with the default web page that it usually serves up, I get this (the page is served over SSL, though):

https://myurl/redirect.html?count=0.36205010057744524
with the content:
404 Not Found

Normal behaviour without Caddy is:

  1. myurlhere (port 80, but now 86) goes to /home/Qthttpd/index.html
  2. it changes to webserver port 8080 and redir to /home/httpd/index.html
  3. then to /home/httpd/redirect.html
  4. finally to /home/httpd/cgi-bin/Qts.cgi and the homepage is displayed

If I tell Caddy the root is /home/httpd it gives me the same error, but it also downloads Qts.cgi through Chrome.

I have configured the NAS to use port 86 for normal HTTP and port 446 for SSL. When I attempt to visit myurl:86 I am automatically switched to HTTPS and get:

ERR_SSL_PROTOCOL_ERROR

If I go to myurl:446 I am served my page with a default cert issued by my NAS, and Chrome says:

You cannot visit myurlhere right now because the website uses HSTS. Network errors and attacks are usually temporary, so this page will probably work later.

If I can get the normal home page to load up then everything is configured and I’m done. What do you think I need to check to fix this?

For reference, here is my current Caddyfile, which I built using an example file:

myurlhere {
  root /home/Qhttpd
  gzip

  header / {
    X-Content-Type-Options nosniff
    X-XSS-Protection "1; mode=block"
    Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    -Server
  }

  proxy /radarr http://127.0.0.1:7878 { # https://radarr.video/
    transparent
    header_upstream X-Forwarded-Host {host}
  }

  proxy /sonarr http://127.0.0.1:8989 { # https://sonarr.tv/
    transparent
    header_upstream X-Forwarded-Host {host}
  }

  proxy /nzbget http://127.0.0.1:6789 { # http://nzbget.net/
    transparent
    header_upstream X-Forwarded-Host {host}
  }
}

My current solution is to turn on the NAS's built-in Let's Encrypt cert and browse to myurl:446, which lets me visit the NAS home page via SSL without going through Caddy. I'm not sure why going through Caddy is breaking the home page.

Sorry for the barrage of information, but I'm trying to provide everything I know so that you have all the correct details, and also so that anyone else facing the same issue can reference this thread :slight_smile:

(Matthew Fay) #6

More information is never bad!

This is a side effect of this:

header / {
  X-Content-Type-Options nosniff
  X-XSS-Protection "1; mode=block"
  Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
  -Server
}

Specifically, the Strict-Transport-Security header. You've now visited myurl hosted by Caddy, your browser received that header, and until 31,536,000 seconds (365 days) have passed since your last visit, your browser won't allow non-HTTPS access to that host.

That said, you're on a non-standard HTTP port, so the browser doesn't change the port when it attempts HTTPS; it just tries it on 86, which you're serving HTTP from. Hence the protocol error: your browser is trying to negotiate HTTPS and your HTTP server is understandably confused by that.

Also a side effect of Strict-Transport-Security. You cannot use an untrusted certificate with HSTS. You will not be able to use myurl to access any website that doesn’t have valid HTTPS, at least until the HSTS header wears off for myurl in your browser.
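
If you do want to keep experimenting with HSTS while you're still changing things, a much shorter max-age limits how long a mistake sticks around. A sketch using the same header directive as above (300 seconds is just an example value):

  header / {
    # short-lived HSTS while testing; raise it once everything works
    Strict-Transport-Security "max-age=300"
  }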

Suggestion: Don’t navigate to myurl:86 or myurl:446. Instead, have Caddy proxy to your NAS, e.g. nas.myurl with a proxy to localhost:86 or similar as required, and use Caddy’s valid HTTPS with your HSTS header.
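
Something along these lines, for example (a sketch only; nas.myurl is a placeholder, and the upstream port depends on where your NAS web UI actually listens):

  nas.myurl {
    # Caddy terminates HTTPS here and passes requests to the NAS over plain HTTP
    proxy / localhost:86 {
      transparent
    }
  }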

(mupet0000) #7

Thank you very much!

All appears to be working with HTTPS once I added this to my caddyfile:

proxy / 127.0.0.1:8080 {
  transparent
  header_upstream X-Forwarded-Host {host}
}

I originally tried with port 86, but it seems the NAS's web portal automatically redirects from your web port to the system port, which defaults to 8080. With Caddy directing to port 86 and the NAS then redirecting me to port 8080, I lost SSL. With Caddy going straight to 8080, SSL is fully working.

It looks like I have achieved exactly what I set out to so I am very grateful for your help as I really couldn’t have done it without you!

Lastly, I have a question about security. I originally had that code that was causing issues in my Caddyfile because it came from a sample Caddyfile and supposedly increases security. Obviously I don't want to be insecure in any way, so I'm wondering if I should add anything to my Caddyfile or be aware of anything?

(Matthew Fay) #8

Good to hear Caddy’s working for you!

Not particularly! Caddy is secure by default.

Many people believe it's necessary to implement a battery of HTTP security headers, like the ones you had above. There are plenty of guides out there on the common headers and what they do (such as this one: https://www.keycdn.com/blog/http-security-headers).

In my opinion, these aren't necessary most of the time. In cases like Strict-Transport-Security, they can actually be harmful if you don't have a full understanding of how it works and when you should (or shouldn't) use it. Others like Content-Security-Policy can also cause lots of headaches for site owners who haven't taken the time to fully understand how to use it.

The oddball there is -Server, which some people believe reduces the likelihood a malicious actor can identify the software you use to serve your website, and therefore make it harder for them to exploit it. I’m not particularly convinced that it’s very effective, though.

I don’t generally run any of these security measures on my own sites, except occasionally Content-Security-Policy - but I don’t configure that on the web server, I specify it in a <meta> tag in the HTML document.
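
For example, something like this in the page's <head> (a minimal sketch; the actual policy depends entirely on what the site loads):

  <!-- only allow resources from the page's own origin -->
  <meta http-equiv="Content-Security-Policy" content="default-src 'self'">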
