Caddy reverse proxy on QNAP

Hi,

I’m trying to do the same as you, but I’m more of a newbie.

I don’t know how to do this part:

“Note that for other apps like Radarr, Sonarr etc, you need to configure them to have an URL base like /radarr so that it’s 192.0.0:1234/radarr rather than just 192.0.0:1234”

I’ve done some searching, but nothing explains explicitly how to do it. Is it an index.php path? A configuration inside the web app? A DNS record? I don’t know.

I would appreciate any insight, thank you.

Hey @Joao_Mendes,

To set the app’s URL base, you’ll need to access its own settings.

Here’s what it looks like for Radarr:


Hi @Whitestrake,

I figured it out with some more research. Thank you.

I’ve been struggling because qBittorrent doesn’t support that, so I’ve been thinking about what I could do:

1. I tried to redirect to myurl.myqnapcloud.com:6564/qbittorrent with an index.php so that Caddy could reverse proxy the port, but I get a “bad gateway” message or the page doesn’t load:

  • I get the “/qbittorrent” part by putting an index.php inside a “qbittorrent” folder (this folder is inside the web server’s main folder) with code that redirects me to myurl.myqnapcloud.com:6564 (the qBittorrent port);

  • So when I visit myurl.myqnapcloud.com/qbittorrent, that index.php is served and I end up at myurl.myqnapcloud.com:6564/qbittorrent;

  • But maybe my Caddyfile (caddy.conf) isn’t properly configured for this. Maybe I don’t need the “proxy /qbittorrent” part in caddy.conf for this.

2. So I saw your other post here, Reverse Proxy With QBittorrent Web UI, but I get a “502 Bad Gateway” message too:

QNAP System Configuration:

  • So, like @mupet0000, I disabled HTTPS in both the QNAP System settings and the Web Server. For HTTP I use port 8080 in System and 86 in Web Server;

  • I disabled the built-in Let’s Encrypt certificates in QNAP too;

  • All the ports are forwarded in the router, including 80/443, so they are free for Caddy to bind. For qBittorrent I forward port 6564 in the router too.

Caddyfile (caddy.conf):

In “/share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf”

So my host is: myurl.myqnapcloud.com
(it runs on 8080 for HTTP, but I think Caddy takes over and it runs on 443 for HTTPS).

In your post, you mention for your qBittorrent configuration: in the Caddyfile, “The port (443 above, as I used standard HTTPS during these tests) needs to match the port shown in the browser (…)”.

Based on that, my caddy.conf is:

myurl.myqnapcloud.com {
    tls email
    root /home/Qhttpd/
    gzip

    proxy /qbittorrent http://myurl.myqnapcloud.com:6465 {
        header_upstream X-Forwarded-Host {host}:443
        header_upstream -Origin
        header_upstream -Referer
    }

    proxy / localhost:8080 { # blank
        transparent
        header_upstream X-Forwarded-Host {host}
    }
}

I’ll test this with port 8080 in “{host}:”, but, in order to resolve an issue, I needed to reboot my NAS, and now I’m getting Let’s Encrypt rate limit errors when running Caddy again. I think I need to wait a whole week before I can get renewed certificates and run Caddy again.

I’ll post some research material on using the built-in Let’s Encrypt certificates with Caddy, because I guess there isn’t much information in one place about it for QNAP users:

Caddy reverse proxy on QNAP – there is a way to script and auto-start Caddy on boot, and how to renew LE certificates (I don’t know if Caddy does it)

PS: Sorry for the bad English.

Hey @Joao_Mendes, I’ve moved you to a new topic as you’ve got a whole host of issues to resolve.

We could try to tackle your entire current setup and all its issues at once, but I think it would be much faster if you simply ditch the current configuration and start from scratch.

Put together a simple Caddyfile that just proxies to qbittorrent. Get that working, then start adding complexity back in.
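For instance, a minimal sketch might look like this (the hostname and qBittorrent port 6564 are taken from your posts above, and the upstream headers are the ones from the qBittorrent post you linked, so treat it as a starting point to adapt, not a known-good config):

```
myurl.myqnapcloud.com {
    # Everything goes straight to qBittorrent for now; add the other
    # apps back once this works.
    proxy / http://localhost:6564 {
        header_upstream X-Forwarded-Host {host}:443
        header_upstream -Origin
        header_upstream -Referer
    }
}
```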

FYI - Caddy handles your certificates automatically. If it’s running, and you can access it from the open internet on HTTP(S) standard ports, it’ll handle it. That said…

This shouldn’t be required. You probably need to sort out your certificate storage. The CADDYPATH (defaults to $home/.caddy) must be preserved between starts. If it is being preserved, Caddy will keep using the valid certificates it’s already acquired, won’t have to acquire new ones every session, and won’t run into LetsEncrypt rate limits (unless you legitimately add so many domains that it clogs up LE).
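As a sketch, one way to pin the storage location is to set CADDYPATH in the same script that launches Caddy (the QPKG path in the comment is an assumption based on your install; any persistent directory works):

```shell
#!/bin/sh
# Point Caddy's certificate storage at a persistent location so that
# certificates survive reboots. $HOME/.caddy is the default; on a QNAP
# you may prefer something under .qpkg, e.g.
# /share/CACHEDEV1_DATA/.qpkg/Caddy/.caddy (an assumption, adjust it).
CADDYPATH="${CADDYPATH:-$HOME/.caddy}"
mkdir -p "$CADDYPATH"
export CADDYPATH
echo "Caddy will store certificates in: $CADDYPATH"
# caddy -conf /share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf
```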

Hi @Whitestrake, thank you for the support, I really appreciate it.

Okay, so I tried to run it on my host with Radarr, which supports a URL base path and is simpler than qBittorrent, I suppose, but I got this when I ran $ caddy -conf /share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf:

$ caddy -conf /share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf
/share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf:13 - Error during parsing: Unknown directive 'transparent'

I have two IPs pointing to my host (myurl.myqnapcloud.com):

  • 192.168.xx.x50 (main gateway)
  • 192.168.xx.x51 (running a VPN)

So instead of “localhost”, I assume I need to use the first IP.

My caddy.conf in /share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf is:

myurl.myqnapcloud.com
root /home/Qhttpd/
gzip

proxy /radarr http://192.168.xx.x50:7878 { # https://radarr.video/
transparent
header_upstream X-Forwarded-Host {host}
}

proxy / http://192.168.xx.x50:8080 { # blank
transparent
header_upstream X-Forwarded-Host {host}
}

}

The issue above is because of “transparent”, but I don’t know how to solve it. I put this Caddyfile together based on this post: Caddy on QNAP - set up reverse proxy.

When I run just $ caddy I get:

Activating privacy features… done.

Serving HTTP on port 2015

http://:2015

WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with ulimit -n 8192.

When I visit myurl.myqnapcloud.com:2015 I get “404 Not Found”, but Caddy is running.
My host is “missing an index file”, based on this Beginner Tutorial.

I tried to follow that guide to resolve the issue, so I ran $ caddy -host myurl.myqnapcloud.com and got:

$ caddy -host myurl.myqnapcloud.com
Activating privacy features… 2019/11/17 16:31:46 get directory at ‘https://acme-v02.api.letsencrypt.org/directory’: acme: error: 0 :: GET :: https://acme-v02.api.letsencrypt.org/directory :: urn:acme:error:serverInternal :: The service is down for maintenance or had an internal error. Check https://letsencrypt.status.io/ for more details., url:

"This shouldn’t be required. You probably need to sort out your certificate storage. The CADDYPATH (defaults to $home/.caddy ) must be preserved between starts. If it is being preserved, Caddy will keep using the valid certificates it’s already acquired, won’t have to acquire new ones every session, and won’t run into LetsEncrypt rate limits (unless you legitimately add so many domains that it clogs up LE)."

So I only need to run $ caddy - conf /path/caddyfile one time for each host? I don’t need to run it each time I edit the Caddyfile? It makes sense.

So when I need to edit the Caddyfile, I just stop Caddy and then run $ caddy again, and Caddy will pick up the Caddyfile at that path?

So with this Caddyfile I can’t run Caddy on my host. I tried to put myurl.myqnapcloud.com instead of that IP, but it’s the same.

PS: Sorry for the bad English.

I think you’re missing an opening brace after your site address.
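To illustrate, with the brace added, the Caddyfile you posted above would read (contents otherwise unchanged, just reindented):

```
myurl.myqnapcloud.com {
    root /home/Qhttpd/
    gzip

    proxy /radarr http://192.168.xx.x50:7878 { # https://radarr.video/
        transparent
        header_upstream X-Forwarded-Host {host}
    }

    proxy / http://192.168.xx.x50:8080 { # blank
        transparent
        header_upstream X-Forwarded-Host {host}
    }
}
```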

This is running with no configuration. So it tells us that the Caddy binary functions, but not much more.

Looks like LE was down when you tried. This is just a matter of unfortunate timing, not an issue on your end.

You should only ever need to run Caddy normally. You shouldn’t need to run it in any special manner after editing the Caddyfile or for any other reason. Always run it the same way (as a service maybe, e.g. via supervisor).

You don’t need to stop Caddy at all. You can signal it with USR1 and it will - as it’s running - load the new Caddyfile. Pay attention to the output (process log) when you do this, it will tell you if the new Caddyfile was successful or not (if the Caddyfile is bad, it will reject it and keep running the previous working Caddyfile).
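For a running Caddy, that reload is a single command, e.g. kill -USR1 "$(pidof caddy)" (pidof is an assumption about your system; pgrep -x caddy works too). The sketch below uses a stand-in background process so you can see the signal being delivered without touching a real Caddy:

```shell
#!/bin/sh
# For the real thing you would run:
#   kill -USR1 "$(pidof caddy)"
# Here, a stand-in background process demonstrates USR1 delivery.
( trap 'echo "got USR1, would reload Caddyfile now"; exit 0' USR1
  while :; do sleep 1; done ) > /tmp/usr1_demo.log &
pid=$!
sleep 1              # give the subshell time to install its trap
kill -USR1 "$pid"    # the same signal you would send to Caddy
wait "$pid" 2>/dev/null
cat /tmp/usr1_demo.log
```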

Hi @Whitestrake, thank you once again.

So I tried to run $ caddy -host myurl.myqnapcloud.com to see if I can just run it on my host, and I got:

Activating privacy features... done.

Serving HTTPS on port 443 

https://myurl.myqnapcloud.com

Serving HTTP on port 80 

http://myurl.myqnapcloud.com

WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with `ulimit -n 8192`.

But when I go to myurl.myqnapcloud.com I get: 404 Not Found. I guess it’s because this way Caddy can’t access the “root /home/Qhttpd” needed to serve myurl.myqnapcloud.com; it’s specified in the Caddyfile.

While running $ caddy - conf /share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf I now get:

args:2 - Error during parsing: Unknown directive ‘-’

So I assume the Caddyfile isn’t picked up when I run $ caddy.

I can’t understand where the “-” is in my Caddyfile.

The opening brace after my host? So like this at the beginning of the Caddyfile: myurl.myqnapcloud.com {

But the “transparent” issue hasn’t shown up again (yet).

You can signal it with USR1 and it will - as it’s running - load the new Caddyfile. Pay attention to the output (process log) when you do this, it will tell you if the new Caddyfile was successful or not (if the Caddyfile is bad, it will reject it and keep running the previous working Caddyfile).

So I need to signal Caddy with USR1 to see it in the logs? How do I do that? Because when I look at caddy.log, I didn’t see any request to reload caddy.conf.

PS: Sorry for the bad English,

Best regards.

Yes, that’s the likely cause of 404s here.

You’ve got a space here between - and conf. There should be no space, i.e. caddy -conf /share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf.

Essentially, this is causing Caddy to think you’re supplying a “short” Caddyfile straight on the command line. (You can do this, like caddy -host example.com gzip 'log stdout' 'proxy / localhost:8080' for an arbitrary example).

Yes. I can see a closing curly brace at the very end of your Caddyfile that isn’t matched. I’m assuming it’s meant to match that opening brace, exactly as you’ve noted here.

Here’s what it looks like in my logs when I signal one of my Caddy servers in a Docker container:

caddy_1       | 2019/11/18 16:06:48 [INFO] SIGUSR1: Reloading
caddy_1       | 2019/11/18 16:06:48 [INFO] Reloading
caddy_1       | 2019/11/18 16:06:48 [INFO][cache:0xc000211720] Started certificate maintenance routine
caddy_1       | JWT middleware is initiated
caddy_1       | JWT middleware is initiated
caddy_1       | JWT middleware is initiated
caddy_1       | JWT middleware is initiated
caddy_1       | JWT middleware is initiated
caddy_1       | JWT middleware is initiated
caddy_1       | JWT middleware is initiated
caddy_1       | JWT middleware is initiated
caddy_1       | 2019/11/18 16:06:51 [INFO][cache:0xc0001b6dc0] Stopped certificate maintenance routine
caddy_1       | 2019/11/18 16:06:51 [INFO] Reloading complete

Ignore the JWT spam, it’s just a plugin I have configured individually across a number of my own sites.

When it does this reload, it checks for changes in the Caddyfile. If it’s changed, and the new version is valid, it’ll load it. If it’s not valid, it’ll spit out an error and keep running with the old configuration.


Hi @Whitestrake, thank you once again for your patience,

It works now !!

Yes, I checked the log file and saw that. So Caddy automatically stops and restarts the certificate maintenance for existing certificates when I edit the Caddyfile. Nice.

For Radarr, when I go to myurl.myqnapcloud.com/radarr, HTTPS works, but when the app starts it goes to just the port, like myurl.myqnapcloud.com:7878/radarr, and I lose HTTPS. Caddy should proxy the port, right?

What about this:

WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with ulimit -n 8192.

I ran the $ ulimit -n 8192 command and nothing happened, so I assume it was accepted, and I guess I’ll have no problem with that in the future.
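From what I’ve read, though, ulimit only applies to the shell it runs in and its children, so it probably needs to go in the same script that starts Caddy (a sketch; the conf path is the one from my setup):

```shell
#!/bin/sh
# Raise the open-file limit for this shell and everything it starts.
# It does not persist across reboots or apply to other sessions, so it
# belongs in the script that launches Caddy.
ulimit -n 8192 2>/dev/null || echo "could not raise limit (hard limit too low?)"
echo "current open-file limit: $(ulimit -n)"
# exec caddy -conf /share/CACHEDEV1_DATA/.qpkg/Caddy/caddy.conf
```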

Best regards.
João Mendes