Help for a beginner please

Hi - I’m laid up after an operation so I’m playing around quite a lot with my Synology NAS.

I’ve got a number of Docker apps set up and able to access externally (using the Synology built-in DDNS and Application Portal reverse proxy). I can also access them on my LAN using the built-in DNS server, as my router does not support NAT loopback.

I have been trying to set up Traefik but had issues, as I found adding labels via Portainer problematic - I think this was due to the Synology Docker package being an old version and me having linked containers (e.g. NZBGet linked to Sonarr, etc.).

I’ve now moved on to Caddy instead and have been trying a number of different Caddyfiles based on links from the internet. I’ve turned off my DNS server in case that interferes, and have forwarded ports 80/443 on my router to 40080/40443 on my NAS, which are then set up in the port mapping on the Caddy Docker container. I wanted to do this so I could leave my existing Application Portal setup in place rather than deleting it.
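For reference, the port mapping I mean looks roughly like this in docker-compose terms (just a sketch - the image name and volume path are illustrative, not my exact setup):

```yaml
# Sketch of the mapping described above: the router forwards WAN 80/443
# to NAS ports 40080/40443, which Docker publishes to the container's 80/443.
services:
  caddy:
    image: caddy                     # illustrative image name
    ports:
      - "40080:80"                   # NAS 40080 -> container HTTP
      - "40443:443"                  # NAS 40443 -> container HTTPS
    volumes:
      - ./Caddyfile:/etc/Caddyfile   # path is an assumption
```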

Main issue I have is that when I try to access mysynologyddns/sonarr I just get redirected to the DSM logon page. There’s nothing in the caddy docker logs or anything else that I can see to work out why this happens.

Any ideas?

For info, the Caddyfiles I have tried are based on the info on these pages:

Anyone using Caddy on Docker on a Synology NAS (as Reverse Proxy only)? - #41 by Whitestrake
Home · causefx/Organizr Wiki · GitHub

The latest Caddyfile I’ve tried is:

{
	tls yyyyyyyyyy@gmail
	errors stdout
	log stdout
	header / {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-XSS-Protection "1; mode=block"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
	}

	#basicauth [user] [pass]

	proxy /radarr
	proxy /hydra
	proxy /nzbget
	proxy /sonarr
	proxy /organizr

	redir /organizr/
}

The ultimate aim here is to have all requests go via Caddy to my Docker apps, and probably to have Organizr as the main way of accessing them - including authorisation to the underlying apps based on the Organizr login (e.g. no user/pass on each individual app).
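In Caddy v1 terms I picture something like this sketch (the upstream addresses and ports here are placeholders rather than my real setup, and it assumes each app has its URL Base set to the matching subpath):

```caddyfile
mysynologyddns {
	# Placeholder upstreams - assumes each app has its URL Base
	# configured to the matching subpath (e.g. /sonarr in Sonarr's settings),
	# so the prefix can be passed through unchanged.
	proxy /sonarr 192.168.1.10:8989 {
		transparent
	}
	proxy /nzbget 192.168.1.10:6789 {
		transparent
	}
	proxy /organizr 192.168.1.10:8080 {
		transparent
	}
}
```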

If anyone could share tips or their caddyfile that does something like the above I’d really appreciate it.

Forgot to mention, I’m using the official Caddy image from Docker Hub, and the ports in my Caddyfile are the local ports that are defined on each of the Docker containers.

I can access the apps using the local URLs listed in the proxy section of the Caddyfile.

Thought I may have stumbled onto a solution but sadly not.

I went back to Traefik, which had auto-discovered some of my services due to labels I had added on them in Docker. I noticed that the IP address Traefik was using was that of the relevant Docker container (172.x.x.x) rather than my NAS IP address. In my Caddyfile I’ve changed the setup to use the Docker IP address, but I still get redirected to the DSM login page.

Hi @gdb19, welcome to the Caddy community.

There’s nothing in the Caddyfile that would redirect to the Synology login (I’m assuming that’s what the DSM logon page is?).

The logical conclusion is that requests to your IP address are reaching the Synology’s web server instead of Caddy.

You can test this really quick by running curl -IL mysynologyddns/sonarr and looking for the Server header.

Thanks, that seems to be exactly what is happening. The output of the curl command shows Server: nginx, which is what Synology uses under the hood for their built-in Application Portal reverse proxy.

I did try a few other things, including setting up a DuckDNS DDNS and using that in the Caddyfile, and also forwarding ports 80/443 on my router to 81/444 in case there was anything set up by default on the NAS.

It just looks like there is something in Synology that forwards to the built-in Nginx instead of going to the Caddy Docker container, even though my router sends to 81/444 and these are the local ports defined on my Caddy Docker container.

Interestingly when I use the duckdns config in my Caddyfile and curl that address it doesn’t even connect.

I’m googling this now and wondering if this is what I need to do

But a little hesitant to do this in case it knackers everything else up.

It shouldn’t be strictly necessary.

All that should be required is that Caddy listens on X and Y ports for HTTP/S on the Synology and is reachable on them; and that your router forwards from 80 and 443 to ports X and Y properly on the Synology. If those are set up properly, it will work.

I’d be looking further at the router’s port forwarding configuration. Consider wiping all the port forwards currently in place, hard reset it, put new rules up and save them, and hard reset it again.

Thanks, I did spot that I also needed to add rules for ports 81/444 to my router firewall, so have done that. Still having issues, so I’m going to remove all the existing built-in Synology Application Portal reverse proxy rules and try again from scratch.

No luck unfortunately.

I’ve removed all my existing Application Portal setup and have forwarded ports 81/444 from my router to my NAS IP using the same ports. These are defined as the local ports on my Caddy Docker container and are also allowed in my router and DSM firewalls.

I’m using the built-in Synology DDNS, but when I try to access my DDNS address this redirects me to the DSM login on port 50005 (I have the option to redirect to HTTPS in DSM enabled).

I can’t for the life of me figure out why traffic isn’t hitting Caddy when I believe the ports all line up; I don’t think it can be a firewall issue, as the traffic gets through to the Synology DSM login.

If anyone is using Caddy on a Synology in Docker and sees this, I’d really appreciate some help.


Just to confirm - these ports aren’t relevant to LetsEncrypt, did you also reinstate the ports 80/443 forwards to Caddy’s ports on the NAS?

i.e. router:80 → synology:8080 and router:443 → synology:8443 (or whichever ports on the Synology you’ve got opened to the Caddy Docker container).

No, I didn’t - I’d thought Let’s Encrypt would just use the ports defined within Docker. I didn’t realise it needed specific ports.

Just tried to use 80/443 instead, but when I try to assign them in Docker it says they are already in use. I think from what I’ve read Synology reserves these ports, so the fix I mentioned above from Reddit may be my only option here.

Nah, you don’t need to assign them in Docker.

LetsEncrypt is connecting to your WAN address - to your router. You can port forward from your router to an arbitrary port on your Synology.

So you can use 81 and 444 if you like on your Synology host, but the router needs to forward port 80 to synology:81 and forward port 443 to synology:444.
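In docker-compose terms, the mapping I mean would look something like this sketch (the service name and image are illustrative):

```yaml
# Chain: WAN :80 -> synology:81 -> container :80
#        WAN :443 -> synology:444 -> container :443
services:
  caddy:
    image: caddy     # illustrative; any Caddy image works the same way
    ports:
      - "81:80"      # NAS port 81 -> container HTTP port
      - "444:443"    # NAS port 444 -> container HTTPS port
```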

ACME challenges specify the requirement for standard ports to be used.

Sorry I misunderstood

Yes I have router port 80 forwarded to port 81 on my NAS which in turn is used in my docker container. Likewise my router forwards 443 to port 444 on my NAS and this is used in docker.

My router firewall rule is setup to allow traffic over this port forwarding in both directions.

Alright, try curl -IH "Host: mysynologyddns" synology:81/sonarr (the Host header takes just the hostname; the path goes on the URL), replacing as necessary, e.g. synology with the private IP address of your NAS.

If we get a different result here than the previous curl command, we know the port forwarding isn’t working as expected. If we get the same result as the previous curl command, then we can infer that the Synology is not handing the traffic off to your Caddy Docker container properly.

Looks like my NAS is blocking port 81 somehow. I’ve run the curl command for all the apps I have set up in Caddy and get this each time:

Failed to connect to port 81: Connection refused

When I try the same curl command with port 80 I get this

HTTP/1.1 400 Bad Request
Server: nginx
Date: Wed, 24 Apr 2019 08:29:36 GMT
Content-Type: text/html
Content-Length: 166
Connection: close

So that looks to me like port 80 goes to the built-in Nginx and then fails, as no rules are set up.

Yeah, without actually having a port open to Caddy, there won’t be any way to access it.

I’m not familiar enough with Synology to troubleshoot that one, I’m afraid. Declaring the exposed port via Docker should be enough.

Thanks, really appreciate all your help. I’ll continue digging into the Synology setup and have asked for help on a Synology forum too.

If I get it sorted I’ll post here what the solution was.

Thanks again.

Actually, just one thing to note. I only have iOS devices handy, so I’m running all these commands in the Termius app connected via SSH to my NAS itself. So in effect I’m running the commands on my NAS and not on another computer - no idea if that makes a difference.

Just coming back to this. I managed to get Traefik up and running using the automatic watch function for all my Docker containers. This was with ports 30080 and 30443 forwarded from my router to the same ports on my NAS, and then those mapped to internal ports 80 and 443 in the Traefik container.

I’ve found a few limitations with Traefik, so have come back to trying to get Caddy to work.

I’m now using ports 30080 and 30443 as per Traefik, so I know those ports are open and not blocked by any firewalls, but I still can’t get Caddy to work.

New Caddyfile is below:

{
	errors stdout
	log stdout

	header / {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-XSS-Protection "1; mode=block"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
	}

	#reauth {
	#	path /calibre
	#	path /calibreweb
	#	path /heimdall
	#	path /lazylibrarian
	#	path /lidarr
	#	path /nzbget
	#	path /nzbhydra
	#	path /ombi
	#	path /organizr
	#	path /portainer
	#	path /radarr
	#	path /sonarr
	#	path /tautulli
	#	path /traefik
	#
	#	# if someone is not authorized for a page, send them here instead
	#	failure redirect target=
	#
	#	upstream url=<1>,cookies=true
	#}

	proxy /calibre
	proxy /calibreweb
	proxy /heimdall 172.18.0.4:80
	proxy /lazylibrarian
	proxy /lidarr
	proxy /nzbget
	proxy /nzbhydra
	proxy /ombi
	proxy /organizr
	proxy /portainer
	proxy /radarr
	proxy /sonarr
	proxy /tautulli
	#proxy /traefik
}

I’ve added in code to let Organizr manage authentication, but have commented it out until I get basic routing working.

At the moment when I try to go to the site I get an error saying Safari cannot open the page because it could not establish a secure connection to the server; trying the http equivalent link gives a message saying the network connection was lost.

A few points to note - my router does not support NAT loopback, so all testing is done from my phone over 4G, and the IP addresses in my Caddyfile are the Docker container IP/port as successfully used in Traefik. There are no errors in my stdout log in Docker, but there is very little there other than the standard message about sites being served over HTTPS and the Let’s Encrypt terms. I also can’t see any certs created in the mapped /root/.caddy folder, which seems odd?

Any ideas?

First up:

Don’t rely on this. Those addresses handed out by Docker can change at just about any time. They will not be permanent. You’re better off exposing those ports and using the NAS’ private IP in the long run. Ideally, you’d put all these containers in the same Docker Network so that they can refer to each other by the actual service name, but I don’t know if Synology does this.
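For example, with Caddy and Sonarr attached to the same user-defined Docker network, the proxy line could refer to the service by name - a sketch, with the service name and port assumed:

```caddyfile
example.com {
	# Docker's embedded DNS resolves "sonarr" to the container's
	# current IP, so address changes no longer matter.
	proxy /sonarr sonarr:8989 {
		transparent
	}
}
```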

Secondly, I think you’re trying to fix too many things at once. I strongly suggest you go back to basics and fix the issues one at a time.

Starting with:

So let’s replace the Caddyfile with something really simple.

http:// {
  status 200 /
}

Then run curl -IX GET synology:30080 on a computer in the same private network as the NAS. Substitute the NAS’ private IP address for synology here. Forget about external networking for now; let’s just get basic HTTP responding properly on the expected port.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.