Anyone using Caddy on Docker on a Synology NAS (as Reverse Proxy only)?

Oh, not sure if it’s helpful, but could I mount the Caddyfile file itself instead of just that Caddyfile folder, to clean it up a little?

EDIT: wait so the first two are indeed file mounts while the third is a folder mount?

Yes, the first two are file mounts. You need them both; the Caddyfile relies on common.conf (unless you want to copy-paste the contents of common.conf into the Caddyfile wherever you see import - then you could just mount the one file and forget about it).
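To illustrate, a rough sketch of how that import hangs together (the hostname, contents, and upstream here are placeholders, not your actual config):

# common.conf - whatever directives the sites share, e.g.:
gzip

# Caddyfile - each site pulls the shared bits in:
yourname.ddns.net {
    import common.conf
    proxy /sonarr 127.0.0.1:8989 {
        transparent
    }
}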

OMG it started!!!

It created the certs in that folder too!

I noticed the container just says “restarting” for a long time, so I looked at the logs. The logs say this over and over:

2017-06-23 03:17:37 stdout 2017/06/23 03:17:37 [MYNo-ip-URL] failed to get certificate: [MYNo-ip-URL] error presenting token: Could not start HTTP server for challenge → listen tcp :80: bind: address already in use
2017-06-23 03:17:37 stdout 2017/06/23 03:17:37 [INFO][MYNo-ip-URL] acme: Trying to solve HTTP-01
2017-06-23 03:17:37 stdout 2017/06/23 03:17:37 [INFO][MYNo-ip-URL] acme: Could not find solver for: dns-01
2017-06-23 03:17:37 stdout 2017/06/23 03:17:37 [INFO][MYNo-ip-URL] AuthURL: https://acme-v01.api.letsencrypt.org/
2017-06-23 03:17:37 stdout Activating privacy features…2017/06/23 03:17:37 [INFO][MYNo-ip-URL] acme: Obtaining bundled SAN certificate
2017-06-23 03:16:45 stdout 2017/06/23 03:16:45 [MYNo-ip-URL] failed to get certificate: [MYNo-ip-URL] error presenting token: Could not start HTTP server for challenge → listen tcp :80: bind: address already in use

Oh boy, you’re lucky you didn’t single-handedly rate limit yourself then and there!

The problem appears to be that you’re using the host’s network, so Caddy inside the container is trying to bind to port 80 on the NAS, but some other process on the NAS is already using port 80, so it fails.

Not being familiar with Synology, I don’t know what that is. It could be another Docker container with host networking, it could be the web UI for the NAS… You’ll need to find out what it is and either disable it or work around it. If you can’t free up port 80 on the NAS, read on.

I have this exact issue with my unRAID server because its web interface listens on 0.0.0.0:80. Luckily my unRAID server is not actually at the edge of my network; I’m using port forwarding, so I can do this tricky business:

Router -> forwarded -> unRAID -> port mapped -> abiosoft/caddy
80                     8080                     8080:80
443                    8443                     8443:443

Now all that matters is that traffic hitting the external IP on port 80 eventually gets around to Caddy on port 80 on the inside of the container. It might look weird, but it works quite well.

To do this you need to set the Docker container to bridge networking (rather than host) and configure those Docker port maps.
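For reference, a rough CLI equivalent of those settings (the host paths are placeholders for wherever you keep things on the NAS; the abiosoft/caddy image expects the Caddyfile at /etc/Caddyfile and keeps its certificates under /root/.caddy):

docker run -d --name caddy \
    --net bridge \
    -p 8080:80 -p 8443:443 \
    -v /volume1/docker/caddy/Caddyfile:/etc/Caddyfile \
    -v /volume1/docker/caddy/certs:/root/.caddy \
    abiosoft/caddy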

I am googling now, and I am now quite sure what it is… port 80 redirects to the stock port 5000!
http://192.168.1.99 in a web browser will absolutely redirect to http://192.168.1.99:5000

Ironically, it uses the built-in nginx to do it.

Reading this now and seeing what best fix will be:
http://tonylawrence.com/post/unix/synology/freeing-port-80/
I am NOT liking this fix.

I want to add that I am fascinated by caddy server and really want to use it. I like the built in SSL with Let’s Encrypt for sure. Hopefully you don’t think I am silly for not just using the built in nginx reverse proxy: Synology Reverse Proxy with DSM 6.0 – Primal Cortex's Weblog

Not at all. Caddy is basically fire-and-forget HTTPS, it’s so great, and I absolutely love the Caddyfile.

Yeah, it’s unfortunate. Both the nginx config fix and the port forwarding workaround are complicated, which makes it really hard to write up as a neat little guide. Sadly, though, you’re up the creek for getting port 80 unless you commit to one of those fixes.

OK, I am ready to give this a shot in whatever way you think is best/easiest.
I am not 100% sure how to do that forwarding.
On my Asus router, I can forward port 80 to port 8080 and input my Synology NAS’s local IP address.
Now in my Caddy Docker container… if I don’t use “Use the same network as Docker host”, that gives me access to the Network, Port Settings, and Links tabs.

Under Port Settings, I would change the local port from “auto” to 8080, to intercept those forwards from the router and map them to container port 80?

Then the same thing again but for 443 to 8443?

Will I need to do anything in the Network tab? Bridge is listed there.

In Links, will I need to add links to those containers (nzbget, sonarr, and radarr)? I am thinking the port stuff may be enough, as those containers don’t need to read/write to each other?

I think I am close, but missing a few things to make this work.
Here are some screenshots of the Docker wizard I use, basically: http://imgur.com/a/E0x4Y

Perfect. Repeat that, forwarding :443 to :8443 with the NAS IP as well.

I’m guessing here, but from the way it’s laid out, I think “auto” just means it’s going to line it up to the same port within the container. You don’t want that, you want :8080 on the host to line up with :80 in the container. I’d say you’re spot on with how to do it.

Nope, bridge is good. You’re not doing any complex networking (well, any more complex than we’ve already got). If we were (like putting it on multiple networks, using different network drivers, etc.), this is where we’d do that.

Links are great - link all of them. Alias them to their service name (e.g. linuxserver-radarr1 to radarr). Once they’re linked, Docker’s internal DNS will resolve the alias to the container’s IP. Then, in the Caddyfile, you can refer to http://radarr:7878 or just radarr:7878, as per the alias. You’ll need to change your Caddyfile anyway (127.0.0.1 won’t work anymore, since the container is now its own entity on the LAN, “bridged” to the network through the server’s physical adapter - hence the name bridge network).

You could, instead of using the alias, put in the IP of your NAS on the local network. That would work too.
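In Caddyfile terms, the proxy lines would become something like this (8989 and 7878 are the Sonarr/Radarr ports from above; 6789 is NZBGet’s stock web port, so that one is my assumption for your setup):

proxy /sonarr sonarr:8989 {
    transparent
}
proxy /radarr radarr:7878 {
    transparent
}
proxy /nzbget nzbget:6789 {
    transparent
}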

Shoot, I missed this! I attempted to start without the Caddyfile change! It failed to start. I will correct the Caddyfile and try again.

So basically change

proxy /sonarr 127.0.0.1:8989 {
    transparent
}

to

proxy /sonarr sonarr:8989 {
    transparent
}

for each?

Thoughts on how to do that for the DiskStation?
We have that redir coded…

proxy / 127.0.0.1:5000 {
    transparent
}

I removed the auto-to-2015 port mapping as a troubleshooting step. No dice, as in it won’t create the container. No good error message or log.

Next I removed the 3 links for the corresponding containers: sonarr, nzbget, and radarr.

This time it started. Unfortunately I got this…
Looks like I need to use a new email address too, as I am rate limited…

failed to get certificate: acme: Error 429 - urn:acme:error:rateLimited - Error creating new authz :: Too many invalid authorizations recently.

Is there another way without the aliases? What do I change these to, if neither 127.0.0.1 nor the aliases work?

proxy /sonarr sonarr:8989 {
    transparent
}

The Caddy container can see the host by its IP address on the local network now, so use the IP of your NAS instead of 127.0.0.1.
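Sticking with the example IP from earlier, something like:

proxy /sonarr 192.168.1.99:8989 {
    transparent
}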

Bummer. Let’s Encrypt locked you out for making too many failed certificate requests. You will have to wait a week for the rate limit to clear.

The aliases aren’t stopping Caddy from working. Caddy will start up with a proxy to literally anywhere; it doesn’t have to be valid - the errors come later, when you try to connect via that proxy (it would give you error 502 if the upstream isn’t reachable). If those containers are linked correctly, though, they’ll be fine.

The problem is probably something wrong with the port forwarding. :80 and :443 on the router MUST reach :8080 and :8443 on the NAS, and Caddy MUST have ports :8080 and :8443 on the NAS mapped to :80 and :443 in the container. If anything isn’t lined up right, the ACME challenge will fail, Caddy will exit, and since you specified auto-restart, it will repeat this again and again (and Let’s Encrypt has rate limited you as a result).

Can I use a different email and/or DDNS to not wait a week?

I don’t understand why, but it is the aliases that keep the container from even starting. As soon as I removed those 3, it readily started. With the 3 in there, it just says “Create container abiosoft-caddy1 failed.” No other info. :(

If you can change your DDNS, yep. You can also use the Let’s Encrypt staging endpoint - in fact I’d highly recommend that. Or turn auto-restart off until you’ve got all the config nailed down. Or all of the above, really.
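If your Caddy version supports the ca subdirective of tls, pointing a site at the staging endpoint looks roughly like this (you get untrusted test certificates, but no production rate limits; hostname is a placeholder):

yourname.ddns.net {
    tls {
        ca https://acme-staging.api.letsencrypt.org/directory
    }
}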

Can’t be… Write the proxy directive out again; something might be mistyped.

I just started up an instance of Caddy 0.10.3 on my PC with a proxy / some-imaginary-backend:80 (literally) and it runs fine. Needless to say there is no some-imaginary-backend available on my network. Conks out when I try to access localhost:2015, though (error 502).
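That test amounts to a one-line Caddyfile (with no site address given, Caddy 0.x serves on :2015 by default):

proxy / some-imaginary-backend:80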

Worst case scenario, remove the links entirely, change the upstream in all your proxies to your NAS IP on the LAN, and try again.

I did have a second DDNS set up that I’d been wanting to move to. I used that and a different email address.

I redid everything and made a new container, including the 3 aliases. It will not create the container. It bothers me, as I don’t understand the reason.

I removed those 3 aliases. Then it created the container and started up perfectly. Everything is working from my cellphone while off my wifi. Sonarr, Radarr, and Synology are perfect. I am indeed using my NAS IP address in my Caddyfile, since I can’t get it to take the aliases.

What else can I try in regards to getting that to work with the aliases? Very weird.

nzbget isn’t working. I do have nzbget requiring its stock user ID and password. First I get prompted for the proxy password via pop-up. Then I get another pop-up that I presume is the nzbget web password. Do those not play nice together, and should I remove the nzbget web server password?

NZBGet auth is basic too, I think. Turn it off. Basic auth uses the Authorization header, which obviously can’t hold both Caddy’s required credential and NZBGet’s as well (unless they’re identical).
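In Caddyfile terms, the proxy-side credential is the one to keep - a sketch, with placeholder credentials and assuming NZBGet’s stock web port of 6789:

basicauth /nzbget myuser mypass
proxy /nzbget 192.168.1.99:6789 {
    transparent
}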

Yep, that was it. Looking in the logs, it was indeed passing the basic auth creds to nzbget! I guess I don’t need it anymore anyway!

So far so good still! So happy! I’ll whip up a guide for the community and see if any other tweaks are needed.

Are you familiar with nzb360? Technically I don’t need it, but I’m curious if you know how to get it working with Caddy.

It allows for local and remote setup, basically using a comma.
My local setup works fine, but I cannot get remote to work.
The key fields are:
IP/Host Address (notes to input HTTP auth as user:pass@host): IP_OF_NAS,myuser:mypass@CustomDDNSName.DDNSProvider.com

Server Port (If you are using a sub dir, enter it after the port): 8989/sonarr,80/sonarr

I assume it’s probably an issue with Caddy starting at 80 and then wanting to redirect to 443?

I found one older thread: nzb360

I also took a stab at converting to the full CNAME setup and merging into one Caddyfile without common.conf.

https://zerobin.net/?fb716416c8de9ebd#P2Ymysw6yAXrqPtn+QPJR+8zT3rcJkIEyKxmgRqbdHA=

Thoughts? It’s definitely less pleasing to look at! What are your thoughts on condensing? Maybe go back to common.conf?

I was unsure how to handle the redirs, but took a guess.
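For what it’s worth, what I mean by going back to common.conf with the CNAME setup is roughly this per service (hostnames are placeholders; common.conf would hold whatever directives the sites share):

sonarr.yourname.ddns.net {
    import common.conf
    proxy / 192.168.1.99:8989 {
        transparent
    }
}

radarr.yourname.ddns.net {
    import common.conf
    proxy / 192.168.1.99:7878 {
        transparent
    }
}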