Help with Caddy Install in Docker (Container Station) on QNAP

I’d be grateful for some guidance. I’m trying to install Caddy on my QNAP via Container Station. I was led here while scouring forums for the best way to secure Home Assistant/MQTT/Node-RED, all running on my QNAP NAS. I’m new to much of this and at this point can’t even determine whether I’m installing it correctly. After a few tries, some of the config settings default to:

Command: --conf /etc/Caddyfile --log stdout --agree=$ACME_AGREE

Entrypoint: /bin/parent caddy

In the advanced settings, I know to select Host mode for the network, but I’m not sure about the environment variables or shared folders. The shared folders seem important for persistent cert storage, as well as a location to add or update a config (a Caddyfile?).

Environment variables default to this.



I’m not sure if I should be adding or removing anything, but I had seen a note about CADDYPATH that would let me specify a location for future certs.

For shared folders, there are a few defaults for new volumes, but I believe I instead need to do “volume from host” and select existing folders on my NAS. I’m just not sure which ones are important or what their corresponding mount points should be.

After much trial and error, I’ve had some configs that didn’t error out, and I could browse to the IP of the NAS plus port 2015 and initially see a page that said “If you see this, Caddy is running”. Eventually I somehow ended up with a new page that showed 0 directories and an empty name, size, modified list.

Now I’m somewhat stuck. I don’t know if it’s installed properly. I’ve played around with dropping a text file called CaddyFile.txt in a few locations and then tried to see what happened when trying to reach my DuckDNS domain. I also port forwarded 80 and 443 to my NAS IP, and I changed the ports used by the QNAP web server to (theoretically) free those two ports for Caddy.

Any idea what I’m missing? I’m pulling my hair out.

Thanks in advance to anyone who can help!


Almost all the environment variables and volume mounts you need to set are highly dependent on the container you select.

A good container will have documentation listing and explaining all the different options you need to set and what you should set them to.

Caddy itself only cares about a few environment variables:

The container author might have set additional variables to be handled inside the container, though. You’ll need to refer to their documentation.

In Docker jargon, a “shared folder” is called a bind mount. This is when you mount a file or folder from the host onto the inside of the container.

You’ll want bind mounts for your site and your Caddyfile, but you could use either a Docker volume or a bind mount for your ACME files (/root/.caddy). I personally use entirely bind mounts.
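To make that concrete, here’s a rough sketch of what the equivalent `docker run` invocation could look like. This is an assumption-heavy example: the image name (abiosoft/caddy, which the `/bin/parent caddy` entrypoint suggests) and the host paths under /share/Container/caddy are placeholders you’d adjust to your NAS layout.

```shell
# Sketch only - image name and host paths are assumptions, not verified.
docker run -d \
  --name caddy \
  --network host \
  -e ACME_AGREE=true \
  -v /share/Container/caddy/Caddyfile:/etc/Caddyfile \
  -v /share/Container/caddy/certs:/root/.caddy \
  -v /share/Container/caddy/site:/srv \
  abiosoft/caddy
```

Container Station’s shared-folder UI is just filling in those `-v host-path:container-path` pairs for you.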

I don’t know who wrote this container, or the template QNAP uses for running it, but setting the PATH manually is a rare sight, and specifying ACME agreement to be false is weird.

Agreement to ACME terms is not assumed by Caddy, and the -agree flag is there to explicitly instruct Caddy that you do agree, so I consider setting -agree but specifying it to be FALSE to be some strange double handling indeed.

Anyway, with -agree=FALSE, Caddy will prompt you inside the container to accept the LetsEncrypt T&Cs. You don’t want that - the inside of the container is generally non-interactive. You can change ACME_AGREE from FALSE to TRUE (I mean, as long as you want to accept the terms and have Automatic HTTPS working).

By default, this is ~/.caddy. For a root user, this would be /root/.caddy. You can change this if you want, but you’ll need to update your volumes. I don’t bother; it’s as good a place as any.
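If you did want to relocate it via CADDYPATH, the environment variable and the bind mount have to agree on the container-side path. A hypothetical example (both paths are placeholders):

```shell
# Hypothetical: store certificates under /etc/caddycerts instead.
# The -e and -v container paths must match, or certs won't persist
# across container restarts.
docker run -d \
  -e CADDYPATH=/etc/caddycerts \
  -v /share/Container/caddy/certs:/etc/caddycerts \
  ...
```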

In the Command, you can see exactly what config file Caddy is looking for: --conf /etc/Caddyfile. There will be a default Caddyfile there. You want to bind mount your own Caddyfile on top of that.

If your heart is set on using the name “CaddyFile.txt” instead of the conventional Caddyfile, you’ll need to alter the Command so that Caddy looks for the right file, e.g. --conf /etc/CaddyFile.txt.

Once that’s done, you’ll need a Caddyfile that configures sites that listen on ports 80 and 443; if you’re using Automatic HTTPS, those are the ports it will use.
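For illustration, a minimal Caddyfile of that shape could be written out like this. The domain and the upstream address are placeholders, not anything from this thread (8123 is Home Assistant’s usual port, but verify yours):

```shell
# Write a minimal example Caddyfile (hypothetical domain and upstream).
# With a real hostname, Caddy v1 serves the site over HTTPS on 443 and
# redirects plain HTTP on port 80 automatically.
cat > Caddyfile <<'EOF'
example.duckdns.org {
    proxy / 192.168.1.10:8123 {
        transparent
    }
}
EOF
```

The `transparent` preset passes the original Host and client IP headers through to the upstream, which most web apps behind a proxy expect.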


First, let me say thank you for your response. As I’ve searched and searched, I’ve seen your name all over this forum, going to great lengths to help others. You are very kind.

I’ve spent time with the documentation for the container and it just wasn’t very clear. Some of it was fitting it into the UI for Container Station, and also because they list many different scenarios without much guidance on when or why you might use a particular one.

But I think I made headway through trial and error. I finally did a bind mount on the following:

I don’t know if I really need the srv one, since I’m only planning on doing proxying. And I have no idea if I need the .caddy as a local folder, but I figure it’s better to have it and not need it. The etc one with a Caddyfile in it (removed the .txt, thanks) finally elicits a different response from the console, and I feel a little more confident the install is reasonable.

The new problem I’m having is that Caddy fails while trying to set up HTTPS (I think):

Activating privacy features… 2019/01/31 23:38:45 making ACME client to get ToS URL: get directory at 'https://acme-v02.api.letsencryp’: Get x509: certificate signed by unknown authority

exit status 1

my Caddyfile looks like this: {
proxy / {



Any ideas?


Don’t bind mount /etc (the whole directory). There’s so much stuff in there that’s necessary for most containers to operate. For example, Caddy will fail if you ever have it try to connect to an HTTPS upstream, because the ca-certificates it needs to validate the connection won’t be available.

Just mount /etc/Caddyfile specifically. Or, mount /etc/caddy folder and put the Caddyfile in it, then point the --conf flag at /etc/caddy/Caddyfile.
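In Container Station terms, that means replacing the /etc bind mount with one of these two narrower mounts (host paths here are hypothetical):

```shell
# Option 1: mount just your Caddyfile over the default location:
#   -v /share/Container/caddy/Caddyfile:/etc/Caddyfile
#
# Option 2: mount a folder and repoint the --conf flag at it:
#   -v /share/Container/caddy:/etc/caddy
#   Command: --conf /etc/caddy/Caddyfile --log stdout --agree=$ACME_AGREE
```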

Then yeah, it’s not gonna be necessary, you can skip it.

I’m assuming you mean mounted from the host; I mount my .caddy folder on the host, personally. I never need to interact with it, but I like to have everything that’s mounted in a container be next to each other on the host so I can get to all the important bits and know that they’re in one place.

Haha, yep. Mounted over /etc, no longer have root trusted certificates. Can’t verify LetsEncrypt.

By the way, I can’t recommend enough that if your approach is a trial-and-error one, you add -ca to your Command to use their staging environment (fake certificates, but much less strict rate limits - meant for doing exactly what you’re doing: testing setups).

Last thing you want is to blow through your rate limits while trialing and testing somehow and end up not being able to get certificates at all for a week!

Staging Environment - Let's Encrypt

You can just remove the flag when you’re completely happy with the setup and it’ll migrate over to “real” LetsEncrypt certificates.
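For reference, a Command along the lines of the template above could look like this with the staging CA added (the staging directory URL is Let’s Encrypt’s published one; the other flags are copied from the earlier template and may differ in your container):

```shell
# Command field with the Let's Encrypt staging CA added for testing.
# Delete the -ca flag once you're happy, to switch to real certificates.
--conf /etc/Caddyfile --log stdout --agree=true \
  -ca https://acme-staging-v02.api.letsencrypt.org/directory
```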

Oh man, I’m SO close thanks to you. Those last directions were the key, and I’ve now got some action in my console. I’ve added the staging link on your advice, thank you. I think my last real challenge now is getting my Caddyfile syntax just right. Ideally, I’d like to secure my MQTT and Node-RED nodes through this. Does that seem reasonable? MQTT seems useful as an external endpoint potentially, and I would love to work with Node-RED flows remotely.

Are mqtt and node-red just hosting HTTP services? You could reverse proxy to them just like in your example Caddyfile.
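A sketch of what that could look like, with some caveats: Node-RED serves its editor over HTTP (its default port is 1880), so it proxies straightforwardly. Plain MQTT on port 1883 is raw TCP, not HTTP, so Caddy v1’s proxy directive can’t carry it; you’d need your broker’s MQTT-over-WebSockets listener instead (Mosquitto commonly uses 9001 for this). All hostnames and addresses below are placeholders:

```shell
# Hypothetical Caddyfile: Node-RED over HTTPS, MQTT via WebSockets only.
# Plain MQTT (TCP 1883) cannot pass through an HTTP reverse proxy.
cat > Caddyfile <<'EOF'
nodered.example.duckdns.org {
    proxy / 192.168.1.10:1880 {
        transparent
        websocket
    }
}

mqtt.example.duckdns.org {
    proxy / 192.168.1.10:9001 {
        websocket
    }
}
EOF
```

The `websocket` preset sets the Connection/Upgrade headers the proxy needs to pass WebSocket traffic through; Node-RED’s editor uses WebSockets too, hence it appears in both blocks.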

one last question. There is a lot of activity in the log. Looks like the process is scanning with responses like this:
2019/02/01 03:22:55 [INFO] - No such site at :80 (Remote:, Referer: )

Does that seem normal?

This error is generated when a client requests a website from Caddy that you haven’t configured it to serve.

So you’ve got configured, but someone requests or an empty string (no hostname) or an IP address. Caddy’s got no clue how to serve that request, hence the response.

You’ll see this kind of thing happen a lot on the public internet - people scan public ports all the time just to collect information or look for vulnerabilities. It’s fairly normal.

The remote in this case was, though - which is private, likely a device on your network - and they requested a private IP address. I’m guessing you typed in to your browser’s address bar?

Hmmm, I think there are some features in Home Assistant that do network scanning. I wonder if that’s what’s going on and triggering this.

Yeah, that’s a plausible explanation.

Although it was a bit more than just a port scan; it looks like a full-on HTTP request was sent. If Home Assistant is trying to keep track of HTTP servers on the local network, or it’s looking for devices that it communicates with via HTTP, that’d explain it.

Tracking that down sounds like the next exciting challenge. 🙂


Well, I finally got those constant errors to go away by restarting other containers; (node-red) was the offender. Logs are quiet now, but the proxy doesn’t seem to work. I’m getting this in the URL:
and on the page:

404: Not Found

What does your Caddyfile look like right now?

[xxx] {
proxy / {
log stdout

Hmm. Pull up a shell and run curl -IL, let us know what headers you get back.

Well, assuming I did that right (I used PuTTY to get into the NAS, which is where I’ve been able to run docker commands the few times I’ve tried the SSH approach):

HTTP/1.1 301 Moved Permanently
Connection: close
Content-Type: text/html; charset=utf-8
Server: Caddy
Date: Fri, 01 Feb 2019 05:07:30 GMT

HTTP/1.1 404 Not Found
Content-Length: 14
Content-Type: text/plain; charset=utf-8
Date: Fri, 01 Feb 2019 05:07:30 GMT
Server: Caddy
Server: Python/3.6 aiohttp/3.5.4


That was perfect.

The first block is Caddy getting the request for HTTP and redirecting the client to secure HTTPS.

In the second block, we can see Caddy has faithfully proxied to the upstream server at

Which means that the upstream server is giving you the 404 and Caddy is just passing it along.

Hmmmm, OK, that’s good to know. Strange that the site doesn’t work. When I use the proxy URL directly it works fine. What ports should be forwarded? Just 80 and 443?

The new URL it forwards to is the HTTPS version of the Home Assistant site using the external domain. That URL certainly wouldn’t resolve on the server, I don’t think. Is there some kind of DNS scenario needed here?