Anyone using Caddy on Docker on a Synology NAS (as Reverse Proxy only)?

I am trying to set up a reverse proxy to my Synology NAS as well as a few other apps running on it. I found helpful guides on writing the Caddyfile, and I feel pretty comfortable with it.

What I am missing is everything else leading up to actually using the Caddyfile.

I fire up Docker on my NAS, search the images database for caddy, and I see 423 caddy images. For the most part, one sees the top 5 or so images and would probably go with one of those. I won’t list the ones I’ve been reading about for now.

At this point, as with my other images, I set up the container by mapping mounts to my physical volume. This is where I am unsure what needs to be mounted where. As an example, it’s noted that the Let’s Encrypt certificates should be housed outside of the container, so that is a prime example of a mount I’d need. Another important mount will be where I put/keep the Caddyfile. Again, I don’t fully understand how/where to mount this exactly.

Before diving into more details, I figured I’d start and see if anyone else has this same objective completed. So, is anyone already doing this and mind shedding some light?

Ultimately, I will create a step by step guide based on my learning to go along with my other guides. It appears most people are using nginx, but I am really excited about caddyserver and hope I can make it happen.

I do this on my unRAID server and on a number of VPS’, so I can definitely help you out here with Docker volumes, configuration, etc. It’s mostly arbitrary how you want to set it up, the only requirement is that it’s internally consistent.

If you’re pulling from the Docker hub, look no further than abiosoft/caddy (unless you need specific plugins).

Here’s what you need to worry about:

  • Caddyfile
  • Certificate storage
  • Static assets, websites

I usually store my Caddyfile right next to the folder I have Caddy use for certificates. Let’s call it /home/whitestrake/caddy. My websites are in /srv (e.g. /srv/example.com). I’m going to map my Caddyfile from its real location in /home/whitestrake/caddy/Caddyfile to the container’s /etc/caddy/Caddyfile. Looking at the documentation for abiosoft/caddy, the certificates in the container are located in /root/.caddy, so I’ll map /home/whitestrake/caddy/certificates to that. My static assets just get mirrored right over from /srv to /srv.
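For reference, the same mapping expressed as a single `docker run` command sketch (paths as above; the flags are the ones the Compose example below overrides, not necessarily the image defaults):

```shell
# Sketch only: equivalent of the mounts described above
docker run -d --name caddy \
  -p 80:80 -p 443:443 \
  -v /home/whitestrake/caddy/Caddyfile:/etc/caddy/Caddyfile \
  -v /home/whitestrake/caddy/certificates:/root/.caddy \
  -v /srv:/srv \
  abiosoft/caddy -log stdout -conf /etc/caddy/Caddyfile
```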

I don’t know how familiar you are with Compose, but here’s what my docker-compose.yml would look like. I have the official php container in there as well, but you could use abiosoft/caddy:php instead and include startup php-fpm7 in your Caddyfile as mentioned in the docs on Docker hub.

version: '3'

services:
  caddy:
    image: abiosoft/caddy:latest
    command: ["-log", "stdout",
      "-email", "letsencrypt@example.com",
      "-conf", "/etc/caddy/Caddyfile"]
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/whitestrake/caddy/certificates:/root/.caddy
      - /home/whitestrake/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /srv:/srv

  php-fpm:
    image: php:7-fpm-alpine
    volumes:
      - /srv:/srv

My Caddyfile usually looks like this:

example.com {
  root /srv/example.com
  ...
  fastcgi / php-fpm:9000 php # Just works because sites are in /srv in both containers
}
sub.example.com {
  ...
}

So basically swap /home/whitestrake/caddy for wherever on your server you want to actually store this (I usually use /docker/web, actually) and you should be good to go. If you change which Docker container you use, you’ll need to consult the new container’s documentation to figure out where you’ll need to map to on the inside, but it’s mostly the same.

3 Likes

Wow, thank you so much! Abiosoft was indeed the top hit, so I was planning to use that one!
This is exactly what I needed! I can’t wait to try it when I get home from work!

Let me know what you think about my approach and let me know if I am off in anything…

In my other 3 containers, all from linuxserver.io, they utilize:
-e PGID for GroupID - see below for explanation
-e PUID for UserID - see below for explanation
-e TZ for timezone EG. Europe/London

I then map/add environment variables 101, 1026, and my timezone respectively, as confirmed when I SSH in. How is this handled in this setup? I basically have one user account on my NAS; it’s a renamed admin account, and admin itself is disabled.

This is probably common knowledge, but let me map out my overall process.
“On the first screen, tick the ‘Enable Auto Restart’”
Click Advanced, and edit a few options within the tabs…

Folder creation: In regard to my physical volume, I create all of the folders I will need first. I have /apps/ right off root. Therefore in this case, I would create the folders /apps/caddy/ and /apps/caddy/certificates/. I also keep a separate config folder for each app, so this sounds like a perfect place for the Caddyfile itself. Therefore I would also create: /apps/configs/caddy/ and /apps/configs/caddy/Caddyfile/
Lastly, it sounds like I need to create /srv too. I currently do not have srv or any PHP going on though.

Volume Mapping: In the volume tab, all of my other containers have File/Folder /apps with Mount Path /apps, so I was hoping to keep that consistent. So for this container, in my volume tab I would have:
/apps /apps
/apps/caddy/certificates /root/.caddy
/apps/configs/caddy/Caddyfile /etc/caddy/Caddyfile
/srv /srv

I would place my Caddyfile in /apps/configs/caddy before launching.

hmm wait a moment… is this duplicating what the docker-compose.yml does? For now I will continue noting how I usually do this inside the Synology Docker setup process…

Network tab: I tick the box for “Use the same network as Docker Host”.

Port settings tab and links I leave alone. The default ports should then work.

Then in the variables tab I add the 3 environment variables discussed above, PGID, PUID, and TZ.

Then I click OK, next, and finish. Then the container starts to boot up.

EDIT: made some corrections to what I think the mount mapping is…

linuxserver.io are a fantastic crew, quite active in the unRAID community too I believe. Their containers are tailor made for setups like ours, and their container init script is designed (along with other useful features) to read env vars, such as PGID, PUID, TZ, etc (all usually documented on their container READMEs). The Caddy container isn’t designed in the same way, and doesn’t check for these vars, so while you could set them if you like they would have no effect. Caddy runs as root in abiosoft/caddy, which avoids port binding issues, file permission issues, etc… I should note that running as root in a container is a point of contention for the highly security-minded, though.
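As a sketch, this is how a linuxserver.io-style service consumes those variables in Compose form (values taken from the post above; abiosoft/caddy simply ignores them, so there is no need to set them for Caddy):

```yaml
services:
  sonarr:
    image: linuxserver/sonarr
    environment:
      - PUID=1026          # your NAS user's UID (as confirmed over SSH)
      - PGID=101           # your NAS user's GID
      - TZ=Europe/London   # your timezone
```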

As a point of note, I advise against this. A misconfiguration of DNS, networking, or storage may cause Caddy to go into a boot loop powered by Docker’s auto-restart, quickly smashing LetsEncrypt’s rate limits.

Neither are necessary unless you want to serve a website off disk. It sounds like you’re planning to just proxy services, gonna take a wild stab and say Sonarr, NZBGet, etc… I personally also host a landing page with links to those and some other useful services, and it uses PHP. If you don’t want to do that, you can skip /srv entirely and skip a php container while you’re at it.

Pretty much! Docker Compose is basically a way of templating out an arbitrary number of docker run containers, in a shared network, into a YAML formatted file for ease of reading. You put that in a folder and run docker-compose up (usually with -d to daemonize it) and Docker does the rest internally. The UI for your NAS also fills the role of Docker Compose in that you use it to write out your configurations and hit “finish” and the NAS does the rest.
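A typical workflow sketch, assuming the compose file lives in /docker/web as mentioned above:

```shell
cd /docker/web                # the folder containing docker-compose.yml
docker-compose up -d          # create and start all services, detached
docker-compose logs -f caddy  # follow Caddy's stdout log
docker-compose down           # stop and remove the containers when done
```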

The rest of your steps look pretty good to me (although I don’t run a Synology myself so I can’t speak to the NAS-specific steps).

3 Likes

I didn’t get a chance to tinker last night, but let me ask some clarifying questions before I try tonight…

You are 100% on point! I don’t need a landing page and basically just want to be able to get to those 3 apps securely remotely, so absolutely 100% reverse proxy only. When you say “you can skip /srv entirely and skip a php container while you’re at it,” what does that mean exactly as far as my setup/config? I won’t mount /srv, but what else do I change? Basically make sure my Caddyfile and common.conf do not start PHP, such as startup php-fpm7?

I was planning on using these two, but I’ve stripped out all the apps I am not using:
“caddyfile” config example: Caddyfile (v0.10.x) - Reverse Proxy Usenet Apps Config - Pastebin.com
“common.conf” config example: Caddyfile (v0.10.x) - Common.conf Example - Pastebin.com

The potentially related lines I see are:
caddyfile:

####################################################################################
# web domain server block
####################################################################################

SiteNameOmitted.no-ip.org {
  tls EmailOmitted@mail.com # Email for Let’s Encrypt verification
  startup /caddy/php/php-cgi -b 127.0.0.1:9000 &
  import common.conf
}

####################################################################################
# localhost server block
####################################################################################

http://localhost {
  import common.conf
}

common.conf:

# The code below will proxy PHP requests
fastcgi / 127.0.0.1:9000 php

In regard to the issue with using “Enable Auto Restart”: as I was browsing other Caddy containers, a few had something along these lines about what is needed to reuse the same Let’s Encrypt certificates…
Docker:

Certificate Persistence
If you use alpine-caddy to generate SSL certificates from Let’s Encrypt, you should persist those certificates outside of the container. In the instance of a container failure, this allows the container to reuse the same certificates, instead of generating new ones from Let’s Encrypt.

For information on including this into your Caddyfile see the Caddyfile tls specification.

The certificates are stored in /root/.caddy inside of the container, and thus you must connect an outside directory to that directory to allow persistence. For docker-compose.yml files, under the volumes declaration, include:

  • ./.caddy:/root/.caddy
    or

docker run -v $(pwd)/.caddy:/root/.caddy

Yoba has:

Optional but advised. Save certificates on host machine to prevent regeneration every time container starts.
Let’s Encrypt’s RATE LIMITS explain the number of times you can regenerate certificates.
$ docker run -d -v $(pwd)/Caddyfile:/etc/Caddyfile -v $HOME/.caddy:/root/.caddy -p 80:80 -p 443:443 yobasystems/alpine-caddy

Would that allow me to do auto restart without fear of slamming Let’s Encrypt? If so, I am not 100% sure how to enable that in my Caddyfile or common.conf. I am storing my certs at /apps/caddy/certificates/, so that is probably half the battle.

Yep. Just use abiosoft/caddy:latest, don’t bother using startup php or fastcgi or similar, you’re good. You’re spot on with the lines you can remove from your boilerplate Caddyfiles.

To be perfectly honest, I use restart: always in my docker-compose.yml because I’m pretty confident in the config I use. But it only takes one DNS issue for you to plow through pending auth rate limits, even if your own config is perfect, so it’s always something I advise against. Making sure your certificates are persisted (this is what you’re achieving by mounting /root/.caddy from inside the container to your /apps/caddy/certificates folder) will definitely let you restart your Caddy instance as much as you like, though, assuming no other issues.
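As a sketch, the restart policy and the persistence mount discussed here live together in docker-compose.yml (using the /apps paths from this thread):

```yaml
services:
  caddy:
    image: abiosoft/caddy:latest
    restart: always   # only reasonable because certificates persist below
    volumes:
      - /apps/caddy/certificates:/root/.caddy   # survives container recreation
```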

1 Like

I appreciate all of your help, especially with my obvious ignorance to what I am doing, and helping me learn along the way.

I can’t wait to try this tonight! I have a few stupid questions…
What happens when I connect to my DDNS without the URL base?

Where/how is the authentication taking place? I know how to use logins for each app, but I was thinking there was an initial secure login of sorts first? I may be confusing this with the nginx reading I did. It will only be me accessing remotely; I just want to make sure I am secure. I read around a little bit, and it sounds like the http.login plugin may be needed?

A follow-up question to that would be whether I should be doing something similar to Enable Brute Force Protection.

I may be missing something, but shouldn’t I enable Strict Transport Security? I read the blurb but I do not think I fully understand why I wouldn’t want to use it…
Strict-Transport-Security “max-age=31536000;”

Going off the boilerplate:

root /caddy/www

It will try to serve you files out of that directory. You might want to redirect it, perhaps to Google or your most commonly used service:

redir {
  # Only redirect if the request is for the webroot
  if {path} is /
  # Replace with wherever you like
  to /sonarr
}

Nowhere, currently. The easiest and quickest way to remedy this would be to use basicauth:

# Blanket basicauth requirement for all requests
basicauth / user pass

Caddy deprecated the use of .htpasswd files and does not log failed basicauth attempts, as far as I know. This makes it a little difficult to implement fail2ban.

There’s also the third-party plugin http.login, which supports a few auth backends including .htpasswd and Github OAuth2, and works using JSON Web Tokens (via the http.jwt third-party plugin). Github supports 2 factor authentication, though, which would make brute forcing astronomically difficult. I’d rate this as a low priority, though. Basic auth should be plenty.

Looks like that’s included in the common.conf boilerplate too:

Strict-Transport-Security “max-age=31536000; includeSubDomains; preload”

1 Like

Wow thank you again! I appreciate your fast responses!!!

My goal here is to provide a guide and best practice for others following my other guides for this specific setup. What would be best practice as far as placement goes for both of these within the caddyfile or common.conf?

I’ve initially placed them right after the header / section in common.conf, but I’d like to confirm.

I just thought of an unrelated question. The Synology NAS is accessible via web admin at ServerName:5000 over HTTP. I am not sure if I can add a URL base to it like /synology, going by my Googling so far. If not possible, I could implement a CNAME of sorts such as synology.MyDDNS.com? Else maybe I make my redirect always point there too, I suppose. I like Sonarr being the default though.

I was under the impression that for each proxy that i use, that I would need to enable the URL base for each?

I would probably set both HSTS and Basicauth in the Caddyfile, just before you import common.conf. By virtue of being very difficult to revoke, HSTS should probably be a conscious opt-in on each site you want to enable it for. Basicauth credentials are definitely not “common”, so they should also be set manually for each site.

Then again, lots of the “common” conf file isn’t very universal. I don’t use a common file at all because the repetition is usually just gzip. Here’s how I do my own sites:

# /etc/caddy/Caddyfile
import /etc/caddy/sites/*.caddy
# /etc/caddy/sites/example.com.caddy
example.com, www.example.com {
   ...
}

Since you’ve probably got just the one site, I wouldn’t bother with multiple files at all, just keep it all together. The additional complexity doesn’t serve you, so keep it simple.

If the backend doesn’t support setting a URL base, you’re going to run into issues if you try to subfolder it. You’ll need to use subdomains. I saved myself a lot of time and effort by setting a CNAME record for *.example.com → example.com. Now I can use any subdomains I like with zero notice, and if I need a subdomain to point somewhere else instead, I can override it for that subdomain (by adding a subdomain.example.com A/CNAME record).
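Illustratively, the zone records involved might look like this (example.com and the IPs are placeholders):

```
; wildcard CNAME: any subdomain without its own record falls through to the apex
*.example.com.        CNAME  example.com.
; apex points at the server
example.com.          A      203.0.113.10
; an explicit record overrides the wildcard for that one subdomain
special.example.com.  A      198.51.100.7
```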

You can set up a different redir quite easily - it’s just redir /synology https://synology.example.com. No need to hijack your web root away from Sonarr.

Things get real weird real fast when backends can’t serve their CSS and JS and none of the hyperlinks function, and it’s nearly impossible to have a complex app that is URI agnostic. That leaves us with two options - either set a URL base, or host it on the web root, using subdomains as required.

1 Like

I fired everything up. I decided not to mess with CNAMEs or figuring out how to get Caddy to point to the root 5000 port for now.

I couldn’t recall which ports to forward on my router. I forwarded 80 TCP to my NAS and nothing happened. So I tried adding 443 and nothing happened. When I say nothing happened, I mean I tried to connect to my DDNS from my phone over cellular data and it didn’t connect.

Next I got back on my laptop and decided to connect to my NAS at :2015 in Chrome.
I think this is bad news in a way, as it worked:

Caddy web server.
If you see this page, Caddy container works.
More instructions about this image is here.
More instructions about Caddy is here.

From the caddyfile I had removed “startup /caddy/php/php-cgi -b 127.0.0.1:9000 &”
from the common.conf I had removed “fastcgi / 127.0.0.1:9000 php”

Since that page worked, that means it’s using your sample/default Caddyfile and not mine?

I made a slight change to my mounts from what we had originally discussed, to basically keep everything under /apps/config/caddy, having both:
/apps/config/caddy/Caddyfile
/apps/config/caddy/certificates

In the volume tab I mounted /apps/config/caddy/Caddyfile to /etc/caddy/Caddyfile
and /apps/config/caddy/certificates to /root/.caddy

In /apps/config/caddy/Caddyfile I put the Caddyfile and the common.conf.
FYI, the Caddyfile has no file extension; I removed the .txt from it. Maybe this is another failure point. I just used Windows Notepad and not the Notepad++ I use for Python stuff…

I noticed on the Environment tab there is an Execution Command box. It was pre-populated with:

/usr/bin/caddy --conf /etc/Caddyfile --log stdout

I realize we are venturing into Synology land, hopefully you can still assist.
Thanks!

Sorta - but we’re still within the realm of Docker container configuration.

Remember - the Caddy process in the container only sees the file system on the inside, so you want to line it up to where you mounted the Caddyfile.

Change this:

/usr/bin/caddy --conf /etc/Caddyfile --log stdout

To the location of the Caddyfile inside the container. Based on your post:

It should be:

/usr/bin/caddy --conf /etc/caddy/Caddyfile --log stdout

P.S. Your screenshots look pretty good!

1 Like

I will try again tonight! Darn it! Good catch!

Since that command is automatically already in there, I am thinking I’d rather just change that mount so future users won’t have to edit that spot. Basically map /apps/config/caddy/Caddyfile to /etc/Caddyfile right?

Do you mind also setting me straight on my Router Port Forwarding?
Only need TCP 80 and not TCP 443?

Also, my NAS appears to not offer a URL base option. I am not really sure how and where to add it in the configs as a proxy and/or with the redir option you provided. It is only reachable at http://MyNASName:5000 or http://192.168.1.99:5000. Therefore I’d expect http://MyDDNSName:5000 to somehow work.

redir /synology https://synology.example.com

I am just tempted to make this whole setup work with the CNAMES too for all apps…
I am almost afraid to start over though. I kind of like having the common.conf so that both the web domain server block and localhost block works the same. I think they did this as they had issues connecting to their servers locally? I do question that though in my setup as I dont have PHP going on. When local I would just connect directly anyway… so in that case I could remove that block and then just move everything into caddyfile.

EDIT:
I started looking at the example Caddyfile I have… although I don’t use Plex, I could create this for all 4 things: NZBGet, Sonarr, Radarr, and Synology?

####################################################################################
# Plex subdomain code block
####################################################################################

plex.yourdomain.com {

  gzip

  # Separate log file for Plex server
  log /caddy/logs/plexaccess.log {
    rotate_size 1 # Rotate after 1 MB
    rotate_age 7  # Keep log files for 7 days
    rotate_keep 2 # Keep at most 2 log files
  }

  errors /caddy/logs/plexerror.log {
    rotate_size 1 # Set max size 1 MB
    rotate_age 7  # Keep log files for 7 days
    rotate_keep 2 # Keep at most 2 log files
  }

  proxy / 127.0.0.1:32400 {
    transparent
  }
}

EDIT2: I started reading the manual more and decided to make my own “reverse proxy only” config.
I am definitely curious how to combine this into just a Caddyfile.
I tried to look up every command to understand what it is doing. One thing I couldn’t find is what “-Server” does in the common.conf, in the header HSTS stuff.

Another thing: ideally, would I remove the URL base proxy method altogether in the common.conf, since I’ve added the CNAME method in the Caddyfile?

I think I may also need to create a /caddy/logs folder and mount it too.

I’d greatly appreciate your help making this all into one Caddyfile without the common.conf file that I can easily share and others can simply plug in their info and use it.

Reverse Proxy Only New Caddyfile:
https://zerobin.net/?67894bcbb2fb5c6a#iQ1QL21B0tsDikCT41GgwPXf6kSGc/cWF8oq8e5L7LA=

Reverse Proxy Only Common.conf example:
https://zerobin.net/?4ed2769528df1c4a#nJfoL04CRcvyxNlCRMmkbtlhfLXB/22Hb74A9X71cFE=

EDIT 64: haha
OK, I got a response from the original Caddyfile creator. It was leftover from his nginx-to-Caddy conversion. What are your thoughts on those other HSTS directives like “preload + subdomains”?

EDIT 99:
I made all the changes I thought I should, but got this error and the container wouldn’t start: “Start container abiosoft-caddy1 failed: rpc error: code = 2 desc = “oci runtime error: could not synchronise with container process: not a directory””. Google didn’t find anything helpful, but I am guessing a path, folder, or mount is wrong. I tried adding the /caddy/logs and also a /caddy/www.

Yeah, that’ll do it nicely.

:443 is mandatory IF you want Caddy to manage your certificates. Without both :80 and :443 available it will either fail to start (if Caddy can’t bind them) or fail to validate with Let’s Encrypt (if Caddy can bind to them inside the container but they aren’t exposed to the public). You’ll want both.

This is not an option your NAS, or Caddy, is capable of providing. Setting a URL base is a function of the app you’re proxying, and it must be built into the app as a feature (many don’t). When correctly implemented, it makes it so that links like yourapp/foo.html are instead rendered as yourapp/baseurl/foo.html, and are correctly routed by the app. Without some kind of advanced on-the-fly HTML-rewriting proxy, this wouldn’t really be possible without the app doing it itself.

It should work like that, yes. If you redir /synology synology.myddnsname.com and also proxy / mynasname:5000 under the synology.myddnsname.com label, you can access it from synology.myddnsname.com without the port.

Honestly - I just connect to my public-facing website instead of the server’s local IP/hostname even when I’m right next to / working on the server. There’s no issues connecting to it locally. If I needed to, like if I had no DNS resolution for some reason, I would just use myserver:8989 to directly get Sonarr, etc, without the Caddy middleman, like you do.

Setting a header with a minus sign in front removes that header if it exists. By default Caddy sends Caddy in the Server header, and if it’s proxying transparently to another webserver, it will send that webserver’s Server header instead. By setting -Server, it’s telling Caddy never to send that header at all - which is essentially security through obscurity. The idea is that if it’s not blatantly obvious which web server you’re using, an attacker might not know which webserver-specific exploits they can use on you.
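To see the effect, you could inspect the response headers from outside (hypothetical domain; requires the site to be reachable):

```shell
# Fetch only the headers; with -Server set, no Server header should appear
curl -sI https://example.com | grep -i '^server:' \
  || echo "no Server header sent"
```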

I don’t consider it to be very important.

includeSubDomains tells a viewer that any subdomains of this website should also be treated as having HSTS enabled. preload essentially indicates your willingness to be included in the HSTS preload list, which is distributed to most major browsers, so that clients know your site has HSTS before even visiting once. They both increase the effectiveness of HSTS on your site.

This happens when you try to create a container with a volume mapping, and try to map a HOST-side folder to a CONTAINER-side file. This fails because Docker can’t mount a directory as a file on the inside of the container. It also happens if the volume mapped file doesn’t exist on the HOST yet, because Docker will assume it’s supposed to be a folder and helpfully create it for you, and we’re back at the “you can’t make a folder into a file” error (a.k.a. what you’re seeing).

The solution is to go through all your volume mappings and ensure that any file mappings match up with real, existing files on the HOST side before starting the container.
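A quick pre-flight sketch of that check (the path is illustrative; substitute each HOST-side file you intend to volume-map):

```shell
#!/bin/sh
# If the HOST path is missing when the container starts, Docker creates it
# as a DIRECTORY, and a directory can't be mounted onto a file inside the
# container -- hence "not a directory". So create the file first.
HOST_FILE=/tmp/demo/Caddyfile   # illustrative path, not from this thread

mkdir -p "$(dirname "$HOST_FILE")"      # parent folder must exist
[ -e "$HOST_FILE" ] || touch "$HOST_FILE"  # create an empty FILE if absent

if [ -f "$HOST_FILE" ]; then
  echo "ok: regular file, safe to volume-map"
else
  echo "problem: $HOST_FILE is not a regular file" >&2
fi
```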

1 Like

I wouldn’t even bother with separate logging, or subdomains for apps that do have base URL options. Sonarr, Radarr, NZBget all function quite well with a base URL set. Here’s how I’d do it.

Just note that in the below I’m logging to stdout because I let Docker’s logging driver handle the work (size, rotation, etc).

# common.conf
gzip
log stdout
errors stdout
header / {
  Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
  X-XSS-Protection "1; mode=block"
  X-Content-Type-Options "nosniff"
  X-Frame-Options "DENY"
  -Server
}
# Caddyfile
example.com {
  import common.conf
  basicauth / user pass
  redir /synology synology.example.com

  proxy /sonarr 127.0.0.1:8989 {
    transparent
  }

  proxy /radarr 127.0.0.1:7878 {
    transparent
  }

  proxy /nzbget 127.0.0.1:6789 {
    transparent
  }
}

synology.example.com {
  import common.conf
  basicauth / admin pass
  proxy / 127.0.0.1:5000 {
    transparent
  }
}

Keeping it nice and simple! Bonus: Having all your apps in subfolders of the same site means passing basicauth for one of the apps effectively authorises all of them so you don’t need to log in again for each app.

1 Like

Thank you once again. I think it may be related to my log nonsense. I will go ahead and use yours, I like simple too. So it still makes sense to keep two files (caddyfile and common.conf)?

I was planning to use tls still, so do I just add that back before import?

example.com {
tls myemail@email.com
import common.conf
basicauth / user pass
redir /synology synology.example.com

I was going to strip off those extra things (includeSubDomains; preload) and -Server too.

Oh yeah, do that. I forgot I set -email letsencrypt@example.com on the command, but you’d like to keep the default command.

Put them in common.conf and save writing it twice.

1 Like

Still no dice, same error. I’ve really simplified the setup and mounts.

I am using the two files with your contents. I plugged in my MyDDNSName.DDNSPROVIDER.com instead of example.com. I added:
tls myemail

I only have two mounts. Synology forces me to select actual existing folders for the first File/Folder column; I don’t type that in. I do have to type in the right-side column.

So I navigate and select add folder:
/apps/configs/caddy/Caddyfile
Then type in /etc/Caddyfile
I’ve placed the two files Caddyfile and common.conf in the folder /apps/configs/caddy/Caddyfile

I then add the second mapping.
I navigate and select add folder:
/apps/configs/caddy/certificates
Then type in /root/.caddy

I am looking through your page on Docker Hub, trying to see what I may be missing.

here are screenshots: http://imgur.com/a/Zbg2K

Which error? This one?

This is because you’re mounting a folder from HOST (/apps/configs/caddy/Caddyfile/, which contains Caddyfile and common.conf) to a file in the CONTAINER (/etc/Caddyfile, a text file with actual Caddy config).

Not only will this provoke the “not a directory” error, but it won’t match up with your default Execution Command (/usr/bin/caddy --conf /etc/Caddyfile --log stdout) because /etc/Caddyfile/Caddyfile != /etc/Caddyfile.

Instead, do three mounts:

HOST → CONTAINER
/apps/configs/caddy/Caddyfile/Caddyfile → /etc/Caddyfile
/apps/configs/caddy/Caddyfile/common.conf → /etc/common.conf
/apps/configs/caddy/certificates → /root/.caddy

Just in case you’re tempted to mount /apps/configs/caddy/Caddyfile to /etc/ to do this in a single mount mapping (to try and reduce the number of mappings and simplify things), this will break the container in horrible ways (people have done this before).

Also, a minor correction to the Caddyfile example I gave - change the import directive to point to /etc/common.conf instead (referring to it by absolute path within the container). Otherwise it will look for common.conf in the working directory, which is /srv in the abiosoft/caddy container (and it’s not in /srv, it’s in /etc).
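With that correction, the top of the Caddyfile as seen inside the container would look like this (your real site name goes where example.com is):

```
# /etc/Caddyfile (inside the container)
example.com {
  tls myemail@email.com
  import /etc/common.conf  # absolute path; a bare "import common.conf" resolves from /srv
  ...
}
```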

1 Like

Thank you! Yes, that same error. I will try right now before I go to bed! It definitely crossed my mind, and I was wondering if there was some confusion around the file-vs-folder Caddyfile.