After reviewing "Anyone using Caddy on Docker on a Synology NAS (as Reverse Proxy only)?" and trying my best to implement Caddy via Docker on my own, I am hoping someone can help me determine where I'm going wrong with my setup.
[cc: @Whitestrake, @abiosoft: I have seen many of your posts where you have helped others through similar issues, so I would be most grateful if you would review my configuration and let me know your suggestions.]
Problem: I cannot seem to access localhost:2015, and external access to my services via subdomains is refused, with the following message in the Caddy logs: `http: TLS handshake error from 172.17.0.1:39462: tls: first record does not look like a TLS handshake`.
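As I understand it, that error means Caddy's TLS listener received plain HTTP bytes rather than a TLS ClientHello, which makes me suspect the 80/443 chain is crossed somewhere between the router and the container. A quick check I can run from inside the LAN (the ports are the ones published in the `docker-compose.yml` below):

```
curl -v  http://nas-internal-ip:8880/    # should reach Caddy's HTTP listener
curl -vk https://nas-internal-ip:4443/   # should complete a TLS handshake; if this fails
                                         # the same way, the port mapping is crossed
```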
Objective: Set up secure external access via mydomain.tld to a few services hosted on my personal server.
Current Setup:

- NAS: Synology DS718+ running Docker.
- Connection: Actiontec G1100 router.
- External IP: officially dynamic, but it has not changed in years.
- Internal IP: nas-internal-ip, set as static through the router interface.
- Port forwarding: ports 80 and 443 are opened via the router interface and forwarded to intermediate ports on the NAS (e.g., :5555, :5556) that are published by the Docker container hosting Caddy; Docker then maps those ports back to 80/443 inside the Caddy container (sketched just after this list).
- DNS: currently using Cloudflare name servers because of Cloudflare's compatibility with Let's Encrypt (but I am open to any other solution).
- Gitea: runs in its own Docker container on the NAS and is accessed internally on port 33000. Since the Gitea instance holds proprietary coding projects, I want any external access to be as secure as possible, i.e., HTTPS.
- Plex: runs directly on the NAS; external access was configured through the Plex web interface, which automatically set up an external-facing port that forwards to the internal port where I access Plex. This seems to work fine.
- Caddy: has been difficult to configure, even using the `abiosoft/caddy:latest` image, since it does not include the Cloudflare plugin. Attempts to build the `caddy` executable locally have succeeded, but integrating it into a base-image container (e.g., `alpine:3.8`) hasn't worked (a possible plugin-build alternative is sketched after the `docker-compose.yml` below). Note that since I do not intend to serve any content to the public, I have not included the `php` packages referenced in the `abiosoft/caddy` documentation.
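To make the intended traffic chain concrete (using the ports actually published in the `docker-compose.yml` below, rather than the example numbers above):

```
Internet :80  -> router forward -> nas-internal-ip:8880 -> Docker -> Caddy container :80
Internet :443 -> router forward -> nas-internal-ip:4443 -> Docker -> Caddy container :443
```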
I have gotten to the point where I can see the `http: TLS handshake error from 172.17.0.1:39462: tls: first record does not look like a TLS handshake` message in the logs using the following configuration files for Caddy:
`docker-compose.yml`

```
version: '3'
services:
  caddy:
    image: abiosoft/caddy:latest
    command: ["-log", "stdout", "-agree",
              "-email", "email@gmail.com",
              "-conf", "/etc/Caddyfile"]
    # Once working, restart will be changed to `always`; for now I am trying to
    # guard against Let's Encrypt rate-limiting issues during setup.
    restart: unless-stopped
    ports:
      - "8880:80"   # Router forwards external :80 to nas-internal-ip:8880, which Docker maps to :80 in the container
      - "4443:443"  # Likewise, external :443 is forwarded to nas-internal-ip:4443 and mapped to :443 in the container
    volumes:
      - /volume1/docker/caddy/config/Caddyfile:/etc/Caddyfile
      - /volume1/docker/caddy/config/common.conf:/etc/common.conf
      - /volume1/docker/caddy/certs:/root/.caddy
      - /volume1/docker/caddy/public:/srv   # I have a simple index.html page here to serve.
      - /volume1/docker/gitea:/apps         # I believe that I need my /docker/gitea path mapped
                                            # in order to access Gitea securely through Caddy.
```
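On the plugin problem from the setup list: rather than hand-building the `caddy` binary and grafting it onto `alpine:3.8`, my reading of the abiosoft/caddy-docker README is that plugins can be baked in at image-build time via a `plugins` build argument (treat the exact invocation as my unverified understanding):

```
# Build a local image with the Cloudflare DNS plugin compiled in,
# using the abiosoft/caddy-docker Git repository as the build context:
docker build --build-arg plugins=cloudflare \
    -t caddy-cloudflare:local \
    https://github.com/abiosoft/caddy-docker.git
```

The `image:` line above would then point at `caddy-cloudflare:local` instead of `abiosoft/caddy:latest`.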
`Caddyfile`

```
# Caddyfile v0.1-20180914
# Server:    /volume1/docker/caddy/config/Caddyfile
# Container: /etc/Caddyfile
# Objective: To provide a secure (i.e., https) connection to git.mydomain.xyz for my use with
#            Working Copy and other iOS applications.
# The working directory in the `abiosoft/caddy` image is `/srv`.

mydomain.xyz,
mydomain.xyz:8880,   # :8880 and :4443 are included to ensure that Let's Encrypt issues the
mydomain.xyz:4443 {  # certificate for the correct URL in light of the port forward from
                     # 80/443 that takes place through the router.
    root /           # I'm not certain what this line is saying; since /root is not mapped to
                     # anything on the host, I don't know what is intended to be served at
                     # "mydomain.xyz/".
    import /etc/common.conf   # common.conf is intended here to make future provisioning easy(ier).

    redir 301 {          # Forced redirect to www.google.com for attempts to access
        if {path} is /   # mydomain.xyz directly (i.e., no subdomain).
        / www.google.com
    }
    redir 301 /git  https://git.mydomain.xyz    # Should these redirects point to _http_ or _https_?
    redir 301 /plex https://plex.mydomain.xyz
    redir 301 /syno https://syno.mydomain.xyz

    basicauth / [user] [pass] {  # Might be unnecessary; included because I want to be sure that
        /gitea                   # access to mydomain.xyz:8880 will not provide open access to my
        /plex                    # Gitea data or to the Synology DSM login page. This would be a
        /syno                    # buffer requiring login before the redirect to the subdomain.
    }
}

syno.mydomain.xyz {          # Synology DSM
    root /                   # Should this block cover syno.mydomain.xyz AND nas-internal-ip:5000,
    import /etc/common.conf  # since the directives are the same?
    proxy / nas-internal-ip:5000 {
        transparent
    }
}

git.mydomain.xyz {           # Gitea
    root /
    import /etc/common.conf
    proxy / nas-internal-ip:33000 {
        transparent
    }
}

plex.mydomain.xyz {          # Plex
    root /
    import /etc/common.conf
    proxy / nas-internal-ip:32400 {
        transparent
    }
}

localhost {                   # Is this even necessary? My understanding from Let's Encrypt is that
    root /srv                 # I can't serve https locally unless I use a self-signed certificate
    import /etc/common.conf   # (a different project for another day). If provisioning localhost is
                              # required, should I include nas-internal-ip and the host identifier
                              # as well as localhost? Should any be preceded by http/https?
    proxy / nas-internal-ip:5000 {   # Synology DSM
        transparent
    }
    proxy / nas-internal-ip:33000 {  # Gitea. Since Gitea is Dockerized in its own container, should
        transparent                  # I use its container name instead of nas-internal-ip? That would
    }                                # make future setup easier if I ever had to change nas-internal-ip.
    proxy / nas-internal-ip:32400 {  # Plex. Assuming this is fine as is; all settings for Plex (which
        transparent                  # Plex set on its own) seem to work without issue.
    }
}
```
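On the container-name question in the `localhost` block: my understanding is that if the Gitea container and Caddy share a Docker network (e.g., both services defined in the same `docker-compose.yml`, or attached to the same user-defined network), Compose's internal DNS lets Caddy proxy by service name, which would survive a change of nas-internal-ip. A sketch, assuming the service is named `gitea` and listens on 3000 inside its container (Gitea's default; my 33000 may be a published host port rather than the container port):

```
git.mydomain.xyz {
    import /etc/common.conf
    proxy / gitea:3000 {   # `gitea` resolves via Docker's network-internal DNS;
        transparent        # 3000 is the assumed in-container port, not the host port
    }
}
```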
`common.conf`

```
# common.conf v0.2-20180916
# Server:    /volume1/docker/caddy/config/common.conf
# Container: /etc/common.conf
# Objective: To minimize repetitive blocks within the Caddyfile where the settings for a given
#            service/proxy are the same. In each such instance, the Caddyfile uses
#            `import /etc/common.conf` instead.

tls {
    dns cloudflare       # Initially this was `dns cloudflare [MY_CLOUDFLARE_EMAIL] [MY_CLOUDFLARE_API]`,
}                        # but that was throwing errors, so I removed the API key and placed the email
tls email@gmail.com      # in a separate `tls` directive. Is this right?

gzip

# log stdout     # I prefer to have the logs/errors written to file and preserved for a period of time
# errors stdout  # once this is up and running, so I would like to test with the log/errors directives
                 # below instead of stdout.
log /logs/access.log {   # I was getting a `no such file or directory` error; how do I create the /logs
    rotate_size 1        # path in the container before the container is deployed successfully?
    rotate_age 7         # (One possible fix is sketched after this file.)
    rotate_keep 2
}
errors {
    log /logs/error.log {
        rotate_size 1
        rotate_age 7
        rotate_keep 2
    }
}

header / {
    Strict-Transport-Security "max-age=31536000; includeSubDomains"
    X-XSS-Protection "1; mode=block"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "DENY"
    -Server
}
```
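On the `/logs` question embedded above: as I understand it, the path just has to exist inside the container before Caddy opens the log files, and the usual Docker answer is a bind mount, since the daemon creates a missing host directory for bind mounts at container start. A sketch for the `docker-compose.yml` (the host path is my guess):

```
    volumes:
      # ...existing mounts from above, plus:
      - /volume1/docker/caddy/logs:/logs   # makes /logs exist inside the container
                                           # and preserves the logs on the NAS
```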
Sorry for the long-winded post; hopefully I have provided enough information that someone will be able to help me understand what is amiss in my configuration. I am very eager to get this up and running.