Caddy via Docker as reverse-proxy to Gitea instance on private server


(Nicholas) #1

After reviewing “Anyone using Caddy on Docker on a Synology NAS (as Reverse Proxy only)?” and trying my best to implement Caddy via Docker on my own, I am hoping someone can help me determine where I’m going wrong with my setup.
[cc: @Whitestrake, @abiosoft: I have seen many of your posts where you have helped others through similar issues, so I would be most grateful if you would review my configuration and let me know your suggestions.]

Problem: I cannot seem to access localhost:2015, and external access to my services via subdomains is refused, with the following message in the Caddy logs: http: TLS handshake error from 172.17.0.1:39462: tls: first record does not look like a TLS handshake.

Objective: Set up secure external access via mydomain.tld to a few services hosted on my personal server.

Current Setup:

  • NAS: Synology DS 718+ running Docker

  • Connection: Actiontec G1100 router

  • External IP: officially dynamic but has not changed in years

  • Internal IP: nas-internal-ip set as static through router interface

  • Port forwarding: have opened nas-internal-ip:80 and :443 (via router interface), which forward to intermediate ports (e.g., :5555, :5556) that are exposed in the Docker container hosting Caddy. Docker handles the forwarding of the ports to 80/443 within the Caddy container.

  • DNS: currently using Cloudflare name servers because of Cloudflare’s compatibility with LetsEncrypt (but open to any other solution)

  • Gitea runs in a Docker container on the NAS and is accessed internally at port 33000. Since the Gitea instance is for proprietary coding projects, I want to ensure that any external access is as secure as possible, i.e., https.

  • Plex runs directly on the NAS; external access was configured through the Plex web interface, which automatically set up an external-facing port forwarded to the internal port where I access Plex. This seems to work fine.

  • Caddy has been difficult to configure, even using the abiosoft/caddy:latest image, since it does not have the Cloudflare plugin installed. Attempts to build the caddy executable locally have succeeded, but integrating with a base-image container (e.g., alpine:3.8) hasn’t worked. Note that since I do not intend to serve any content to the public, I have not included the php packages referenced in the abiosoft/caddy documentation.

I have gotten to the point where I can see the http: TLS handshake error from 172.17.0.1:39462: tls: first record does not look like a TLS handshake message in the logs using the following configuration files for Caddy:

docker-compose.yml
version: '3'
services:
  caddy:
    image: abiosoft/caddy:latest          
    command: ["-log", "stdout", "-agree", 
      "-email", "email@gmail.com",      
      "-conf", "/etc/Caddyfile"]          
    restart: unless-stopped   # Once working, this will be changed to always but trying to control against rate-limiting 
                              # issues during setup.
    ports:                                
      - "8880:80"   # Router forwards :80 to nas-local-ip:8881 in order to hit localhost:80 within the container
      - "4443:443"  # Ibid., but router forwards :443 to nas-local-ip:4444 in order to hit localhost:443
    volumes:
      - /volume1/docker/caddy/config/Caddyfile:/etc/Caddyfile           
      - /volume1/docker/caddy/config/common.conf:/etc/common.conf   
      - /volume1/docker/caddy/certs:/root/.caddy                              
      - /volume1/docker/caddy/public:/srv    # I have a simple index.html page here to serve.
      - /volume1/docker/gitea:/apps          # I believe that I need to have my /docker/gitea path
                                             #  mapped in order to access gitea securely through caddy
Caddyfile
# Caddyfile v0.1-20180914
# server: /volume1/docker/caddy/config/Caddyfile
# container: /etc/Caddyfile
# Objective: To provide a secure (i.e., https) connection to git.mydomain.xyz for my use with
#            Working Copy and other iOS applications.
# The working directory in the abiosoft:caddy image is `/srv`.

mydomain.xyz, 
mydomain.xyz:8880,   # :8880 and :4443 included to ensure that LetsEncrypt issues the certificate
mydomain.xyz:4443 {  # for the correct URL in light of the port-forward from 80/443 that takes place
                     # through the router

  root /             # I'm not certain what this line is saying; since /root is not mapped to anything
                     # on the host, I don't know what is intended to be served at "mydomain.xyz/"
  
  import /etc/common.conf          # `common.conf` is intended here to make future provisioning easy(ier).

  redir 301 {          # Forced redirect to www.google.com for attempts to access     
    if {path} is /     # mydomain.xyz (i.e., no subdomain)
    / www.google.com
  }

  redir 301 /git    https://git.mydomain.xyz        # Should the redirect point to _http_ or _https_?
  redir 301 /plex  https://plex.mydomain.xyz
  redir 301 /syno  https://syno.mydomain.xyz

  basicauth / [user] [pass] {          # Might be unnecessary; included because I want to be sure that access to 
   /gitea                              # mydomain.xyz:8880 will not provide open access to my Gitea data or to
   /plex                               # the Synology DSM login page. This would be a buffer requiring login before
   /syno                               # the redirect to the subdomain.
  }
}

syno.mydomain.xyz {             # Synology DSM
  root /                        # Should this be for syno.mydomain.xyz AND nas-local-ip:5000 since the
  import /etc/common.conf       # directives are the same?
  proxy / nas-local-ip:5000 {
    transparent
  }
}

git.mydomain.xyz {                   # Gitea
  root /                                   
  import /etc/common.conf         
  proxy / nas-local-ip:33000 {
    transparent
  }
}

plex.mydomain.xyz {                   # Plex
  root /                           
  import /etc/common.conf
  proxy / nas-local-ip:32400 {
    transparent
  }
}

localhost {                        # Is this even necessary? My understanding from LetsEncrypt is that I can't serve
  root /srv                        # https locally unless I use a self-signed certificate (different project for another day).
  import /etc/common.conf          # If provisioning localhost is required, should I include _nas-internal-ip_ and 
                                   # host-identifier as well as localhost? Should any be preceded by http/https?
  
  proxy / nas-local-ip:5000 {         # Synology DSM
    transparent                           
  }

  proxy / nas-local-ip:33000 {        # Gitea
    transparent                       # Since Gitea is Dockerized in its own container, should I use its container name instead
  }                                   # nas-local-ip? This would make future setup easier if for some reason I had to change
                                      # nas-local-ip.

  proxy / nas-local-ip:32400 {        # Plex
    transparent                       # Assuming this is fine as is; all settings for Plex (which Plex set on its own) seem to work
  }                                   # without issue.
}
common.conf
# common.conf v0.2-20180916
# Server: /volume1/docker/caddy/config/common.conf
# Container: /etc/common.conf
# Objective: To minimize repetitive code blocks within the Caddyfile where the settings for a given service / proxy are the same.
#            Instead, in each such instance we will use `import common.conf` within `Caddyfile`.
# 
tls {
  dns cloudflare              # Initially this was `dns cloudflare [MY_CLOUDFLARE_EMAIL] [MY_CLOUDFLARE_API]`
}                             # but this was throwing errors, so I removed the API key and placed the email in a separate

tls email@gmail.com           # `tls` directive. Is this right?

gzip

# log stdout                  # I prefer to have the logs/errors written to file and preserved for a period of time once this is up and running,
# errors stdout               # so I would like to test with the log/error directives below instead of `stdout`.

log /logs/access.log {        # I was getting a `no such file/directory` error; how do I create the /logs path in the container
  rotate_size 1               # before the container is deployed successfully?
  rotate_age  7
  rotate_keep 2
}

errors {
  log /logs/error.log {
    rotate_size 1
    rotate_age  7
    rotate_keep 2
  }
}

header / {
  Strict-Transport-Security "max-age=31536000; includeSubDomains"
  X-XSS-Protection "1; mode=block"
  X-Content-Type-Options "nosniff"
  X-Frame-Options "DENY"
  -Server
}

Sorry for the long-winded post; hopefully I have provided enough information that someone will be able to help me understand what is amiss in my configuration. I am very eager to get this up and running.


(Matthew Fay) #2

Hi @njm2112,

I’m gonna go through this from top to bottom, quoting and offering some replies as I go.

This (most commonly?) happens when a client attempts to negotiate HTTP to a HTTPS endpoint. HTTPS is HTTP, but it happens after a TLS connection is established. Trying to talk HTTP to a HTTPS port is jumping the gun, hence first record does not look like a TLS handshake.
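A quick way to see why the server complains, sketched in shell (no Caddy needed): the first byte of every TLS connection is a record content-type byte, 0x16 for a handshake, whereas a plaintext HTTP request starts with ordinary ASCII text.

```shell
# First byte of a plaintext HTTP request: 'G' from "GET" = 0x47
printf 'GET / HTTP/1.1\r\n' | head -c1 | od -An -tx1
# A TLS connection must begin with a record of content type 0x16
# (handshake); 0x47 is not 0x16, so Go's TLS listener rejects it with
# "first record does not look like a TLS handshake".
```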

This all looks good and makes sense to me. Seems well documented.

Abiosoft has an excellent method available to build a Caddy container with the required plugins. Here’s a current working example from my homelab:

docker-compose.yml Caddy service
  caddy:
    build:
      context: github.com/abiosoft/caddy-docker.git
      args:
        - plugins=git,cloudflare,jwt,login,filter
    command: ["-log", "stdout", "-agree",
      "-email", "letsencrypt@whitestrake.net",
      "-conf", "/etc/caddyfile"]
    ports:
      - 80:80/tcp
      - 443:443/tcp
    environment:
      CLOUDFLARE_EMAIL: [snip]
      CLOUDFLARE_API_KEY: [snip]
    volumes:
      - ./conf/caddy/certs:/root/.caddy
      - ./conf/caddy/caddyfile:/etc/caddyfile
      - ./conf/caddy/.htpasswd:/etc/.htpasswd
      - ./conf/caddy/sites:/srv
    restart: unless-stopped

Basically, use build instead of image, point it straight at the Github, supply the plugin list as the build arg plugins.

docker-compose.yml

Just leave it out. unless-stopped is effectively the same as always - it will smash through a rate limit if Caddy starts exiting on its own. The only difference between the two is that the former won’t come back up at boot if you bring it down with a docker command.

Commented port doesn’t seem to match the bound port here, worth double checking.

The rest looks good.

Caddyfile

Unnecessary. Because you’re doing the port detour, Caddy’s still getting connections on :80 and :443 internally and those connections are being made on the same ports externally. The point of the port detour is to hide the oddball ports from either end, effectively transparently.
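Assuming the host ports actually bound in the Compose file (8880 and 4443 — the comments there disagree, as noted), the detour looks like:

```
client ──► WAN :80  ──► router ──► nas-internal-ip:8880 ──► Docker ──► caddy container :80
client ──► WAN :443 ──► router ──► nas-internal-ip:4443 ──► Docker ──► caddy container :443
```

From Caddy's perspective, and from the client's, everything happens on :80 and :443.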

VERY BAD

The slash here refers to the root of the entire container’s filesystem - which contains the certs, the site, and your Caddy configuration, mapped from your host. Under certain circumstances, an attacker could craft a URI to retrieve these and steal them.

Always limit your web root to something sane and safe - /srv, or /var/www/html, or some empty folder somewhere if you’re not serving static files.

You can have a catch-all redirect as well as more specific ones. With almost all things in the Caddyfile, the longest path match is chosen. Also, yes, redirect to HTTPS if you’re planning on having Caddy handle all your sites with Automatic HTTPS; it’s not terrible to redirect to HTTP, but your visitors will get double-redirected. I’d do it like this:

redir 301 {
  # Google redirects to HTTPS, might as
  # well save visitors the double redirect
  / https://www.google.com/

  /git  https://git.mydomain.xyz
  /plex https://plex.mydomain.xyz
  /syno https://syno.mydomain.xyz
}

Nope. Caddy isn’t taking requests for nas-local-ip:5000, the NAS is. Site label is for what you want Caddy to respond to.

Pretty useless unless you’re browsing to Caddy from within Caddy’s container. Anything else (including the Docker host) is not localhost (unless you set net=host for the Caddy Docker service). You can scrap this whole section, or change it to some other hostname.

You can serve HTTPS locally with a self-signed certificate, yes. My advice: don’t provision for localhost, just rely on the (validated) real domain certificates. Hairpin NAT is the way to go - it’ll have internal requests for those domains treated as though they came from the external network, and the port detouring will work normally.

Also, you’ve got a number of catch-all proxies in this block. Only the first one matters, because they all are always valid, and Caddy has to pick one, so the latter ones are just dead weight in the Caddyfile.
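If you did want multiple proxies in one site definition, each would need a distinct base path so the longest-path-match rule can choose between them — a sketch, with the hostname and paths purely illustrative:

```
internal.mydomain.xyz {
  proxy /git  nas-local-ip:33000 {   # requests under /git go to Gitea
    transparent
  }
  proxy /plex nas-local-ip:32400 {   # requests under /plex go to Plex
    transparent
  }
  proxy /     nas-local-ip:5000 {    # catch-all for everything else (DSM)
    transparent
  }
}
```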

common.conf

Looks good. -Server isn’t as effective as you might like; a request for any hostname (including an invalid one like, literally, foo.bar) that doesn’t import common.conf will leak the Server header. You can solve this with a catch-all:

http://, https:// {
  tls self_signed
  header / -Server
}

But it can’t, obviously, have a valid certificate, and security through obscurity is mostly bunk anyway. The other headers are a good idea, though.


I think that’s everything. Hit me back with further questions, etc


(Nicholas) #3

@whitestrake: this is amazing; thank you so much for your help!

Regarding the first record does not look like a TLS handshake:

quoted comment from @Whitestrake

This (most commonly?) happens when a client attempts to negotiate HTTP to a HTTPS endpoint. HTTPS is HTTP, but it happens after a TLS connection is established. Trying to talk HTTP to a HTTPS port is jumping the gun, hence first record does not look like a TLS handshake .

From your explanation, it sounds like this is happening because mydomain.xyz is resolving to my external IP but being accessed over https in the first instance. Right now I have a CNAME record that points mydomain.xyz to my external IP, and then four A records for www plus the three subdomains for the services I want to access remotely. The A records, however, all point to the external IP and do not allow me to include a port designation, so I have also set up page forwarding, with each subdomain pointing to the appropriate port at https://my-external-ip:port/. I will have to try adjusting this to http and see if that resolves the error.

I think I’ve adapted my config files to your comments and have only a few points of clarification if you wouldn’t mind.

docker-build.yml
version: '3'

services:
  caddy:
    build:
      context: github.com/abiosoft/caddy-docker.git
      args:
        - plugins=git,cloudflare,jwt,login,filter,cors,realip,filemanager,cache,expires
    command: ["-log", "stdout", "-agree",
      "-email", "letsencrypt@mydomain.xyz",
      "-conf", "/etc/Caddyfile"]
    ports:
      - 8881:80/tcp   # Router forwards :80 to nas-local-ip:8881 in order to hit localhost:80 within the container
      - 4444:443/tcp  # Ibid., but router forwards :443 to nas-local-ip:4444 in order to hit localhost:443
    environment:
      CLOUDFLARE_EMAIL: letsencrypt@mydomain.xyz
      CLOUDFLARE_API_KEY: [snip]
    volumes:
      - /volume1/docker/caddy/config/Caddyfile:/etc/Caddyfile
      - /volume1/docker/caddy/config/common.conf:/etc/common.conf
      - /volume1/docker/caddy/certs:/root/.caddy
      - /volume1/docker/caddy/public:/srv                     # This is where I'll store a static `index.html` page (and nothing else).
      - /volume1/docker/gitea:/apps                           # Necessary? Or will I be able to access my Gitea container using the
                                                              # container name since they are both in the bridge network? Or are they?
#     - /volume1/docker/caddy/config/.htpasswd:/etc/.htpasswd # You had this in your Caddyfile but I am not sure what this does.
  • As a general matter, in terms of building for Caddy, does the ‘version’ declared at the outset make any difference? (I realize this is a Docker-specific question but I am asking in the context of how it impacts building the Caddy container, if you happen to know.)

  • I added a few of the plugins that are standard in the pre-packaged abiosoft/caddy:latest image in case they are not standard in the context of building from source through the git URL. Is this necessary?

  • Out of curiosity, why do we need to declare the email address both for the command and then separately as an environment keypair? Am I mistaken that the flags in the command block are passed at the same time that the environment variables are passed, i.e., within the Docker container at the time of executing the caddy binary?

  • Thanks for catching the inconsistency in my ports block; this is adjusted now. I also added /tcp to each one; is this for “clean code”/stylistic purposes, or does it add something to the declaration of ports to expose/listen?

  • Regarding the volumes block:

    • Is it necessary to include my docker/ path in order to have access to my Gitea repos externally? Since the Gitea host is a separate container and I am using Caddy only to reverse-proxy to my server, shouldn’t I be able to omit this and then access my Gitea repos normally since I am declaring a proxy to the subdomain in Caddyfile? Plus, isn’t the Caddy container going to be on the bridge network (where the Gitea container is), and thus access between the two is available that way?

    • I don’t have a .htpasswd file but I see that you are mounting this in your config. What does this do? I vaguely remember reading somewhere that Docker has deprecated (or plans to) referencing .passwd when loading. (I could be thinking of something else; I can’t find the reference to this.)

  • In terms of execution, should I run this as docker-compose -f /path/to/docker-compose.yml run or do I use the up command?

Caddyfile
mydomain.xyz {
  root /srv                   # Must point to something other than `/` to avoid exposing
  import /etc/common.conf     # container's complete filesystem

# basicauth / [user] [password] {
#   /git
#   /plex
#   /syno
#    git.mydomain.xyz
# }

redir 301 {
  / https://www.google.com/   # Redirect from mydomain.xyz to Google with https since Google
                              # automatically redirects http --> https anyway.

  /git  https://git.mydomain.xyz
  /plex https://plex.mydomain.xyz
  /syno https://syno.mydomain.xyz
  }
}

syno.mydomain.xyz {             # Synology
  root /
  import /etc/common.conf
  proxy / nas-local-ip:5000 {
    transparent
  }
}

git.mydomain.xyz {              # Gitea
  root /
  import /etc/common.conf
  proxy / nas-local-ip:33000 {
    transparent
  }
}

plex.mydomain.xyz {             # Plex
  root /
  import /etc/common.conf
  proxy / nas-local-ip:32400 {
    transparent
  }
}
  • Thanks for pointing out the potential exposure of using root /. I’ve adjusted that to /srv, which I think is the default working directory on the Docker container. As declared in docker-build.yml, I have \srv mapped to a path on my server that holds index.html. Should I be mapping \srv\subfolder to that path instead so that the root is not exposing the working directory, but just a subfolder of it?

  • Does it hurt to include a basicauth block in between the import /etc/common.conf and the redir 301 instructions? I have included what I propose above but commented it out for now. The idea is that if http://www.subdomain.xyz/subdomain were requested from an outside IP, a correct user/pass would be required before the redirection to https://subdomain.mydomain.xyz (which would be a Gitea login page anyway). How would I setup the basicauth to require correct user/pass in the case of a request for http://subdomain.mydomain.xyz/ in the first instance? Since all of the directives seem to operate off of the mydomain.xyz as the root, would I just include subdomain.mydomain.xyz as in the proposed block?

  • For the Gitea subdomain, what does root / expose here, since the Gitea host is a self-contained Docker instance? That container has volume mappings to /volume1/docker/gitea/data which contains all of the required git files and the repos. Do I need to have a separate (empty) path on the Gitea container to which Caddy will direct root /?

  • Since the Synology DSM and Plex are not docker-ized and live on the NAS itself, I want to be careful to provide myself with the functionality to use the DSM/Plex web interfaces externally but not with any access to the filesystem. How does this affect what I use for root / for the respective subdomains?

  • I scrapped the entire localhost block, as suggested.

quoted comment from @Whitestrake

Pretty useless unless you’re browsing to Caddy from within Caddy’s container. Anything else (including the Docker host) is not localhost (unless you set net=host for the Caddy Docker service). You can scrap this whole section, or change it to some other hostname.

I assume this means that my access to the various services when within my own
LAN will just not pass through Caddy and I would just access these using nas-internal-ip:port (which won’t be served using https unless/until I get a self-signed certificate, etc.) Would you confirm whether I understand correctly the access method from within my LAN?

common.conf
tls {
  dns cloudflare
  email@mydomain.xyz
}

gzip

# log stdout
# errors stdout

log /logs/access.log {        # Do I have to create this path in the Caddy container?
  rotate_size 1               # If so, how do I do so before the container runs?
  rotate_age  7               # Also, should I map `/logs` to a NAS path for my access?
  rotate_keep 2               # Keep log files for 7 days, at most 2 log files
}

errors {
  log /logs/error.log {       # Change path syntax for your OS or your preferred location.
    rotate_size 1             # Set max size 1 MB
    rotate_age  7             # Keep log files for 7 days
    rotate_keep 2             # Keep at most 2 log files
  }
}

header / {
  http://, https:// {
    tls self_signed
    header / -Server
  }
  Strict-Transport-Security "max-age=31536000; includeSubDomains"
  X-XSS-Protection "1; mode=block"
  X-Frame-Options "DENY"
  X-Content-Type-Options "nosniff"
# -Server                     # Deactivated in favor of the `http://, https://` sub-block above.
}
  • I combined the dns cloudflare and statement of my email address into one tls block; do these two need to be separate?

  • Ideally I would like to have logs written to file once this config is in production; is it advisable to activate the log and errors directives only once I have the container up and running? I am assuming that I cannot create the necessary /logs path before I have the container functioning properly in the first place, and then once it is, I can do something like docker exec and include a mkdir -p path/to/logs command. Is there some other recommended way?

    • Related question: since the log and error directives are in common.conf, which is imported into each site-label section in Caddyfile, do I need separate paths for each site-label or will Caddy write the logs for all to a single file?
  • I added the http://, https:// sub-block to header as you suggested, but I am a bit confused about what this is doing. Was this intended not for common.conf but for Caddyfile in the event that I were to leave localhost (changing the hostname) in Caddyfile? If not, what confuses me is that the header suggests that regardless of the access protocol, i.e., http or https, Caddy should send tls self_signed in the header. If I understood correctly your explanation regarding the inutility of localhost in Caddyfile, the self-signed header is the way to deal with security when accessing LAN resources from inside the LAN. Maybe I am completely misunderstanding?

quoted comment from @Whitestrake

Looks good. -Server isn’t as effective as you might like; a request for any hostname (including an invalid one like, literally, foo.bar ) that doesn’t import common.conf will leak the Server header. You can solve this with a catch-all:

http://, https:// {
 tls self_signed
 header / -Server
}

But it can’t, obviously, have a valid certificate, and security through obscurity is mostly bunk anyway. The other headers are a good idea, though.

  • If the http://, https:// sub-block is not intended for common.conf, I think I should re-activate -Server for the “security by obscurity” it provides, even if the security it provides is insignificant. What do you think?

Thank you so much, again, for your initial reply and I hope you are able to clarify the remaining points whenever you have the time.

BR


(Matthew Fay) #4

Lots of questions to answer indeed! Here we go:

Not really, I’m afraid. You’re serving HTTPS, but something is trying to request HTTP. It’s like, for example, trying to browse to http://www.google.com:443/. The server’s expecting to see TLS traffic on that port first, but gets an unencrypted HTTP message right out of the gate, which is incorrect.

If it was accessing your external IP over HTTPS, everything would be working fine.

Yikes, this is just gratuitously complicated.

When you say page forwarding, do you mean port forwarding, or redirection, or proxying?

My strong suggestion is to simplify things by only ever exposing standard ports (80 and 443) externally, and proxying subdomains to the correct service internally by their http://internal-ip:port/.

No, no container versioning is involved at all; your build has an implicit latest tag and nothing else.

github.com/abiosoft/caddy-docker.git is just the location of a Dockerfile, it’s like a cheat way to avoid having to download the Dockerfile into the project directory yourself before building it.

As for the Caddy version, Abiosoft’s builder always builds the latest release.

Edit: Actually, I was wrong about that. builder.sh uses the version Docker build arg to tell which version of Caddy to check out and build, see: https://github.com/abiosoft/caddy-docker/blob/6aca2e9ccf3a345b1ae2aab41a51621d1e1e78cf/builder/builder.sh#L7

Yes; the prebuilt container comes with some plugins, but the builder doesn’t include any by default.

Edit: I was wrong about this one, too. If you don’t specify a plugins arg, it reverts to git,filemanager,cors,realip,expires,cache. See: https://github.com/abiosoft/caddy-docker/blob/6aca2e9ccf3a345b1ae2aab41a51621d1e1e78cf/Dockerfile#L7

It’s important to note that these two are entirely separate. I’ve snipped it from my config, because that’s not the LetsEncrypt email, that’s my CloudFlare login; I use that env keypair for Caddy’s DNS validation. The -email flag is for LetsEncrypt to send account specific notices (like non-renewed certificate expiry).

i.e. -email letsencrypt@whitestrake.net, but CLOUDFLARE_EMAIL: some-other-email@whitestrake.net

Specifically, adding /tcp excludes UDP traffic from being passed through on this port.

It should be fine, HTTP(S) is TCP traffic.

Exactly as you say - omit this, Caddy doesn’t need access to the files, it’s just going to talk HTTP to the Gitea container.

Bridge networking does not imply shared volume access, only shared bridged access to the host’s physical network. As above, Caddy doesn’t need access to the files anyway, though.

One fun fact, if those services are both declared in the compose file, they can talk to each other directly by the service name - for example, if Gitea’s service name was git, you could use proxy / git:33000. This means you don’t actually need to expose those ports to the host in Docker at all, except for Caddy itself, if you use this form of reference.
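A sketch of what that could look like in the Compose file (service names, image, and Gitea's internal port here are illustrative, not your actual values):

```yaml
# Both services share the default Compose network, so Caddy can reach
# Gitea by its service name -- no host port mapping required for Gitea.
services:
  caddy:
    build:
      context: github.com/abiosoft/caddy-docker.git
    ports:
      - 80:80/tcp
      - 443:443/tcp
  gitea:
    image: gitea/gitea
    # no ports: entry needed; in the Caddyfile you'd write:
    #   proxy / gitea:3000 { transparent }
```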

I use it for the jwt and login plugins, it’s a backend for user/password storage for authenticating because I don’t like having to re-type my basic auth every time I move to a new subdomain. (Nothing to do with Docker)

I like to docker-compose up on first run, just to watch the logs and make sure everything works as expected. If it all seems good, I will bring it down (^C aka SIGINT) and bring it back up daemonised with docker-compose up -d. If you’re very confident, you can go straight to the latter - just be quick on the draw with docker-compose down if it looks like something’s awry…

Folder/file overlaying in Docker is pretty touchy if you cross the two. Just make sure a folder is mapped to a folder, OR a file is mapped to a file. You can either map index.html directly onto /srv/index.html or you can map a folder containing the index onto /srv.
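In Compose terms, the two safe shapes look like this (paths borrowed from your earlier volumes block):

```yaml
volumes:
  # folder onto folder -- everything in public/ appears under /srv:
  - /volume1/docker/caddy/public:/srv
  # OR file onto file -- map just the index itself:
  # - /volume1/docker/caddy/public/index.html:/srv/index.html
```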

As for not exposing the working directory… there’s no inherent property of the working directory that makes it dangerous to expose - only its contents. As long as it’s just the index file in there, it’s smooth sailing.

I will sometimes divide /srv up if I’m serving multiple sites, so /srv/whitestrake.net, /srv/example.com, etc. and declare root appropriately in each site definition, but if Caddy’s only serving one set of files, that’s unnecessary.

P.S: Don’t let Linux catch you trying to use backslashes (\) instead of slashes (/) for file paths. Backslashes are a Windows thing. In some contexts, they’re equivalent/interchangeable - in others, not so much.

Harmless, but also useless, in my opinion. You don’t need to protect a redirect, you should be protecting the resource at the end of the redirect instead.

Firstly, you should be thinking of protecting subdomain.mydomain.xyz (protocol-agnostic), not only HTTP access to that site. Caddy should be serving this site, e.g. git.mydomain.xyz as you have it in your latest Caddyfile proposal, and you can simply include a basicauth directive in the site definition of that site to protect it. Here’s a simplified example:

example.com {
  redir /foo https://bar.example.com/
}

bar.example.com {
  browse
  basicauth / user pass
}

OK, so strictly speaking, since there’s a catch-all proxy (i.e. proxy / upstream), no requests should ever make it to Caddy’s static file server.

If a request for the Gitea subdomain ever does, for any reason (bug, configuration change, malicious exploit), Caddy will attempt to serve the file at $ROOT + $URI.

Imagine, if you will, that I craft a request like git.mydomain.xyz/root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/git.mydomain.xyz/git.mydomain.xyz.key and your proxy directive fails somehow. I think you can imagine what happens next if your root is configured as /. If it’s set to /srv (or even omitted entirely - it’ll use the running directory of the process in that case, which is /srv), this request fails.

Your root directive does not relate in any way to whatever is upstream - it’s always referencing the local disk of the host ( / container) Caddy is running on. The Gitea container, and its file system and volumes or lack thereof, are totally irrelevant.

Following up from explanation above, just don’t ever use root /. It’s about how Caddy’s static file server determines where the site files for that host are located on Caddy’s filesystem. Just declare it to an empty folder, or alternately, it’s safe to ignore (because in Abiosoft’s containers’ case, it defaults to the usually-empty /srv).

Most of these subdomains won’t ever serve static files (the proxies take care of all the requests before Caddy has to start worrying about serving files off disk), but that doesn’t mean root / is a smart move. It’s useless and potentially massively harmful.

That’s one way. Another way is to literally just browse to your website as though you were external; go straight to the standard HTTPS address as normal. You might need to set up hairpin NAT for this, some consumer router/modems have this behaviour by default.

The -email CLI flag in your Compose file configures the same thing as tls [email], but universally. You can omit your email from this block unless it needs to be separate for individual sites.

That said, I believe, based off the documentation (I’ve never actually needed to do this myself), that the form used to give your email isn’t compatible with the form used to open a subdirective block and specify the DNS validation. So it should be like this:

tls [EMAIL]
tls {
  dns [PROVIDER]
}

I may be wrong about that, though.

There’s no best practice either way. Turn them on beforehand if you like; just delete the logs before you move to production stage, unless you’re happy to have the testing phase data at the start of the logs.

In your Compose file:

volumes:
  - ./logs:/var/log/caddy # or similar

In your Caddyfile:

log /var/log/caddy/access.log
errors /var/log/caddy/errors.log

No exec fiddling required. /var/log comes standard on the file system. You don’t even need to pre-create the ./logs folder (Docker will create it, if it doesn’t already exist, in the Compose project directory). You will probably need sudo access to read these logs, however: Docker creates the directory owned by root, and Caddy runs as root in its own container, so the log files will also be owned by root.

Caddy will write these into a single file.

It might get confusing. You should consider specifying the log format explicitly to sort them out, e.g:

log / /var/log/caddy/access.log "{remote} - {user} [{when}] {host} \"{method} {uri} {proto}\" {status} {size}"

(note the added {host} in the middle there - see log docs for more info https://caddyserver.com/docs/log)

They don’t belong under header, they’re a top level site configuration block. As to its purpose:

When I contact Caddy, and request a site that you haven’t configured Caddy to serve, it will serve a 404 [HOST] not served on this interface. It does this using very basic configuration, which includes sending the Server header. This means, if you’re trying to hide your server software from me, I can figure it out by crafting a nonsense request to provoke this response and determine your server is Caddy.

You can override this behaviour by specifying catch-all sites. That’s what that block does - it’s a full site definition, and any request to Caddy that doesn’t match a longer site will “fall back” to it. Then we strip the Server header there, and now Caddy doesn’t leak that info any more for a nonsense request.

Yes, it does not belong in common.conf. It is not a directive, it’s a top-level Caddyfile configuration, an entire site block in its own right.

Whether or not you leave localhost in has no bearing on what this catch-all site configuration is meant to achieve (preventing leaking the Server header).

Well, tls self_signed has nothing to do with headers. Caddy is smart enough to know that this means using a self-signed certificate when HTTPS is requested, and no certificate otherwise.

Why include this? Well, if we don’t, for a nonsense HTTPS request, Caddy will attempt to satisfy the certificate requirement with a random certificate from its stores. This means I could use those nonsense requests to eventually get a full listing of all the valid certificates your Caddy is configured to serve.

If you care about obscurity enough to strip the Server header, that’s worth preventing, too. (I still protest the practicality of this policy, though.)
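Putting those two pieces together, a catch-all site might look something like this (a sketch of the idea, not a definitive config):

```
http://, https:// {
  tls self_signed   # satisfies nonsense HTTPS requests without revealing real certificates
  header / -Server  # strip the Server header for requests to unconfigured hosts
}
```

Any request that doesn’t match a more specific site definition falls back to this block.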

OK, the response to this question is two-fold.

Firstly, the misunderstanding. Having an HTTP localhost site in the Caddyfile isn’t about security, it’s just non-functional. When you browse to localhost, your client immediately resolves localhost to itself. Whenever you type localhost in your browser, you’re accessing your own computer. When you curl localhost, you’re accessing your own computer.

Since Caddy is in a container, and the container thinks it’s its own computer, it is its own host separate from the NAS, separate from your computer, separate from everything else. There’s only one expected source of localhost requests for any host - itself.

So serving localhost is nonsensical from a container, because it’ll only ever serve those sites (under normal circumstances[*]) if the user / client is trying to access Caddy from the Caddy container itself. That means physically running a browser or HTTP client from inside the container - docker exec -it caddy curl localhost, for example. There’s just no point configuring Caddy for this, you’re not going to get in the habit of accessing your sites through the Docker CLI in a terminal, nobody is, because that would be silly.

[*] The NAS itself is an exception, because ports on the NAS are redirected to ports in the Caddy container. You might be able to request localhost:8881 from the NAS itself and get a response (because that request will make its way to localhost:80 inside the container). Technically you could also craft a request that would work, e.g. curl http://[CADDY-HOST]/ -H "Host:localhost". Neither of these are ever going to be normal usage…


Secondly, security when accessing from an internal network is a different concern to external. When we get down to it, validated HTTPS is meant to be a guarantee of two things:

  1. Your requests are encrypted and can’t be read by anyone other than the server that presented you the certificate
  2. The owner of the certificate was validated by a trusted third party, and is definitely who the certificate claims they are (rather than an impersonator)
    (note I said the owner, not the server, is validated - security is hard, yo. Server is implied to be valid as long as the owner isn’t careless with their keys.)

A self-signed certificate only guarantees the first point, because self-signed certificates naturally can’t be validated by a trusted third party. So we have to ask ourselves - is the first point important in the context of an internal network?

Having unknown eavesdroppers is a given on the internet at large, but an internal network implies access control. Do you expect to have malicious actors inside your network who might be able to read your internal HTTP communication? If so, self-signed HTTPS might be the way to go - or, even better, generate your own root certificate authority and distribute the CA cert to your internal hosts, so you CAN validate inside the private network. If not, it’s mostly useless overhead, but that’s my opinion - others on these forums prefer to use self-signed HTTPS even in this case.
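For reference, here’s a minimal sketch of creating a private root CA and signing a certificate with it, assuming the standard openssl CLI; the filenames and common names are illustrative only:

```shell
# Create the private CA: a key plus a self-signed CA certificate
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 \
  -subj "/CN=My Private CA" -out myCA.pem

# Create a key and CSR for the internal host, then sign the CSR with the CA
openssl genrsa -out site.key 2048
openssl req -new -key site.key -subj "/CN=nas.internal" -out site.csr
openssl x509 -req -in site.csr -CA myCA.pem -CAkey myCA.key \
  -CAcreateserial -out site.crt -days 825 -sha256
```

Distributing myCA.pem to the trust stores of your internal clients is what then lets them validate site.crt.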

You need both. Strip the Server header in all your sites (via common.conf if convenient), and ALSO include the catch-all, so (as explained above) I can’t make Caddy leak that header with a request for a site you haven’t configured.


(Nicholas) #5

@Whitestrake Thank you again! It has been exhausting trying to get Caddy up and running in my environment, but I’ve finally made it happen (albeit with a few remaining issues that I would like to resolve / understand better).

FWIW, here are the relevant config files that ultimately got things working:

Caddyfile
# Object: Caddyfile v0.4-20180920225844
# Server: /volume1/docker/caddy/config/Caddyfile
# Container: /etc/caddy/Caddyfile
# Comments:

mydomain.xyz {
  root /srv
  import /etc/common.conf
  proxy my-local-ip:4444 {
    transparent
  }

  redir 301 {
#   / https://www.google.com/   # Redirect from mydomain.xyz to Google with https since Google
                                # automatically redirects http --> https anyway.

    /git    https://git.mydomain.xyz
    /gitea  https://git.mydomain.xyz
    /plex   https://plex.mydomain.xyz
    /dsm    https://syno.mydomain.xyz
    /syno   https://syno.mydomain.xyz
#   /monica https://crm.mydomain.xyz     # Service not yet setup
    /       https://www.google.com/
    }
}

http://, https:// {           # Catch-all: prevents inadvertent leakage of the Server header
  # tls {                     # (i.e. that you are using Caddy) on requests for hosts that are
  #   load /root/.caddy/local # not configured. E.g., a nonsense request (http://zyxor.xyz/abc123)
  # }                         # will still result in a 404, but without the header indicating
    tls self_signed           # that you are using Caddy.
    header / -Server
}

syno.mydomain.xyz {             # Synology DSM
  import /etc/common.conf
  basicauth [snip] "[snip]"
  proxy / my-local-ip:5000 {
    transparent
  }
}

git.mydomain.xyz {              # Gitea
  import /etc/common.conf
  basicauth [snip] "[snip]"
  proxy / my-local-ip:33000 {
    transparent
  }
}

plex.mydomain.xyz {             # Plex
  import /etc/common.conf
# basicauth [snip] "[snip]"
  proxy / my-local-ip:32400 {
    transparent
  }
}
common.conf
# Object: common.conf v0.3-20180920225844
# Local: /volume1/docker/caddy/config/common.conf
# Container: /etc/caddy/common.conf
# Objective: To minimize repetitive code blocks within the Caddyfile where the settings for a given service /
# proxy are the same. Instead, in each such instance we will use `import common.conf`.
# Comments:
#
tls letsencrypt@mydomain.xyz
tls {
     dns cloudflare
    }

gzip

# log stdout
errors stdout

log /var/log/caddy/access.log "{remote} - {user} [{when}] {host} ({>Referrer} {>User-Agent}) {method} {uri} {proto} {status} {size}" {
         rotate_size 2            
         rotate_age  7            
         rotate_keep 2            
    }

# errors {
#     log /var/log/caddy/error.log "{remote} - {user} [{when}] {host} ({>Referrer} {>User-Agent}) {method} {uri} {proto} {status} {size}"
#     }

header / {
  Strict-Transport-Security "max-age=31536000; includeSubDomains"
  X-XSS-Protection "1; mode=block"
  X-Content-Type-Options "nosniff"
  X-Frame-Options "DENY"
  -Server
    }

docker-compose.yml
# object: docker-compose.yml - v0.3 (20180920225844)
# server: /volume1/docker/caddy/build/docker-compose.yml
# container: _not mapped_
# Comments: See notes/commentary on which these updates were based at
#           [Caddy.community](https://caddy.community/t/caddy-via-docker-as-reverse-proxy-to-gitea-instance-on-private-server/4438/2)
# Usage:  First-time run using `docker-compose -f /path/to/docker-compose.yml up`; end using `Ctrl + C`.
#         Subsequent run using `docker-compose  -f /path/to/docker-compose.yml up -d`.
#
version: '3'

services:
  caddy:
# build:
#   context: github.com/njm2112/caddy-docker.git
#   args:
#     - plugins=git,cloudflare,jwt,login,filter,cors,realip,filemanager,cache,expires
    image: njm2112/caddy-docker:latest
    command: ["-log", "stdout", "-agree",
      "-email", "letsencrypt@mydomain.xyz",
      "-conf", "/etc/Caddyfile"]
    ports:
      - 888:80/tcp
      - 4444:443/tcp
    environment:
      CLOUDFLARE_EMAIL: cloudflare@mydomain.xyz
      CLOUDFLARE_API_KEY: [snip]
    volumes:
      - /volume1/docker/caddy/config/Caddyfile:/etc/Caddyfile
      - /volume1/docker/caddy/config/common.conf:/etc/common.conf
      - /volume1/docker/caddy/certs:/root/.caddy
      - /volume1/docker/caddy/public:/srv
      - /volume1/docker/caddy/logs:/var/log/caddy
# gitea:          # Not adding these to the docker-compose instructions as of now
# monica:      # because they are pre-built, working services but will add them
# mysql:        # once I get Caddy functioning properly.

Here’s where things stand as of right now:

  1. the tls { dns cloudflare } directive gave me a very hard time because, for the life of me, I could not manage to get the abiosoft/caddy docker image to build with a customized set of plugins. After trying to parse the code and determine why the builder.sh was ignoring the plugins argument that should have been passed through per docker-compose.yml, I just decided to fork the repo, manually modify the list of plugins to be included, and then to push to an automated build on docker hub that I could build from.

  2. Even though I eventually got this working and built the container using docker-compose up, I revised docker-compose.yml so that I could use docker-compose up -d in the future without having to re-build the container. Now it just runs using the njm2112/caddy-docker image that is a modified fork of abiosoft/caddy:0.11.0.

  3. The log and errors directives in common.conf seem to have something wrong with their syntax, so I had to revert to using log stdout and errors stdout. If you happen to know how I can fix this, I would appreciate it; the error is:

2018/09/21 20:35:04 /etc/common.conf:24 - Error during \
parsing: Wrong argument count or unexpected line ending after \
'{remote} - {user} [{when}] {host} ({>Referrer} {>User-Agent}) {method} {uri} {proto} {status} {size}' \
exit status 1
  4. I had to move the catch-all redir to google.com after the individual redirects; otherwise, requests matching the individual redirects listed in the block would go to google.com instead of to the subdomain.mydomain.xyz address.

  5. basicauth simply does not work. I can’t figure out what I’m doing wrong here.

  6. I resolved the “tls handshake” issue that was popping up by having Cloudflare force all traffic to https. Once I did that, no matter how I typed the destination URL, Cloudflare would redirect to https and Caddy would handle it from there.

  7. With respect to your comment above about generating a private root certificate authority, I followed a very helpful guide for creating my own CA and using it to sign a certificate so that even my internal traffic would route through https.

The question is what I do now with the various keys, certs, configs, etc. so that, e.g., my-local-ip:32400 serves Plex traffic over https. As of now, it redirects me to https://plex.tv to log in, but then after login, traffic passes through http://my-local-ip:32400/. All of the cert/key files are accessible to Caddy through the /root/.caddy path binding (which is why I tried using the tls { load /root/.caddy/local } directive per the Caddy documentation, which didn’t work).

  8. I can’t figure out why, but trying to access mydomain.xyz gives me a 526 error from Cloudflare. I have CNAME records set up for my specific subdomains as well as for www.mydomain.xyz, all of which point to my external IP. (I don’t have a wildcard CNAME record.) The request for mydomain.xyz immediately becomes a request for https://www.mydomain.xyz, and that’s where there is apparently a “host error.” What I don’t understand is that, in spite of an error being reported here (“Invalid SSL certificate”), I can see that there is a valid certificate issued by Cloudflare which lists mydomain.xyz and *.mydomain.xyz in the SAN section. What am I doing wrong here?

Again, I sincerely appreciate your help thus far! If you notice anything else about my config that you suggest I change for security reasons (or otherwise), please do share.

BR/


(Matthew Fay) #6

You should always be able to use docker-compose up -d. It’s just docker-compose up, except it doesn’t run in the foreground - it runs daemonised. Use docker-compose build to build the containers; up will only build them if they’re not currently built.

Double check what you put in common.conf vs. what I put in my example. When you specify the log format, you need three arguments to the directive - the path you’re logging, the location of the log file, and the format. You’ve used only the latter two. Use log / /var/log/caddy/access.log "[FORMAT]", not log /var/log/caddy/access.log "[FORMAT]" - that’s why you’re getting the wrong argument count error.

You’ve made the exact same syntax mistake here as you did for log. basicauth needs three arguments - the path you’re protecting, then the username, then the password. You’ve just put the username and password in. basicauth / user pass, not basicauth user pass.

This has little to nothing to do with Caddy. Caddy’s not listening on port 32400, Plex is, and you’re connecting directly to it.

My honest advice - Plex is a complicated beast. Run it accessible via https://plex.tv and use their site as the frontend, forget about trying to front it with Caddy. It will still transfer video over the private network if you access it via https://plex.tv.

Were the certificate chain/keys concatenated together in PEM files?

Gotta say, mate, you’re really making it hard for yourself, taking on a lot of complications for this project. Cloudflare is another complicated beast. I’ve written about it a few times in the past.

The certificate that YOU see, issued by Cloudflare, isn’t the certificate your host is presenting. You’re not even connecting to your host, you’re connecting to Cloudflare, and Cloudflare is connecting to your host. Misunderstanding how this system works is very detrimental to trying to deploy a complicated project like this, and my suggestion is that you forget entirely about trying to use Cloudflare to reverse proxy to your Caddy reverse proxy. Just use Cloudflare as DNS - disable all the orange-clouds in your Cloudflare dashboard - and get everything working that way first; get CF on board later if you absolutely need to.

As for what’s going wrong, I don’t know. Cloudflare might be giving you a valid certificate, but your Caddy isn’t giving Cloudflare a valid certificate. Moving the SSL setting from Full (Strict) back down to Full might solve this, but it’s not ideal. Again, get rid of CF’s reverse proxy, get everything working, then reintroduce them later.


I can’t recommend enough paying closer attention to the docs and understanding the directives you’re using. A few of the mistakes above are really simple and come down to not knowing how many arguments the directive should be given.


(Nicholas) #7

Thanks again!

Do I understand you correctly that, since access to my services from my own network doesn’t pass through Caddy (e.g., my-local-ip:33000 for Gitea), the question of https authentication is distinct from that of accessing the service via, e.g., git.mydomain.xyz? What has me confused is that I thought the root CA that I created, and used to sign my certificate so that my internal traffic would be https, was tied to mydomain.xyz. If this has nothing to do with Caddy, I can pose this question elsewhere and try to figure it out. Also, Plex was a poor example; I should’ve used Gitea, since that is the one that is really at the heart of my desire to use Caddy / https for secure, external access to my repos (which, fwiw, are for writing projects, not code).

I am not quite sure whether they were concatenated together in PEM files as I only have the one PEM file, but it is part of the root CA that I created. Here are the files that were generated when I followed the process for using a root CA to self-sign. I put the CA files in a mycertauth subfolder and the certificate/related files in a selfsigned subfolder when I created them, but I copied the items designated with * into the \local subfolder so I could point the tls { load /root/.caddy/local } directive at a single path.

\certs\mycertauth
     -myCA.key*
     -myCA.pem*
     -myCA.srl
\certs\selfsigned
     -ss_mydomain.xyz.crt*
     -ss_mydomain.xyz.csr
     -ss_mydomain.xyz.key*

I definitely would like to de-complicate this setup! To be honest, I didn’t realize I was adding complexity with Cloudflare. I just went that route because I came across Cloudflare in a list of hosting providers that work well with LetsEncrypt. If you think I’d be better served with AWS or Digital Ocean, etc., please do let me know. There is some inherent complexity that seems to stem from the Synology NAS monopolizing ports 80/443, so I have had to have my router forward incoming requests for 80/443 to intermediate ports (as listed in docker-compose.yml).

Orange-cloud option is now deactivated for the A record and all CNAME records. (The A-record directs mydomain.xyz to the external IP of my router and the following subdomains are listed as aliases: admin, crm, git, plex, syno, www.)

I had enabled “Always use HTTPS” in Cloudflare, which actually seemed to have resolved the “TLS handshake” error I had been seeing. Apart from that, there are a few settings in Cloudflare’s “crypto” panel that I haven’t changed but that strike me as potentially relevant here:

  • HSTS is not enabled
  • “Authenticated Origin pulls” is off
  • Opportunistic encryption is on
  • Automatic HTTPS rewrites is off

Question: Do you advise changing any of these either for the specifics of my setup or just because they provide some security/additional functionality that you recommend?

I note also that Cloudflare’s “crypto” panel shows that there is an “edge certificate” (for hosts mydomain.xyz and *.mydomain.xyz) of type SHA 2 ECDSA that is managed by Cloudflare. Although Cloudflare offers to generate a free “origin certificate” for me to install on my origin server (i.e., my NAS, I believe?), I haven’t done this.

I think what is confusing me most about the certificates/validation is that there are more certificate/key pairs involved here than I had read about while researching how to set this up. Before I deactivated the orange-cloud option, it appeared that the secure connection was being validated using a bulk certificate issued not only for mydomain.xyz, but also for quite a few other domains that do not belong to me. Now that I have deactivated the orange-cloud option, when I visit, for example, git.mydomain.xyz, the connection appears to be secured using a certificate issued by “Let’s Encrypt Authority X3” through the Digital Signature Trust Co. CA, for git.mydomain.xyz. The same is true for the other subdomains in my Caddyfile, e.g., plex.mydomain.xyz and syno.mydomain.xyz. However, even though Chrome reports that the certificate for www.mydomain.xyz (which serves the index.html file through a proxy /srv NAS-local-ip:4444 { transparent } directive) is valid, it displays the “i” (info) symbol instead of the green lock because, per Google’s explanation, the site “isn’t using a private connection.” Why would connecting to www.mydomain.xyz not use a private connection, while connecting to sub.mydomain.xyz appears to do so?

Trying to access https://mydomain.xyz, and also trying to connect to https://NAS-local-ip:port (for any port that I’m using for a service, for that matter), causes an “invalid certificate” error that prevents https. I believe this is because of the certificate that I signed using the root CA that I created. Though I did, in fact, add the relevant keys to my Apple Keychain on the computer from which I’m trying to access these locations, I have not had the certificate verified by a third party. If you have a recommendation for going about getting third-party verification of this, I would appreciate that information. Otherwise, I would like to clarify whether I can resolve this issue by generating a pem file using the crt, key, and csr files described a few paragraphs above, and then replacing the tls self_signed directive in the Caddyfile’s http://, https:// section as follows:

http://, https:// {
    tls {
       load /root/.caddy/local
    }
    header / -Server
}

As usual, your counsel is very much appreciated!

BR/


(Matthew Fay) #8

Yes. Caddy’s not in the picture when you go straight to the source (i.e. gitea:33000).

The CA you created to generate HTTPS certificates gives you the capability to issue one for the hostname gitea (or [ip-addr], or whatever name you use to connect to it - the port isn’t part of a certificate’s subject). Then you put your trusted CA cert on the Caddy host, so that Caddy knows it can trust the certificate presented from upstream.

The end result is that you have a LetsEncrypt validated HTTPS connection to Caddy from the outside world, and then Caddy has a privately validated HTTPS connection to Gitea directly. This second hop is complicated because there’s no public infrastructure tools to facilitate this, so you need to know a fair bit about certificate generation, naturally. The benefit is validated private/internal HTTPS, which is about as secure as you can hope to get over a potentially-compromised private internet.
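A sketch of that second hop (the hostname is illustrative, and this assumes your CA certificate has been added to the Caddy container’s trust store so the upstream’s private certificate validates):

```
git.mydomain.xyz {
  # External leg: Caddy serves its publicly validated LetsEncrypt certificate.
  # Internal leg: proxy over HTTPS to Gitea, which presents the privately signed cert.
  proxy / https://gitea.internal:33000 {
    transparent
  }
}
```

The only change from a plain-HTTP upstream is the https:// scheme on the proxy target, plus the trust-store work on the Caddy host.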

When you connect to Caddy, you’ll still be seeing the publicly validated LetsEncrypt certificate.

You need to convert the .crt file into a .pem file as a certificate chain, then append the key to the end. Caddy will want a single file ending in .pem, I believe.
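Using the filenames from your directory listing, that concatenation might look like this (a sketch; my recollection is that the load directive expects each .pem to contain the certificate chain followed by the key):

```shell
# Leaf certificate first, then the CA certificate (the chain), then the private key
cat ss_mydomain.xyz.crt myCA.pem ss_mydomain.xyz.key > ss_mydomain.xyz.pem
```

The resulting ss_mydomain.xyz.pem is what you’d drop into the directory that tls { load ... } points at.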

None of these options do anything unless Cloudflare is acting as a reverse proxy.

HSTS is awesome and you should use it. You can instruct Caddy to set HSTS, though, you don’t need Cloudflare to do it for you - just add this directive to any site definition you want HSTS enabled for:

header / Strict-Transport-Security "max-age=31536000;"

Authenticated Origin Pulls will have Cloudflare present a certificate to Caddy to prove it is really Cloudflare. You can then configure Caddy to reject any requests that don’t present this certificate. Enabling this would mean that the Cloudflare reverse proxy is the only method of accessing your website and naturally only works with orange-cloud records.
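If you ever enable it, I believe Caddy 0.x can enforce this on its side with the clients subdirective of tls - something along these lines (the CA file path is hypothetical; check the tls docs for the exact form):

```
example.com {
  tls {
    clients require /etc/ssl/cloudflare-origin-pull-ca.pem  # reject requests without a Cloudflare client cert
  }
}
```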

Opportunistic Encryption is mostly useless if you are forcing HTTP->HTTPS upgrades (as Caddy does by default). It is a mechanism that allows certain modern browsers to recognize that an asset is available over HTTPS and use the encrypted protocol transparently to the user.

Automatic HTTPS Rewrites will filter your HTML and headers coming back from Caddy to make sure that all links to your site are HTTPS. This can help fix some issues, like ones common with WordPress, where some assets like gallery images are saved as HTTP links, resulting in a “mixed content” / partially unencrypted webpage.

Again, all of these are only usable with a Cloudflare reverse proxy. I don’t really see any need to change any of those settings, even if you did re-enable the proxy.

This is another useful tool for if you plan to have Caddy entirely behind Cloudflare. They generate a really long-lived origin certificate that’s only trusted by Cloudflare (i.e. not publicly trusted). You install that to Caddy, specify it for all your sites with the tls directive, and then your website is only trusted by Cloudflare’s clients.

Combine this with Authenticated Origin Pulls for a trust/trust relationship with Cloudflare and Cloudflare alone.

Yes, this is exactly the expected behaviour. With Cloudflare in front, your client is not talking to Caddy, it’s talking to Cloudflare itself; Cloudflare issues its own certificates, in bulk, for all its clients (it calls this Flexible SSL). Cloudflare then talks to Caddy, validating the LetsEncrypt certificate being presented by Caddy. Internally then you’d have Caddy talk to Gitea, validating the private certificate (you can see how this chain can get very long…). By removing Cloudflare, your browser talks directly to Caddy, and you see the LetsEncrypt certificate.

This is ambiguous; neither of those hosts specify a private connection or otherwise. Prefixing with http:// or https:// indicates whether a connection should be private. Try https://www.mydomain.xyz/ and see if it changes.

Caddy should, by default, redirect from HTTP to HTTPS whenever possible.

The former should really route you out and about to Caddy externally, which should present a validated LetsEncrypt certificate.

The latter won’t work while tls self_signed is in play, and basically never will; Caddy generates the signing keys in-memory, so there’s no root CA to copy to other clients. You will need to fix the .pem files so Caddy can present the signed private certificates (which must be issued with NAS-local-ip as a subject name - the port isn’t part of the certificate) so that the CA certificate you copied to the keychain can be used to validate them.