Caddy to Caddy - Certificate problem?

1. The problem I’m having:

I have two servers running in my home. Since I can only forward port 443 to one of the machines, I want it to reverse-proxy to the other server.

Currently, port 443 is forwarded to yeda-server.
Within that server’s Caddyfile, any hostname under deanayalon.com is reverse-proxied to dean-linux (I’d have used a wildcard domain, but I can’t do the DNS-01 challenge).

I believe the configuration for yeda-server is generally correct: if I spin up a container on dean-linux:443, requests for hostnames under deanayalon.com reach it and I can use the service.

But if I try to create a Caddy container on dean-linux, it tries to issue certificates, which is not possible, seeing as port 80 arrives at yeda-server, which makes its own certificate requests.

Is there a way to have those two Caddy servers communicate so that the inner one uses the external one’s certificates, and does not try to issue them itself?

2. Error messages and/or full log output:

{
    "level": "error",
    "ts": 1726527929.4295866,
    "logger": "tls.issuance.acme.acme_client",
    "msg": "validating authorization",
    "identifier": "filemaker.deanayalon.com",
    "problem": {
        "type": "urn:ietf:params:acme:error:tls",
        "title": "",
        "detail": "62.0.120.246: remote error: tls: internal error",
        "instance": "",
        "subproblems": []
    },
    "order": "https://acme-v02.api.letsencrypt.org/acme/order/1950502556/305757599826",
    "attempt": 1,
    "max_attempts": 3
}
{
    "level": "error",
    "ts": 1726527930.6718748,
    "logger": "tls.obtain",
    "msg": "could not get certificate from issuer",
    "identifier": "filemaker.deanayalon.com",
    "issuer": "acme-v02.api.letsencrypt.org-directory",
    "error": "HTTP 429 urn:ietf:params:acme:error:rateLimited - Error creating new order :: too many failed authorizations recently: see https://letsencrypt.org/docs/failed-validation-limit/"
}

{
    "level": "error",
    "ts": 1726527932.7435188,
    "logger": "tls.issuance.acme.acme_client",
    "msg": "validating authorization",
    "identifier": "onedev.deanayalon.com",
    "problem": {
        "type": "urn:ietf:params:acme:error:unauthorized",
        "title": "",
        "detail": "62.0.120.246: Invalid response from https://onedev.deanayalon.com/.well-known/acme-challenge/tLat3xbKaZ3QIe6wAW6YtRwwa3tGUPg6xYsUQ9ile8g: 502",
        "instance": "",
        "subproblems": []
    },
    "order": "https://acme-v02.api.letsencrypt.org/acme/order/1950502556/305757609216",
    "attempt": 2,
    "max_attempts": 3
}
{
    "level": "error",
    "ts": 1726527932.743571,
    "logger": "tls.obtain",
    "msg": "could not get certificate from issuer",
    "identifier": "onedev.deanayalon.com",
    "issuer": "acme-v02.api.letsencrypt.org-directory",
    "error": "HTTP 403 urn:ietf:params:acme:error:unauthorized - 62.0.120.246: Invalid response from https://onedev.deanayalon.com/.well-known/acme-challenge/tLat3xbKaZ3QIe6wAW6YtRwwa3tGUPg6xYsUQ9ile8g: 502"
}

3. Caddy version:

v2.8.4 h1:q3pe0wpBj1OcHFZ3n/1nl4V4bxBrYoSoab7rL9BMYNk=

4. How I installed and ran Caddy:

Using the caddy image for Docker

a. System environment:

Yeda-Server:
macOS Sonoma 14.6.1 (M1 Max chip)
Docker Desktop 4.33.0 (Engine 27.1.1)

Dean-Linux:
Manjaro Linux 24.0.8 Wynsdey (AMD64)
Docker Engine 27.1.2

b. Command

docker compose up -d

c. Service/unit/compose file:

compose.yml (identical on both servers)

services:
  caddy:
    image: caddy
    container_name: caddy
    hostname: caddy
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./site:/srv:ro
      - ./log:/var/log
      - data:/data/caddy
      - config:/config/caddy
    env_file:
      - .env

volumes:
  data:
    name: caddy-data
  config:
    name: caddy-config

d. My complete Caddy config:

Yeda-server’s Caddyfile:

{
	email {$EMAIL}
}

(tls-insecure) {
	reverse_proxy {args[0]}:443 {
		transport http {
			tls
			tls_insecure_skip_verify
		}
	}
}
(log) {
	log {
		output file /var/log/{args[0]}.log
		format json
	}
}


filemaker.yeda-water.com:443 {
	import log filemaker
	import tls-insecure fms
}

(dean) {
	{args[0]}.deanayalon.com:443 {
		import log dean/{args[0]}
		import tls-insecure {$DEAN_IP}
	}
}

import dean filemaker
import dean onedev

dean-linux’s Caddyfile:

{
	email {$EMAIL}
}

(tls-insecure) {
	reverse_proxy {args[0]} {
		transport http {
			tls
			tls_insecure_skip_verify
		}
	}
}
(log) {
	log {
		output file /var/log/{args[0]}.log
		format json
	}
}

filemaker.deanayalon.com:443 {
	import log filemaker
	import tls-insecure fms:443
}
onedev.deanayalon.com:443 {
	import log onedev
	reverse_proxy onedev:6610 {
		transport http {
			tls
		}
	}
}

5. Links to relevant resources:

Unfortunately, though I remember reading a mention of managing multiple Caddy servers together, I could not find any documentation or forum threads about configuring that. My apologies if I missed something.

Thank you, and have a great night :slight_smile:

Howdy @DeanAyalon, welcome to the Caddy community.

If you can share certificate storage, it’s that easy. Put it on an NFS share, for example, and the Caddy instances will seamlessly cluster with each other for certificate maintenance, solving challenges, and sharing TLS assets. You get working, publicly-trusted HTTPS for both servers and can forget about it from here on out.

Any Caddy instances that are configured to use the same storage will automatically share those resources and coordinate certificate management as a cluster.
Automatic HTTPS — Caddy Documentation
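
As a concrete illustration, here’s a minimal sketch of what that could look like in your existing compose.yml files, assuming a hypothetical NFS export nfs-host:/export/caddy (any shared filesystem mounted identically on both servers works the same way):

volumes:
  data:
    name: caddy-data
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs-host,rw"       # placeholder; use your NFS server's address
      device: ":/export/caddy"    # hypothetical export path

With that volume definition on both servers, the two instances read and write the same /data/caddy and coordinate automatically.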

As an interesting alternative, I think you could probably use the front-facing Caddy instance as an ACME server and point the second one at it to requisition certs from its local CA, but that would have a few drawbacks.
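
For the curious, a rough sketch of that alternative (untested; the acme.deanayalon.com hostname is made up, and by default Caddy’s acme_server publishes its directory at /acme/local/directory):

# On the front-facing Caddy: expose its internal CA over the ACME protocol
acme.deanayalon.com {
	acme_server
}

# In the second Caddy's global options: request certs from the first
# instance instead of a public CA
{
	acme_ca https://acme.deanayalon.com/acme/local/directory
}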


Thank you, Matthew!

I had thought of sharing the certificate mount, but worried about security, so I figured asking here would be best.

Looking again at the part on Caddy’s homepage about multiple Caddy servers working together:

Cluster coordination :globe_with_meridians:

Simply configure multiple Caddy instances with the same storage, and they will automatically coordinate certificate management as a fleet and share resources such as keys and OCSP staples

Seems they really are intended to share a certificate storage mount.

Seeing as macOS seems to favor SMB rather than NFS, I guess I should set up a Samba server(?)
I have absolutely zero knowledge about network volumes; I’ve used SSHFS a tiny bit.
How should I approach this?


As for the second option, how would that work? And what would be the drawbacks, if you don’t mind elaborating?

Again, thank you :slight_smile:

Yeah, shared storage is the way to cluster Caddy instances. It doesn’t HAVE to be file storage, though; you can use some other kind of remote storage like Redis or S3 with any of the modules listed on this page, as long as it’s configured alike on all clustered Caddy instances: JSON Config Structure - Caddy Documentation

Network file storage is probably the most straightforward option, though. SMB mounts are natively supported basically everywhere.

Naturally, the security of the TLS assets is important. NFS and SMB have authentication schemes, though, so it’s feasible to secure them reasonably. Running inside your LAN, I’d expect a simple username+password to be plenty most of the time.

With an Apple computer as the client SMB should be just fine.

You can actually run SMB from Docker. I don’t think it’s commonly recommended, but personally I prefer it; I don’t like running Samba on one of the hosts and configuring a bunch of stuff just to serve a single folder for a purpose like this. There are Samba containers with near-turnkey solutions; have a look at something like this: GitHub - ServerContainers/samba: samba - (ghcr.io/servercontainers/samba) (+ optional zeroconf, wsdd2 & time machine) on alpine [x86 + arm]

SSHFS would probably work too, and would be exceptionally secure for this purpose. The TLS assets are not large files, and the activity is not particularly latency- or bandwidth-sensitive, as certificate maintenance is a very short process carried out relatively infrequently.

The second Caddy instance’s certificates would be issued by the first instance’s local certificate authority. That means your browser won’t trust them unless you install the first instance’s root CA cert into the trust store of each client you want to use to browse to it.

Untrusted certs aren’t the biggest issue, but it’s nice to have it fully validated so you’re not met with security interstitials every now and again when browsing to it.

There’s also the argument that invalid certificates make HTTPS much less useful. HTTPS has two distinct goals: preventing snooping with encryption (which works with untrusted certs) and preventing impostors with a trust mechanism (which obviously doesn’t work for self-signed stuff).

Arguably, within your LAN the issues of encryption and trust are mitigated by the firewall separating you from the internet, so many would say that HTTPS isn’t particularly necessary and you could probably just have the backend Caddy instance available directly over HTTP without much risk.

This is less of an issue if you simply always browse to the website served by the frontend Caddy instance, which would naturally trust the certs it itself issued for the backend instance to use, and don’t browse to the backend instance within your LAN.


Ok, so I did understand correctly: as a CA, it’d be issuing what are effectively self-signed certs, which comes with its own host of problems.

So my options are SMB, Redis, and SSHFS, with SSHFS being the most secure.
What about complexity? I’ve only dabbled in SSHFS a bit, mounting manually, so I don’t know how simple it would be to automate the mount.

Since it is all in a Docker-based environment, your suggestions of Redis/docker-samba were very intriguing to me: I don’t need to make sure any other system is operating correctly, just add a simple container to the stack.

I’ll try to look at their documentation, but as far as Caddy is concerned, how do I set this up?

  • With SMB I am guessing I’d simply need to change the mount paths to the SMB volume, right?
  • With Redis, from the snippets of people’s code I’ve glimpsed online, it seems to use some redis:// communication protocol to transfer data - how would I set up Caddy to use it?

Also, which paths should I mount?
/data/caddy/certificates, of course.
What about /data/caddy/acme?

And one last thing: if I share the ACME-challenge path between the Caddy containers, could both potentially generate and use their own certificates?

Sorry, I see you have linked docs that may answer some of my questions.

I think, honestly, SSHFS is probably the least complex. You can enable it by configuring an authorized key on one host and an fstab entry (or maybe a systemd automount) on the other. The Arch wiki has some good information for you there.

https://wiki.archlinux.org/title/SSHFS
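
To give you a rough idea (user, host, and paths here are hypothetical; see the wiki for the options that fit your setup), a single fstab line on dean-linux could mount yeda-server’s Caddy data on demand:

# /etc/fstab on dean-linux: mount yeda-server's shared Caddy data over SSH
dean@yeda-server:/srv/caddy-data  /mnt/caddy-data  fuse.sshfs  noauto,x-systemd.automount,_netdev,reconnect,allow_other,IdentityFile=/home/dean/.ssh/id_ed25519  0  0

You’d then bind-mount /mnt/caddy-data into the Caddy container in place of the named data volume.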

SMB comes in shortly after, if you run Samba in a Docker container. Those containers use environment variables to preconfigure everything, so once you’ve got a Compose file set up, you’re good. An fstab entry on the other host again gets it connected.
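
For a taste, here’s a sketch of such a Compose service, going from memory of that image’s README conventions (double-check the variable names against the repository linked above before relying on this):

services:
  samba:
    image: ghcr.io/servercontainers/samba
    restart: unless-stopped
    ports:
      - 445:445
    environment:
      ACCOUNT_caddy: "choose-a-strong-password"   # creates SMB user "caddy"
      UID_caddy: "1000"
      SAMBA_VOLUME_CONFIG_caddydata: "[caddy-data]; path=/shares/caddy-data; valid users = caddy; guest ok = no; read only = no; browseable = yes"
    volumes:
      - ./data:/shares/caddy-data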

Redis involves adding modules to both Caddy instances, but you could expose Redis across your LAN with very basic password auth and simply have both Caddy instances connect to it. Redis also has a Docker image. I’d say it’s also fairly straightforward, but the fact that you have to custom-build both Caddy instances - while not particularly difficult at all - does make it slightly more complicated than the other solutions.

Yep. You mount the SMB volume on the host, then you bind-mount the SMB folder into your Docker container.

You add the custom module when you build Caddy, and refer to its documentation here for Caddyfile configuration: GitHub - pberkel/caddy-storage-redis
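
Here’s a sketch of the two pieces, if you go that route. First, the custom image, using the builder variant of the official Docker image:

# Dockerfile: bake the Redis storage module into Caddy
FROM caddy:2.8-builder AS builder
RUN xcaddy build --with github.com/pberkel/caddy-storage-redis

FROM caddy:2.8
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

Then storage goes in the global options of both Caddyfiles (subdirectives from memory of that module’s README, so verify there; redis-host and the password variable are placeholders):

{
	storage redis {
		host     redis-host
		port     6379
		password {$REDIS_PASSWORD}
	}
}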

I’d personally just do the whole /data folder.

You might have to elaborate; I’m not sure I understand what you mean by sharing the ACME-challenge paths.

Edit: Do you mean proxying /.well-known/acme-challenge to the second Caddy instance?

That would… probably work, I think? If you disable the TLS-ALPN challenge and override the HTTP->S redirect for that site address.


Yes, that’s what I meant: sharing /.well-known/acme-challenge with both instances.

Isn’t the HTTP->S redirect already disabled for that specific path, in order for Caddy to automatically issue certificates itself?

That being said, I know almost nothing about ACME challenges, and have only used HTTP-01 so far. Does Caddy automatically use TLS-ALPN-01 + HTTP-01? If so, what’s the benefit?
And why would I need to disable it in such a setup?

Could be cool to set up, even if just as a proof of concept - both instances working together, yet managing their own separate certificates.
I don’t know what sort of benefit could be derived from this, honestly; just a thought that came up.


It’s interesting that you actually need to build a custom image in order to add modules to Caddy. I’d have thought a Docker-based solution would simply construct containers for each of those modules, but I guess not everything can, or should, be done that way.

Since I don’t really want to set up anything on my machine, and prefer the entire configuration to be easy to migrate using only the git repository made for my server: would it be a good idea to set up a Samba container on Yeda-Server, and then a client container on Dean-Linux that mounts the SMB volume itself, rather than configuring my host machine to do so?

When Caddy is executing its own challenge, it handles this route specially.

If Caddy isn’t executing a challenge, there is no special handling and it treats it like any other request, including the HTTP->S redirect.

Diversifying the kinds of challenges Caddy can use increases its resiliency: if one option fails, another is likely to work. For example, TLS-ALPN can’t be proxied except by relatively blind packet forwarding; it has to be negotiated at the first point where TLS is terminated.

If you don’t disable TLS-ALPN, the second Caddy will probably still manage to get a certificate, but it’s likely to bang its head against the wall a few times first trying to get the ACME server to validate TLS-ALPN against the front-facing Caddy. This kind of challenge is “solved” during the early TLS negotiation phase, so it actually happens well before any HTTP stuff like the URI /.well-known could be proxied onwards.

Turning it off will just make it much smoother since Caddy will go straight for HTTP validation, which can be proxied.
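
To make that concrete, here’s a rough sketch of both halves (untested, reusing hostnames from your Caddyfiles). On yeda-server, declaring an http:// site takes over the automatic redirect for that host, letting you forward challenge requests inward and redirect everything else yourself:

http://filemaker.deanayalon.com {
	@challenge path /.well-known/acme-challenge/*
	handle @challenge {
		reverse_proxy {$DEAN_IP}:80
	}
	handle {
		redir https://{host}{uri} 308
	}
}

And on dean-linux, the site block opts out of TLS-ALPN so Caddy goes straight for the HTTP challenge:

filemaker.deanayalon.com:443 {
	tls {
		issuer acme {
			disable_tlsalpn_challenge
		}
	}
	import log filemaker
	import tls-insecure fms:443
}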

Yeah, this goes way deeper than one might think on the surface. Which modules should have images prebuilt for them? Which combinations of modules? Which are popular? Which modules don’t really make sense to include alone but make much more sense paired with some other module? What happens if Caddy or a module updates and breaks on the current version of its counterpart? How do we keep track of all the release cycles of all the modules? How do we standardize image tag naming for groups of modules? And on, and on, and on, and on, and on… A nightmare.

Better to just make it as easy as possible for you to build your own.

You don’t need a client container. If you don’t want to handle mounting the SMB share yourself on the host, Docker can mount it directly as a volume. You can have the Samba server configured in Compose on the host and the volume in Compose on the client, making it all pretty much completely portable.

See https://docs.docker.com/engine/storage/volumes/#create-cifssamba-volumes for details on CIFS (i.e. SMB) volumes.
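
In your existing compose.yml on Dean-Linux, that would mean replacing the named data volume with something along these lines (share name and credentials are examples):

volumes:
  data:
    name: caddy-data
    driver: local
    driver_opts:
      type: cifs
      o: "addr=yeda-server,username=caddy,password=${SMB_PASSWORD},vers=3.0"
      device: "//yeda-server/caddy-data"

Docker mounts the share when the container starts, so nothing on the host needs to know about it.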


Woah, a tad more complex than the simple nginx configuration I know, then, lol.
That’s nice to know. May I ask why it’s done that way? Is there any instance where a request to such a URL path should be treated as a normal one? Also, wouldn’t it be potentially problematic with caching?
Sorry if the questions are a tad dumb.

Cool, I know nothing of that challenge, but this makes sense.

Not really what I meant. I was thinking more of having those modules run in their own separate containers, kind of like how a k8s cluster works.
Want the Caddy storage module? Simply add the caddy-storage image as another service in your Compose stack.

DOH! How did I forget that?!

Thank you for all the assistance so far; you’ve been incredibly helpful and pleasant to converse with.

I’ll try the SMB plan and report back

Have an awesome day!

It’s all handled by GitHub - caddyserver/certmagic: Automatic HTTPS for any Go program: fully-managed TLS certificate issuance and renewal, which is essentially THE headlining feature of Caddy: automatic HTTPS you can truly set and forget. nginx and Apache solutions, by contrast, typically involve an out-of-process script like Certbot, and the “turn-key” solutions for those tend to simply be a box with a supervisor process running automations on a timer.

Well, we’ve discussed one such instance already - what if you wanted to reverse-proxy the /.well-known path to an upstream server so it could solve challenges? We don’t want Caddy to intercept that request - we want to handle it like any other HTTP request so we can route it onwards!

It’s just better on principle to make as few assumptions about the possible use cases as we can, because people coming up with reasons to do stuff like this that we might never have thought of before is a pretty common fact of life and good software should take that occurrence into account.

You should never cache this URI. It’s used for lots of reasons other than just ACME, and as long as nobody configures it with caching headers, and no clients are explicitly configured to cache the reply, I can’t see how it would be an issue.

Any client asking for a /.well-known URI is looking for the most up-to-date information anyway, so it wouldn’t cache the reply even if the server mistakenly told it to with a cache header. If you mistakenly configure a cache handler in Caddy itself, that won’t be a problem for challenges executed by that Caddy instance because, again, CertMagic intercepts that route when carrying out challenges so the caching handler configured for the site would never execute.

If you’re interested, Let’s Encrypt has a really nice, succinct write-up on the main challenge types (HTTP-01, TLS-ALPN-01, and DNS-01) and how they broadly function here: Challenge Types - Let's Encrypt

It’s not a bad read; it gives you a good idea of what’s going on in a pretty short little article.

Not really feasible, because these modules extend the Caddy binary itself. We call them plugins sometimes, but they’re not really plug-in: the source code is actually included in the build to produce the final result, and these modules often benefit from, take advantage of, and manipulate the core in-process functionality of Caddy itself.

No worries! Always glad to help people learn. Not to mention anyone with the same questions as you might stumble across this forum post one day and come away more informed, too.
