How should I reference another Caddy instance via a Caddy Layer4 instance?

1. The problem I’m having:

I’m running three instances of Caddy across three servers with different applications. For external access, I’ve installed the Caddy OPNsense module on my firewall and am leveraging the Layer4 capabilities to receive external traffic, determine the subdomain (via SNI), and then forward it to the appropriate instance of Caddy within my network.

With everything configured, the L4 Caddy appears to be receiving and parsing the traffic correctly based on the logs. However, when I try to visit these services externally, the L4 instance forwards the traffic and I end up on a blank white page. I can see the HTTPS redirect for the subdomain execute, but nothing loads.

With my three instances of Caddy handling TLS, is there another way I should be referencing them when sending traffic from the L4 proxy?

2. Error messages and/or full log output:

No relevant logs that I can see in the internal Caddy instances. Below are debug logs from the L4 proxy (newest first) showing the handoff working as expected.

2024-11-25T09:12:08-05:00	Debug	caddy	{"level":"debug","ts":"2024-11-25T14:12:08Z","logger":"caddy.listeners.layer4","msg":"connection stats","remote":"174.228.234.93:7326","read":872,"written":2897,"duration":0.230515789}
2024-11-25T09:12:08-05:00	Debug	caddy	{"level":"debug","ts":"2024-11-25T14:12:08Z","logger":"layer4.handlers.proxy","msg":"dial upstream","remote":"174.228.234.93:7326","upstream":"192.168.10.1:443"}
2024-11-25T09:12:08-05:00	Debug	caddy	{"level":"debug","ts":"2024-11-25T14:12:08Z","logger":"caddy.listeners.layer4","msg":"matching","remote":"174.228.234.93:7326","matcher":"layer4.matchers.tls","matched":true}
2024-11-25T09:12:08-05:00	Debug	caddy	{"level":"debug","ts":"2024-11-25T14:12:08Z","logger":"layer4.matchers.tls","msg":"matched","remote":"174.228.234.93:7326","server_name":"plan.halp.app"}

3. Caddy version:

I’m using the latest Caddy plugin from the OPNsense repository for the Layer4 proxy (not sure where to find the specific version number).

The internal instances are all running v2.8.4, custom-built with the Cloudflare and Porkbun DNS modules.

4. How I installed and ran Caddy:

a. System environment:

I’m on the latest version of OPNsense with the official Caddy plugin installed and up-to-date.

The three servers run Debian, with Caddy deployed via Docker and docker-compose.

b. Command:

docker compose up caddy

c. Service/unit/compose file:

On the three servers:

  caddy:
    build: /mnt/user/appdata/caddy/.
    container_name: caddy
    restart: unless-stopped
    ports:
      - 443:443
    environment:
      LE_CERT_EMAIL: ${LE_CERT_EMAIL}
      CF_API_TOKEN: ${CF_API_TOKEN}
      PORKBUN_API_KEY: ${PORKBUN_API_KEY}
      PORKBUN_API_SECRET_KEY: ${PORKBUN_API_SECRET_KEY}
    volumes:
      - ${APP}/caddy/Caddyfile:/etc/caddy/Caddyfile
      - ${APP}/caddy/site:/srv
      - ${APP}/caddy/data:/data
      - ${APP}/caddy/config:/config
    networks:
      - caddy
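
For completeness, the custom image referenced by build: is compiled with xcaddy. A minimal Dockerfile along these lines (a sketch, not my exact file) is enough to pull in the two DNS modules:

FROM caddy:2.8.4-builder AS builder

# Compile Caddy with the Cloudflare and Porkbun DNS modules
RUN xcaddy build \
	--with github.com/caddy-dns/cloudflare \
	--with github.com/caddy-dns/porkbun

FROM caddy:2.8.4

# Swap in the custom binary
COPY --from=builder /usr/bin/caddy /usr/bin/caddy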

d. My complete Caddy config:

This is the Caddyfile from one of the three servers (192.168.10.1). They are all structured like this and work when using internal redirects.

# ––––––––––––––––––––––––––––––––––––––––––––
#  Global
# ––––––––––––––––––––––––––––––––––––––––––––
{
	email {env.LE_CERT_EMAIL}
}

# ––––––––––––––––––––––––––––––––––––––––––––
#  Snippets
# ––––––––––––––––––––––––––––––––––––––––––––

(headers) {
	encode gzip
}

(porkbun) {
	tls {
		dns porkbun {
			api_key {env.PORKBUN_API_KEY}
			api_secret_key {env.PORKBUN_API_SECRET_KEY}
		}
		resolvers 1.1.1.1
	}
}

# ––––––––––––––––––––––––––––––––––––––––––––
#  Reverse Proxy
# ––––––––––––––––––––––––––––––––––––––––––––

*.halp.app {
	import headers
	import porkbun

	@budget host budget.halp.app
	@authelia host sso.halp.app
	@rallly host plan.halp.app

	handle @budget {
		reverse_proxy budget:5006
	}

	handle @authelia {
		reverse_proxy authelia:9091
	}
	
	handle @rallly {
		reverse_proxy rallly:3000
	}

	handle {
		abort
	}
}

Here is the config for the Layer4 proxy on Caddy OPNsense:

# DO NOT EDIT THIS FILE -- OPNsense auto-generated file


# caddy_user=root

# Global Options
{
	log {
		output net unixgram//var/run/caddy/log.sock {
		}
		format json {
			time_format rfc3339
		}
		level DEBUG
	}

	https_port 443

	servers {
		protocols h1 h2 h3
		listener_wrappers {
			layer4 {
				import /usr/local/etc/caddy/caddy.d/*.layer4listener

				@a742d3fc-178e-4614-82d8-0a259e31c20a tls sni code.halp.app

				route @a742d3fc-178e-4614-82d8-0a259e31c20a {
					proxy tcp/192.168.10.4:443 {
					}
				}
				@418fd65f-f149-4d71-b0ab-5d8877d098d3 tls sni sso.halp.app budget.halp.app plan.halp.app

				route @418fd65f-f149-4d71-b0ab-5d8877d098d3 {
					proxy tcp/192.168.10.1:443 {
					}
				}
				@69515b57-a5d4-4392-ac79-41cd2f524853 tls sni load.halp.app

				route @69515b57-a5d4-4392-ac79-41cd2f524853 {
					proxy tcp/192.168.10.2:443 {
					}
				}
			}
			tls
		}
	}

	layer4 {
		import /usr/local/etc/caddy/caddy.d/*.layer4global
	}

	dynamic_dns {
		provider porkbun {
			api_key <SECRET KEY REDACTED>
			api_secret_key <SECRET KEY REDACTED>
		}
		domains {
			halp.app *
		}
		ip_source interface igb0
		versions ipv4
		update_only
	}

	email <EMAIL ADDRESS REDACTED>
	grace_period 10s
	import /usr/local/etc/caddy/caddy.d/*.global
}

# Reverse Proxy Configuration


# Layer4 default HTTP port
:80 {
}
# Layer4 default HTTPS port
:443 {
}

import /usr/local/etc/caddy/caddy.d/*.conf

5. Links to relevant resources:

Here’s a topic I started on the OPNsense forums, which then directed me here: Caddy Layer4 Configuration

Follow-up question: someone on the OPNsense forums suggested that I don’t even need a Layer4 proxy to achieve this, but I can’t seem to find a configuration that works.

I essentially want all traffic arriving on port 443 at my public IP address to hit Caddy on OPNsense, which, based on the subdomain, then routes it to one of my three internal Caddy instances.

Is there an ideal way to set this up?
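
From what I understand, a layer4-free setup would have the OPNsense Caddy terminate TLS itself and re-encrypt to the backends. Something like this sketch is what I have in mind (assuming the OPNsense instance can solve the Porkbun DNS challenge; the API keys here are placeholders):

plan.halp.app {
	tls {
		dns porkbun {
			api_key <PORKBUN API KEY>
			api_secret_key <PORKBUN API SECRET KEY>
		}
	}

	# Re-encrypt to the internal Caddy, which still serves its own certificate;
	# tls_server_name sets the SNI so the backend matches the right site block
	reverse_proxy https://192.168.10.1 {
		transport http {
			tls_server_name plan.halp.app
		}
	}
}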

Update: I think I’m getting closer to solving the issue. Each of the three servers shares the same wildcard domain (*.halp.app).

When I open a new browser session and navigate to a service on Server 1, it works - but I can’t navigate to services on either of the other two servers successfully.

When I clear cache and open a new browser session and navigate to a service on Server 2, it works - but I can’t navigate to services on either of the other two servers successfully.

And so on.

I think the browser is getting hung up on each server having its own cert for the same wildcard domain? Is there any way around this?

I think at this stage, the people actively working on layer4 are tracking issues on GitHub. You might have better luck posting on the repo’s issue tracker for now. (This is mostly because I personally don’t have experience with the layer4 plugin.)

Just confirming that I was able to resolve this issue by specifying individual subdomains in my Caddyfiles vs a wildcard subdomain. Each of the three servers having their own certificate for the same wildcard subdomain was causing issues with browsers when navigating across services.