Global snippets

1. The problem I’m having:

No issue really, just wondering if there is a sort of “global snippet”.

Say I have the following for Authelia:

(secure) {
	forward_auth {args[0]} authelia:9091 {
		uri /api/verify?rd=https://auth.example.com
		copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
	}
}

service1.example.com {
  import secure *
  reverse_proxy service1:80
}

service2.example.com {
  import secure *
  reverse_proxy service2:8000
}

Is there a way to have the secure snippet apply to all site blocks without having to add it to each one? E.g. I know this doesn’t work, but for illustration’s sake:

{
  import secure *
}

Thanks

Maybe something along the lines of:

(secure) {
  forward_auth {args[0]} authelia:9091 {
    uri /api/verify?rd=https://auth.example.com
    copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
  }
}

*.example.com {
  import secure *
  service1.example.com {
    reverse_proxy service1:80
  }
}

I don’t know if you can nest blocks like that.

No, you can’t nest site blocks like that. But you can get the effect of that last block by using a host matcher instead of nesting:

@service1 host service1.example.com
reverse_proxy @service1 service1:80

And you can generalize this more at scale using the map directive, as a way to map many domains/subdomains to their proxy backends: map (Caddyfile directive) — Caddy Documentation
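Put together, the host-matcher approach might look like this inside a single wildcard site block — a sketch only, carrying over the service names from your example:

```caddyfile
*.example.com {
	import secure *

	@service1 host service1.example.com
	reverse_proxy @service1 service1:80

	@service2 host service2.example.com
	reverse_proxy @service2 service2:8000
}
```

Each matcher narrows the directive to requests for that hostname, so you only manage one certificate-bearing site block.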

Is this heading in the right direction with the map directive:

(secure) {
	forward_auth {args[0]} authelia:9091 {
		uri /api/verify?rd=https://auth.example.com
		copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
	}
}

*.example.com {
	import secure *

	map {matcher} {forward} {
		"service1.example.com" service1:80
		"service2a.example.com service2b.example.com" service2:8000
	}

	reverse_proxy {forward}
}

I will need to have multiple hostnames for some services, as with service2 in the example.

It’s getting there! The way you declare and then use {forward} in a reverse_proxy is perfect. A few little points to fix:

  • {matcher} isn’t correct in map {matcher} {forward}. The first placeholder you specify is the one you’re mapping from; it needs to be an existing placeholder. The subsequent ones are the ones you’re creating, based on whatever the value of the first one is. In your case you’ll want to map from {host}, because you want to supply different values based on which host is requested.

  • "service2a.example.com service2b.example.com" isn’t a good value to map from, because {host} will never be two space-separated domain names. If you need to map two separate domain names to the same backend, you’ll need two lines: one for service2a.example.com service2:8000 and one for service2b.example.com service2:8000.
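Applying both of those fixes, the map block might look something like this (a sketch using the hostnames from your example):

```caddyfile
map {host} {forward} {
	service1.example.com  service1:80
	service2a.example.com service2:8000
	service2b.example.com service2:8000
}
```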

And one optional extra tip: since you could get a request for a host you haven’t mapped yet, someone requesting it could get a 502 error from Caddy, because Caddy will try to proxy to an empty upstream address.

You can leave that as-is (and let Caddy generate 502 errors whenever that happens, which isn’t an issue other than filling up logs), or you could solve it in one of two ways:

  • You could add a default <some-upstream> line to the map. The default key tells the map directive what the value of {forward} should be if the input didn’t match any of the other values. That would have unmapped hosts going to whatever default backend you specified.

  • Alternatively, you could use a matcher to check whether your map produced a valid backend, and if it didn’t, handle it somehow. There may be a more optimal solution, but what I’ve come up with is this, which seems effective:

@unmatched `type({forward}) != string`
handle @unmatched {
  # Generate some kind of response here for unhandled hostnames
}
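For the first option, a sketch of what the default key might look like (fallback:80 is just a placeholder upstream, not anything from your setup):

```caddyfile
map {host} {forward} {
	service1.example.com service1:80
	default              fallback:80
}
```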

Thanks for the pointers. It seems to be working, except the SSL cert it’s getting is for example.com, not *.example.com.

From what I can read in the docs, I thought the site block address being *.example.com would pull the correct cert? I’ve also seen something about a tls directive, but I’m not sure if it’s needed or how to use it.

This is what I’ve got so far:

(secure) {
	forward_auth {args[0]} authelia:9091 {
		uri /api/verify?rd=https://auth.example.com
		copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
	}
}

*.example.com {
	import secure *

	map {host} {forward} {
		"service1.example.com" service1:80
		"service2a.example.com" service2:8000
		"service2b.example.com" service2:8000
	}

	@unmatched `type({forward}) != string`
	handle @unmatched {
		redir https://www.google.com/
	}

	reverse_proxy {forward}
}

You’re right about that:

If using the Caddyfile, Caddy takes site names literally with regards to the certificate subject names. In other words, a site defined as sub.example.com will cause Caddy to manage a certificate for sub.example.com , and a site defined as *.example.com will cause Caddy to manage a wildcard certificate for *.example.com .

https://caddyserver.com/docs/automatic-https#wildcard-certificates

But there’s one additional bit of nuance down at the end I think you might be missing:

Note: Let’s Encrypt requires the DNS challenge to obtain wildcard certificates.

Since I don’t see any DNS validation configured, I’m assuming that’s the issue.
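For reference, with a Caddy build that includes a DNS module such as github.com/caddy-dns/cloudflare, enabling the DNS challenge is typically a tls block along these lines (the environment variable name here is just an example — use whatever you configure for your API token):

```caddyfile
*.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	# ... rest of the site block (import secure *, map, reverse_proxy, etc.)
}
```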

Alright, so that involves building Caddy with the Cloudflare plugin (I’m using Cloudflare).

Currently using TTeck’s Caddy LXC script, I imagine rebuilding caddy may interfere with the update process.

Do you know of any ways around this?

Looks like they just put it in a Debian LXC with the official Caddy apt repository added, and the “upgrade script” just runs apt update and apt -y upgrade. That would indeed cause apt to grab a newer-but-not-customised binary whenever new releases come out, so yeah, it’d be a problem.

That said, there is a useful tool you can make use of here: dpkg-divert essentially lets you install a custom binary in place of the default one, and the package manager won’t replace it.

We’ve got some instructions on how to do that here:

After setting things up this way, you can keep running the update script for the other packages in the LXC, but for Caddy itself you’ll want to shell in and run caddy upgrade, which will have the custom Caddy grab an up-to-date replacement for you, including your custom modules.
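From memory, the gist of those instructions is something like the following — double-check against the linked docs before running, since the exact paths and priorities come from the Debian package layout:

```shell
# Divert the packaged binary aside so apt upgrades don't clobber the custom one
sudo dpkg-divert --divert /usr/bin/caddy.default --rename /usr/bin/caddy

# Install the custom build alongside it
sudo cp ./caddy /usr/bin/caddy.custom

# Let update-alternatives point /usr/bin/caddy at whichever build you prefer
sudo update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.default 10
sudo update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.custom 50
```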

That sounds like a solution. I’m no expert in Debian, but I’m learning. Would there be a way to “hook” caddy upgrade to run after an apt update/apt upgrade?

Followed the DPkg Post-Invoke example here and the APT update example, so that caddy upgrade runs after every DPkg operation and every apt update.

Not a good general solution, but for my case it’s fine as the LXC is only running Caddy. This way the cron updater should still work.
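For anyone else doing this, the hook ends up as an apt configuration snippet along these lines — the file name is arbitrary; DPkg::Post-Invoke and APT::Update::Post-Invoke are the relevant apt.conf options:

```
# /etc/apt/apt.conf.d/99-caddy-upgrade  (example file name)
# Run caddy upgrade after any dpkg run and after every apt update
DPkg::Post-Invoke { "command -v caddy > /dev/null && caddy upgrade || true"; };
APT::Update::Post-Invoke { "command -v caddy > /dev/null && caddy upgrade || true"; };
```

The `|| true` keeps a failed upgrade check from breaking the apt run itself.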

Just one more thing: is there an OR operator for the map values?

No. But again, if two hosts map to the same upstream, you can just have one line for each host and repeat the upstream for it.

So if I have already done the procedure here for a custom build with Cloudflare DNS, but now I want to add CrowdSec, do I do it all the same except skip the dpkg-divert?

If you’ve already set up dpkg-divert then you’re already good on that front; it only needs to be done once for each host you need it for.

You can just run caddy add-package github.com/hslatman/caddy-crowdsec-bouncer and you should be away.

I’m assuming I couldn’t have run that the first time to add Cloudflare DNS?

You could’ve run the command, sure, but I figured when you set up the dpkg-divert you probably would’ve just downloaded a custom binary from the website with all the addons you needed at once.

Now that the custom binary is in place and won’t be replaced by apt, you can add, remove, upgrade it, etc. without worrying about replacement.

Yeah, good call — thank you.

Rather than use dpkg-divert, you can put your custom caddy binary in /usr/local/bin and use a systemd unit override to change the path to the binary for the service. That’s how I am using the tteck caddy LXC script with a custom caddy binary.

root@caddy-vm:~# cat /etc/systemd/system/caddy.service.d/override.conf 
[Service]
ExecStart=
ExecStart=/usr/local/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=
ExecReload=/usr/local/bin/caddy reload --config /etc/caddy/Caddyfile --force
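A sketch of the steps to wire that up, assuming the custom build has already been downloaded as ./caddy (paths per the override above):

```shell
# Install the custom binary where the override expects it
sudo cp ./caddy /usr/local/bin/caddy
sudo chmod +x /usr/local/bin/caddy

# Create the drop-in (systemctl edit caddy opens an editor for it), then:
sudo systemctl daemon-reload
sudo systemctl restart caddy
```

Since apt only manages /usr/bin/caddy, the binary in /usr/local/bin is left untouched by package upgrades.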