Caddy Lab / Setup Examples

Like this:

app1.example.com {
	reverse_proxy 192.168.1.10:8001
}

app2.example.com {
	reverse_proxy 192.168.1.10:8002
}

I’ve never heard of this term before :man_shrugging:

It says “connected”, so no. Was your firewall allowing the connection but preventing traffic, or something? :astonished:

You have : in there after the domains. Remove that.

Yes, this uses the incoming request’s Host there.

You can either just explicitly write https://caddytest.cf{uri} or use label placeholders like https://{labels.1}.{labels.0}{uri}, where labels are the segments of the hostname, counted starting from the right.

We have an example in the docs that shows how to do this (but while typing this up I realized there was a small mistake, the . was missing between the label placeholders – fixing it here https://github.com/caddyserver/website/pull/228/commits/7097352b005fa6f7b2bc7653ee03bdb7bbce4b34):
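As a minimal sketch of that pattern (hypothetical domain): for a request to www.caddytest.cf, {labels.0} is cf and {labels.1} is caddytest, so this strips the www prefix while preserving the path:

www.caddytest.cf {
	redir https://{labels.1}.{labels.0}{uri}
}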

Depends. You might need try_files if your site is JS with a frontend router (react-router or something), and probably should enable encode gzip for compression. You might need CORS headers depending on what you’re doing, etc. It’s all very usecase-specific.

You’re talking about a TCP proxy – Caddy’s default build doesn’t do this, because it’s an HTTP layer proxy.

You can use caddy-l4 (GitHub: mholt/caddy-l4), a Layer 4 (TCP/UDP) app for Caddy, if you need a TCP/UDP proxy.

@francislavoie Thanks again…

OOPS… I didn’t ask the question clearly… I meant on the back end receiving proxy connections (maybe from Caddy, but could be any reverse-proxy running on another machine). I’m assuming the machines are connected by a VLAN with no other tenants using the VLAN.

I found this writeup:

Can Caddy be on multiple IP networks simultaneously - One for the public side, and one for the private side?

Imagine the router being 192.168.1.1, and the public side of the frontend being 192.168.1.2. The private side of the front end would be 192.168.0.1 and 192.168.0.2, which would connect to the back ends on 192.168.0.3 and 192.168.0.4. The 192.168.0.0/24 subnet would be in a VLAN, so the likelihood of anyone being able to snoop would be negligible, but even if it was possible to snoop (by taking over the switch or compromising the front end), the traffic would still be encrypted by using:

:443 {
	tls internal {
		on_demand
	}
}

wouldn’t it? I can see a more hardened environment for generating certificates if the environment was a cloud platform with multiple tenants, but in a home environment? Am I missing something?

Thanks… your solution was perfect!

802.1Q tagged VLAN is what most managed switches use. They add a tag to the beginning of the frames to segregate traffic. If the network is configured properly, it allows multiple broadcast domains on the same physical network with complete isolation, as if they were separate networks. If someone is able to break out of a VLAN, they likely own your whole network. If that happens, I don't think a more robust certificate generation strategy will make much difference.

:100: :+1:

I’ll see if I can figure these out… I must say that I’m a bit confused about try_files. I looked at the docs but without more robust examples I don’t get it.

I guess that would mean it would be necessary to build/compile Caddy?

END of reply

I managed to find this:
How to Install and Configure Caddy Web Server with PHP and MariaDB on Ubuntu 20.04
https://www.howtoforge.com/tutorial/ubuntu-caddy-web-server-installation/#step-install-php
and get PHP/MariaDB working – at least the test page displayed properly and it was possible to connect to the database. This is an example of a great tutorial! Easy to follow and it worked!

I went through php_fastcgi (Caddyfile directive) — Caddy Documentation and I must say that I was totally confused by the references to port 9000. Is this an obsolete method of installing PHP-FPM? When I installed PHP using apt, the resulting install used UNIX sockets. There are examples scattered through the Caddy site using :9000, and some using the UNIX sockets method, with no background or explanation of when to use what.

I’m just about ready to clean this thread up and move it to the wiki as a tutorial. I assume it uses the same sort of markdown as the forum? Is there an easy way for me to get all the text out, or do I have to go in small bits and reformat? I know some of the posts are locked from editing, so I can’t open an edit window and scrape.

I’m wondering if you have had any experience with client-side certificates? For private self-hosted sites they seem like a good idea for access control. IIUC the site won’t even connect unless the client has a valid cert. If the CA is inside your firewall, it would be impossible for anyone to generate a cert unless they had already been inside and stolen one. Back that up with passwords and you should have pretty good security. Any idea if Android handles client certs properly?

BTW… do you use any kind of private CA for your home machines? How do you manage the security of your private keys?

I still don’t really know what you mean.

Yes – Caddy binds to all interfaces by default, but you can configure that with the bind directive to serve different sites from different interfaces.
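A rough sketch of what that looks like (hostnames and addresses hypothetical) – one site bound to a private interface, another to a public one:

private.example.com {
	bind 192.168.0.1
	respond "internal only"
}

public.example.com {
	bind 192.168.1.2
	respond "public side"
}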

Basically it’s “try to see if a file exists for this pattern; if it does, rewrite the URL to that; if not, try the next one (until none are left)”.

I think the best explanation I’ve written about it is in the php_fastcgi docs, since it’s a usecase specific to modern PHP apps:

For JS apps, SPAs (Single-Page Apps), it’s often necessary to do something like try_files {path} /index.html so any request to a path that doesn’t exist on disk instead just serves the index.
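Putting that together, a minimal SPA site might look like this (domain and root path hypothetical): any request whose path doesn't match a file on disk gets rewritten to /index.html, so the frontend router can take over.

app.example.com {
	root * /srv/app/dist
	encode gzip
	try_files {path} /index.html
	file_server
}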

Yes. But it’s very easy. Either use the Download Caddy page to grab a custom build, or install Go (super easy, works on every OS) and xcaddy (a single static binary, just like Caddy), which automates the process of producing a build. See Build from source — Caddy Documentation.
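For example, assuming Go and xcaddy are already installed, a custom build including caddy-l4 can be produced with a single command, which leaves a caddy binary in the current directory:

xcaddy build --with github.com/mholt/caddy-l4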

Note that caddy-l4 does not work with Caddyfile config at this time, you’d need to use JSON config.

It’s not obsolete. It’s done that way a lot of the time. For example in Docker, it’s the easiest way to connect a web server container to the separate FPM container.

What you should use entirely depends on how PHP-FPM was installed. Some distros will use TCP by default, others will use unix sockets. Caddy supports both, you just need to figure out which one to use according to your environment.
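For illustration (domain and socket path hypothetical – the Debian/Ubuntu socket path varies by PHP version), the two variants look like this in a Caddyfile:

example.com {
	# TCP, e.g. a separate FPM container listening on port 9000:
	# php_fastcgi 127.0.0.1:9000
	# Unix socket, the typical Debian/Ubuntu apt default:
	php_fastcgi unix//run/php/php8.1-fpm.sock
	file_server
}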

The wiki is literally just another topic/thread, but in the Wiki category, and with a flag turned on so anyone can edit the post.

You should be able to just copy (Ctrl+C) the content you wrote and paste it into the editor box in the other topic. It should preserve formatting.

Yep, Caddy supports client certs, see the tls directive.

Caddy doesn’t manage issuance of client certs though, so that’s up to you to manage. Just plug in your root cert into Caddy’s config and it’ll restrict access to clients who present a trusted client cert.
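A minimal sketch of that, assuming a hypothetical domain and a root cert you've placed at /etc/caddy/client-root.crt – Caddy will reject TLS handshakes from clients that don't present a cert signed by that root:

example.com {
	tls {
		client_auth {
			mode require_and_verify
			trusted_ca_cert_file /etc/caddy/client-root.crt
		}
	}
	respond "hello, authenticated client"
}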

But client certs can be annoying to deal with cause it requires manual steps to install on devices. It makes sense if you’re tech savvy enough and don’t mind the rigamarole.

Well, when you use tls internal, you are using a “private CA”, managed by Caddy, which it uses to issue server certs as needed. Just make sure nobody else has access to Caddy’s storage location.

Let me try to explain with this crude Ascii Diagram (forget the other example). BackEnd1/2 are 2 Caddy servers, and FrontEnd is a third server to handle guarding the edge.

 BackEnd 1      |         Front End            |  Router    |
192.168.0.3------192.168.0.2-|
                             |
                             |--192.168.1.2-----192.168.1.1
 BackEnd 2                   |
192.168.0.4------192.168.0.1-|
    VLAN NO OTHER DEVICES 

Front end is bound to 2 interfaces - 1 on the front end and 1 on the back end.

I don’t see any reason for BACKEND1 or BACKEND2 (2 Separate machines or containers) to use anything more complicated than:

:443 {
	tls internal {
		on_demand
	}
}

BackEnd1/2 don’t need direct access to the internet, and 192.168.0.0/24 has only the two backend machines and the back end of the FrontEnd to worry about. Traffic is still encrypted with Caddy’s self-signed certs, so casual traffic snooping isn’t possible. Am I missing something, or does adding the complexity of Let’s Encrypt add any value? I think it just creates additional attack vectors if something isn’t set up correctly or otherwise goes wrong.

Another question just occurred to me. How many certs are required by the front end? Can the same cert be used for 192.168.0.1, 192.168.0.2, and 192.168.1.2?

Thanks for clarifying… If setting up on an unfamiliar distro, is there any easy way to figure out what method is being used, other than hoping it’s easy to find in the docs? (I’m sure I’m showing my ignorance here, but the relevant Caddy example is to just run journalctl -u caddy right after startup and all the file paths are right there.)

Agreed – just wondering if you’ve ever done it with Android? Does Android work properly?

I’ve used certs with OpenVPN on pfSense and they are great. With a UDP server and a TLS Auth cert the server ignores any probe attempts. You need the right key to the get the opportunity to provide a USER key, and then you enter your user password. pfSense has a nice program that exports a cert bundle that is very easy to import into OpenVPN. It would be nice to be able to do that for private web sites run on Caddy.

Okay – what’s running on the frontend though? Another Caddy instance? That’s what I was missing from your explanation.

Probably not for you in this case. And it wouldn’t add any attack vectors, I don’t see how it could.

But a better approach is probably to have the frontend act as the ACME CA for your backends, and do mutual TLS (mTLS) between them. This wiki explains it well, as I think you’ve seen:

The benefit of this is trust, and you’re only managing a single CA instead of a CA per backend, of which you need to copy the root cert to the frontend.
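A rough sketch of that arrangement (hostnames hypothetical): the frontend exposes an ACME endpoint backed by its internal CA via the acme_server directive, and each backend points its issuer at that endpoint with the acme_ca global option. On the frontend:

ca.internal {
	tls internal
	acme_server
}

And on each backend (the default internal CA's id is "local", hence the directory path; the backend also needs to trust the frontend CA's root cert, e.g. via the acme_ca_root global option, for the connection to the ACME endpoint to succeed):

{
	acme_ca https://ca.internal/acme/local/directory
}

app.internal {
	respond "backend"
}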

Caddy doesn’t support multi-SAN certs (i.e. multiple names in one cert), so it’ll be one cert per name you’re using.

Check the php-fpm config. It’ll have a listen = line which tells you how it’s running. Or check the systemd service for php-fpm and it should probably tell you as well in its logs.

Yes it works properly.

What I illustrated was a Caddy instance, and that’s what my original question was aimed at.

Now that you mention it, would anything change (using tls internal with on_demand vs. using certs from a widely trusted CA) if the backend instances were connected to HAProxy? If I set servers up behind my firewall, I will likely have to use HAProxy for the front end because that’s what’s built in to pfSense. I don’t like the idea of anything coming directly into my network without some sort of gatekeeper.
[quote="francislavoie, post:26, topic:15915"]
But a better approach is probably to have the frontend act as the ACME CA for your backends, and do mutual TLS (mTLS) between them. This wiki explains it well, as I think you’ve seen:

[/quote]
I’m still trying to get my head around this… I may be back to you after I’ve had a bit more time to look at it.
mTLS - is that just the technical term for “using a client certificate” or is there more to it than that?

The only thing I found in the php-fpm config was this comment block:

; Specify the event mechanism FPM will use. The following is available:
; - select     (any POSIX os)
; - poll       (any POSIX os)
; - epoll      (linux >= 2.5.44)
; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)
; - /dev/poll  (Solaris >= 7)
; - port       (Solaris >= 10)

; Default Value: not set (auto detection)
;events.mechanism = epoll

I also ran across this Video and write-up…

Actually, no client certs here. Rather you configure the frontend’s reverse_proxy to trust the upstream’s (backend’s) cert, and that upstream cert happens to be signed by the CA that the same frontend instance manages.

It’s mutual because the backend trusts the frontend as its CA, and the frontend trusts the backend cause it used its CA.
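On the frontend, that trust might be configured roughly like this (backend address and root-cert path hypothetical) – the transport is told which CA root to accept when verifying the backend's cert:

frontend.example.com {
	reverse_proxy https://192.168.0.3 {
		transport http {
			tls_trusted_ca_certs /etc/caddy/backend-root.crt
		}
	}
}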

You might be looking at the wrong config file then :thinking:

I don’t think so… From what I understand it involves something called PHP pools… the configs that I have in place don’t have much, if anything, configured… IIRC there was something that said pools wasn’t configured – which matches what I found when I looked.

Pools are basically just a set of workers, child processes that run PHP. You can configure more than one pool in case you’re doing some advanced stuff and you’re either running multiple sites with one server, or need different worker settings depending on the kind of request being made. Large majority of people (especially self-hosters) just need a single pool. The default pool is usually called www.

Are you sure you don’t see something like listen = 127.0.0.1:9000 in any of the php-fpm config files? It’s pretty dependent on how you installed it what the defaults are, like I said, but it definitely should be in there somewhere. Might be somewhere like /etc/php/8.1/fpm/pool.d/www.conf.
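A quick way to hunt for it is to grep the pool config directory for the listen setting. This demo creates a hypothetical sample pool file under /tmp so the output shape is visible; on a real system you’d point grep at /etc/php/*/fpm/pool.d/ instead:

```shell
# Create a sample pool config to show what the line looks like
mkdir -p /tmp/fpm-demo/pool.d
printf '[www]\nlisten = /run/php/php8.1-fpm.sock\n' > /tmp/fpm-demo/pool.d/www.conf

# Search for the listen setting (use /etc/php/*/fpm/pool.d/ on a real install)
grep -R "^listen" /tmp/fpm-demo/pool.d/
# → /tmp/fpm-demo/pool.d/www.conf:listen = /run/php/php8.1-fpm.sock
```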

This topic was automatically closed after 30 days. New replies are no longer allowed.