Are there more caddyfile revproxy examples I haven't seen?

I didn’t find many examples online so far and I’m just not completely sure about what I have going on here. I’d love to see some examples of Caddyfiles for reverse proxy setups so I can figure this out. Here’s my setup:

  • Ubuntu Server 20.04 bare metal
  • Homemade pfsense machine using ACME and HAproxy
  • Multiple apps running all inside docker containers
  • Would be running Caddy2 via docker as well

So presently I’m using HAproxy to run all my front and backends. I’ve got it setup like this:
app1 = xxx.xxx.xxx.xxx:10000
app1.mydomain.com → HAproxy frontend → HAproxy backend for app1 → app1 is accessed

And I have a front and backend for every app. It’s… really complicated and bulky feeling. The author of caddy chimed in and suggested I try it since it would be easier, and so I’m here.

So I’m going to set up caddy in a docker-compose.yml. But once it’s up and running, then what? I create a Caddyfile which will have the config? My setup is as described above. I have ddns set up through my pfsense machine and I’ve also got rules which leave 80 and 443 open. I need to create a Caddyfile which will map out:

  1. app1.mydomain.com = xxx.xxx.xxx.xxx:10000
  2. app2.mydomain.com = xxx.xxx.xxx.xxx:10001
  3. app3.mydomain.com = xxx.xxx.xxx.xxx:10002

And so on and so forth. Can I either be (a) walked through this or (b) linked to a resource which has caddyfile examples such as mine? I’ve got to think my use case is laughably simple for caddy.

Thanks!

Yup, you’re on the right track. It’s as simple as this, in your Caddyfile:

app1.mydomain.com {
    reverse_proxy <container_name>:10000
}

app2.mydomain.com {
    reverse_proxy <container_name>:10001
}

app3.mydomain.com {
    reverse_proxy <container_name>:10002
}

Please be sure to also read the Docker documentation so that you understand the specifics of running Caddy in Docker (bind ports 80 and 443 to the host, and mount volumes for /data and /config): Docker Hub


Thank you for the reply. I’m going to play around with this a little. Knowing how much trouble it is to set up HAproxy (and having done it), if this is all there is to caddy, frankly I’m going to be blown away lol.


Yep! It usually is that simple. It depends on whether you have extra requirements for each app, though; you’ll need to elaborate on what each of your services does for us to know if anything more complex is required.

Maybe you should share a sample of your haproxy config to compare?

Well I’m not quite there yet. I can’t figure out how to set up caddy in docker. I read the docker documentation on it, but it’s not making a lot of sense.

Here is what the docs say:

docker run -d -p 80:80 -p 443:443 \
    -v /site:/usr/share/caddy \
    -v caddy_data:/data \
    -v caddy_config:/config \
    caddy caddy file-server --domain example.com

But when I run it through composerize, I get:

version: '3.3'
services:
    file-server:
        ports:
            - '80:80'
            - '443:443'
        volumes:
            - '/site:/usr/share/caddy'
            - 'caddy_data:/data'
            - 'caddy_config:/config'
        image: file-server

I know “file-server” is not a valid image name to pull, so this is definitely wrong. Lastly, I’ve no idea where to put my caddyfile. I know, it’s crazy that it’s this simple and I still don’t get it. I know.

caddy is the image; caddy file-server --domain example.com is a command override, used if you want to run CLI commands instead of using a Caddyfile (not what you want here).

Your docker-compose service would look more like this:

services:
  caddy:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - caddy_config:/config
      - ./Caddyfile:/etc/caddy/Caddyfile
    ports:
      - 80:80
      - 443:443

ok, making a little bit more sense. So where does the actual caddyfile live? You’ve got “./Caddyfile” bind-mounted, but I don’t understand that syntax in this context. “./” is what I use to execute a script in my pwd.

./ in the context of docker-compose just means “relative to the docker-compose.yml” — i.e. mount the Caddyfile sitting next to your docker-compose.yml to /etc/caddy/Caddyfile in the container. So basically, make a Caddyfile beside your docker-compose.yml before running docker-compose up -d
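To make that concrete, here is a sketch of the layout described above (the directory name is just an example):

```
caddy/                    <- any directory you like
├── docker-compose.yml    <- the compose file above
└── Caddyfile             <- your site config, mounted to /etc/caddy/Caddyfile
```

Then `docker-compose up -d` run from inside that directory picks up both files.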

Thank you for that clarification. OK, I’m ALMOST there. Here’s what pfsense is giving me when I try to access “myhost.mydomain.com”:

[screenshot: pfSense rebinding warning]

I can disable rebind protection but I’d rather not. Do I have something with caddy misconfigured?

UPDATE: Actually, when I access “myhost.mydomain.com” it takes me to my pfsense login and not the app I specified in my Caddyfile.

Compose:

##CADDYTEST
#
  caddy:
    container_name: caddy
    ports:
      - 80:80
      - 443:443
    volumes:
      - /home/mike/docker/caddy_data:/data
      - /home/mike/docker/caddy_config:/config
      - ./Caddyfile:/etc/caddy/Caddyfile
    image: caddy

Caddyfile:

test.mycustomdomain.net {
    reverse_proxy ombi_test:19020
}
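One thing worth double-checking (this is a general Docker note, not something from your logs): Caddy can only resolve the name ombi_test if both containers share a Docker network. If they’re defined in the same docker-compose.yml they share the default network automatically; if they live in separate compose files, a sketch of attaching both to a shared external network (the network name “proxy” here is just an example) would be:

```yaml
# Hypothetical sketch for the caddy compose file; add the same
# "networks" entry to the ombi_test service in its own compose file.
services:
  caddy:
    image: caddy
    networks:
      - proxy
    # ...ports and volumes as in your existing service...

networks:
  proxy:
    external: true   # created once beforehand with: docker network create proxy
```

With both containers on that network, `reverse_proxy ombi_test:19020` can resolve the container name.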

Thanks for seeing me through this mate:)

What do your Caddy logs say? You can use docker-compose logs caddy to see what’s going on.

2020/09/04 02:17:20 [ERROR] attempt 1: [test.redacted.net] Obtain: [test.redacted.net] acme: error: 429 :: POST :: https://acme-v02.api.letsencrypt.org/acme/new-order :: urn:ietf:params:acme:error:rateLimited :: Error creating new order :: too many failed authorizations recently: see https://letsencrypt.org/docs/rate-limits/, url:  - retrying in 1m0s (8.379838564s/720h0m0s elapsed)...

I’m vaguely familiar with this from setting up acme on my pfsense machine. Do I just have to wait an hour or something?

From https://letsencrypt.org/docs/rate-limits/:

There is a Failed Validation limit of 5 failures per account, per hostname, per hour. This limit is higher on our staging environment, so you can use that environment to debug connectivity problems. Exceeding the Failed Validations limit is reported with the error message too many failed authorizations recently.

To use the staging endpoint for the moment, add this to the top of your Caddyfile:

{
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

I added -f to the log command, and a couple minutes passed, then it tried again, but now I get:

[test.redacted.net] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from https://test.redacted.net/.well-known/acme-challenge/3A6J0pB_SKww8EV7KFJ0E5rIapOC1-lXOSDfv5h4c3M [96.244.135.235]: “\r\n404 Not Found\r\n\r\n404 Not Found\r\n\r\nnginx\r\n”, url:

I don’t think you have port 80 and 443 properly routed to Caddy. Let’s Encrypt is failing to validate the ACME challenge confirming that you own that domain, because it can’t reach your server.

I don’t know much about pfsense unfortunately, so I’m not sure what to suggest.

ok, I added that.

ok, so let me have a look at pfsense then. I’ll report back - probably tomorrow.

Thank you so much!

There’s a lot of possible issues happening here. I run a pfSense gateway myself, with port forwarding to a server running Caddy, so I might be able to assist. You mentioned rebinding protection, but then said you weren’t getting that any more?

Can you clarify a few things:

  • What do you see when you browse to your domain name from outside your LAN (i.e. from the internet)?
  • Are you still seeing rebind protection errors from pfSense when browsing to your domain name from inside your LAN?

Oh thank god lol. Yeah, I messed with it last night and just came up empty. I’ll be very descriptive.

The rebinding issue: Once I had caddy set up and tried to access my test docker app at “test.mydomain.com” I initially got the rebinding error. I didn’t really know what this was, so I went into pfSense → Settings → Admin and disabled rebinding protection for testing purposes. I then accessed “test.mydomain.com” and it brought me to my pfsense login page. I immediately understood what the rebinding protection was at that point. I also am really uncomfortable knowing that my pfsense login is visible to WAN traffic - very, very disturbing. So that’s the full and updated context of the rebinding thing.

Currently if I browse to my domain name, I am greeted with the rebinding warning (which as I discovered really just means I’m connecting to my gateway).

I am able to access my docker test container internally via 192.168.1.205:19020 which is how I have it set in docker compose.

I can ping the domain and it returns my WAN IP. I have this test domain set up the same way with Namecheap as the others which have been working (through haproxy).

[screenshot: ping output]

I also have pfSense → Firewall → Rules set to allow 443 and 80 as WAN pass. Definitely open.

It’s not. There’s a bit of a story here.

You browse to example.com. Your computer resolves that to an IP address. It notes that the resolved IP is not local (i.e. it’s on the internet; it needs to go out through the gateway). Your computer sends the traffic destined for the internet to your gateway, which will know how to send it on the next step towards its final destination.

Your gateway is the pfSense box. It receives the traffic from your computer. The traffic from your computer came in on the LAN interface (take note of this!). pfSense inspects the traffic’s intended destination, intending to compare it against its routing table to figure out where to forward it next. pfSense instead learns that the destination is its own public IP address! pfSense keeps the traffic and processes it.

As a result of this whole escapade, pfSense has now received HTTP(S) traffic on its LAN interface for processing. Guess what? It hosts its own web server on the LAN interface. Boom - pfSense login (OR, rebinding protection if that’s enabled, which it should be).


So how do you fix this?

There are two ways. Split DNS is the more complicated, but more efficient, option. It involves instructing your DNS resolver to return local IP addresses for domains that have hosts inside the LAN.

Split DNS here effectively means “you get one answer for example.com on LAN, and a different answer on WAN”. It would make a request for example.com go straight to your Docker host, instead of out to the internet.
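On pfSense this is typically configured as a Host Override under Services → DNS Resolver. Under the hood it amounts to an unbound local-data record like the following (hostname and LAN IP taken from earlier in this thread, so substitute your own):

```
# unbound: answer LAN clients with the local address instead of the WAN IP
local-data: "test.mycustomdomain.net. A 192.168.1.205"
```

With that in place, LAN clients resolving test.mycustomdomain.net go straight to the Docker host and never hit the gateway at all.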

The other option is hairpin NAT, which is not enabled by default on pfSense. Hairpin NAT takes traffic that pfSense receives on the LAN interface and, if the destination IP address is the WAN’s, re-feeds the packets through the firewall. This double-processing has minor performance overhead, but means that pfSense will treat your traffic as though it had come from the WAN instead of internally. That in turn means that your request should be port forwarded properly.


ok, I understand what you’re saying about the local traffic. I’ve never heard of hairpin NAT. I am, however, using NAT reflection (pure NAT) in my settings so that I could access my Nextcloud installation locally from my phone without needing to change anything.