Caddy hardware and network requirements for reverse proxy

1. Caddy version:

v2.4.6 h1:HGkGICFGvyrodcqOOclHKfvJC0qTU7vny/7FhYp9hNw=

2. How I installed, and run Caddy:

Docker on Raspberry Pi 3

a. System environment:

Raspberry Pi 3 - Raspbian/Raspberry Pi OS Bullseye

b. Command:

Dockerfile

FROM caddy:builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare

FROM caddy:latest

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

Build Docker image
sudo docker build -t caddy_cloudflare:1.5 .

Run Caddy
docker compose -f caddy/docker-compose.yml up -d

c. Service/unit/compose file:

version: "3.8"
services:
  caddy:
    image: caddy_cloudflare:1.5
    container_name: caddy
    hostname: caddy
    env_file:
      - ../.env
      # Add CLOUDFLARE_API_TOKEN to secret.env
      #  Token permissions:
      #    (Zone/DNS/EDIT)
      #    (Zone/Zone/Read)
      - ./secret.env
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./data:/data
      - ./config:/config

d. My complete Caddy config:

{
	# General Options
	debug
}

# Externally Accessible

# Sigma services
home.{$DOMAIN} {
  reverse_proxy {$IP_SIGMA}:8123
}

# Omega services
music.{$DOMAIN} {
    reverse_proxy {$IP_OMEGA}:4533
}

watch.{$DOMAIN} {
    reverse_proxy {$IP_OMEGA}:8096
}

read.{$DOMAIN} {
    reverse_proxy {$IP_OMEGA}:5000
}

foundry.{$DOMAIN} {
    reverse_proxy {$IP_OMEGA}:30000
}

quest.{$DOMAIN} {
    reverse_proxy {$IP_OMEGA}:30001
}

audiobook.{$DOMAIN} {
  reverse_proxy {$IP_OMEGA}:13378
}

# Shortcodes
# Sigma Services
http://dashboard,
http://home,
http://portainer,
http://pihole,
# Dream Machine Services
http://unifigui,
http://udm,
# Omega Services
http://music,
http://watch,
http://audiobook,
http://read {
    redir https://{host}.{$DOMAIN_LAN}
}

# Internal
# Wildcard cert for internal services
*.{$DOMAIN_LAN} {
  tls {$EMAIL_ADDRESS} { 
    dns cloudflare {$CLOUDFLARE_API_TOKEN}
  }

  # Sigma services
  @heimdall host dashboard.{$DOMAIN_LAN}
  handle @heimdall {
    reverse_proxy {$IP_SIGMA}:8143 {
      transport http {
        tls_insecure_skip_verify
      }
    }
  }

  @home host home.{$DOMAIN_LAN}
  handle @home {
    reverse_proxy {$IP_SIGMA}:8123
  }

  @portainer host portainer.{$DOMAIN_LAN}
  handle @portainer {
    reverse_proxy {$IP_SIGMA}:9000
  }

  @pihole host pihole.{$DOMAIN_LAN}
  handle @pihole {
    reverse_proxy {$IP_SIGMA}:8080
    redir / /admin
  }

  # Dream Machine services
  @unifigui host unifigui.{$DOMAIN_LAN}
  handle @unifigui {
    reverse_proxy {$IP_UNIFI}:443 {
      transport http {
        tls_insecure_skip_verify
      }
    }
  }

  @udm host udm.{$DOMAIN_LAN}
  handle @udm {
    reverse_proxy {$IP_UNIFI}:443 {
      transport http {
        tls_insecure_skip_verify
      }
    }
  }

  # Omega services
  @navidrome host music.{$DOMAIN_LAN}
  handle @navidrome {
    reverse_proxy {$IP_OMEGA}:4533
  }

  @jellyfin host watch.{$DOMAIN_LAN}
  handle @jellyfin {
    reverse_proxy {$IP_OMEGA}:8096
  }

  @kavita host read.{$DOMAIN_LAN}
  handle @kavita {
    reverse_proxy {$IP_OMEGA}:5000
  }

  @audiobook host audiobook.{$DOMAIN_LAN}
  handle @audiobook {
    reverse_proxy {$IP_OMEGA}:13378
  }

}

3. The problem I’m having:

I would like to split some of my Docker containers off onto a separate Raspberry Pi that maintains stronger uptime than my other Raspberry Pi, since it will host network-critical containers (Caddy, Pi-hole, ddclient).

I wanted to know what hardware I would need to get the best performance out of Caddy. I only use it as a reverse proxy to connect to services running on a couple of home servers.

ddclient runs once every 5 minutes, so it's low overhead. Pi-hole is mostly just DNS, so its network and hardware requirements are pretty low.

I don't know exactly how reverse proxies work, but my intuition would like to believe that not all traffic has to be routed through the reverse proxy, just the traffic that sets up the initial TCP connection between client and server (please correct me if I'm wrong; I'm mostly just making stuff up).

If that is the case, then I don't think I would need a super powerful Raspberry Pi or much network speed, and could hopefully get away with a Raspberry Pi 2 which I have lying around (900MHz quad-core ARM Cortex-A7 CPU, 1GB RAM, 100Mbit Ethernet).

But if Caddy is more demanding than that (for instance, if truly all traffic is routed through the Caddy Pi), then I will purchase a newer Raspberry Pi 4 with more RAM and full gigabit Ethernet.

4. Error messages and/or full log output:

N/A

5. What I already tried:

N/A

6. Links to relevant resources:

With any kind of proxy, forward or reverse, all network traffic is directed first to the proxy on its way between the client and server.

The behaviour you described is more akin to a peer-to-peer network with a centralized server to help peers find each other, which isn’t how Caddy operates.

In terms of performance, Caddy is plenty lightweight. Web servers themselves aren't particularly resource-demanding; it's the scale of the traffic and the type of service that really determine the resources required. Since this is just a home server setup (let's say you've got fewer than a hundred people accessing your server), Caddy will require very little in the way of resources.

Reverse proxying is also not very intensive at all unless you need responses to be buffered and operated on for some reason. For the most part, Caddy just accepts a request, puts it on hold while it shoots off another one to the backend, takes the response it gets, and finishes up the initial request by handing off the response. It doesn't do much processing; it just acts as a middleman. With small amounts of traffic, you could probably do this on truly weak hardware. A quad-core with an entire gigabyte of RAM is plenty. I have a number of networking services (self-hosted Tailscale and ZeroTier, WireGuard and OpenVPN, with Caddy serving control panels for these) all running on a single-core 512MB RAM VPS. You definitely don't need new hardware just to run Caddy itself.

The only place you might be hurting is that 100Mbit Ethernet connection. The networking hardware handles almost all of the grunt work, but if you're transferring large files or streaming media through this reverse proxy, that pipe might be a little tight, especially if it's serving over LAN or if your internet pipe is faster than 100Mbit.


Thanks! This all makes sense; I didn't think the RAM/CPU would be a bottleneck, but wanted to include it for posterity.

As for the network, for context: right now I'm running Caddy on a Raspberry Pi 3 Model B, which is only slightly more powerful (quad-core Cortex-A53 @ 1.4GHz, 1GB LPDDR2 SDRAM, Gigabit Ethernet over USB 2.0 with a maximum throughput of 300Mbps).

Meaning the Ethernet maxes out at 300Mbit, so the network throughput is roughly 3x that of the Raspberry Pi 2.

My heaviest service is definitely Jellyfin, but even then I imagine watching a stream is no more than 10Mbit. The other heavy-ish services would be audio streaming and maybe Minecraft and Foundry.

All in, I might be fine on 100Mbit, but I honestly have no clue how tight it will be. I guess I can always test it out and monitor the Pi's traffic.
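For reference, here's roughly how I'd test it, just as a sketch (assuming iperf3 and nload are installed on the Pis; eth0 is an assumption for the interface name):

# On the Pi that will run Caddy: start an iperf3 server
iperf3 -s

# From another machine on the LAN: measure raw throughput to that Pi
iperf3 -c <pi-ip-address>

# Watch live interface traffic on the Pi while streaming through the proxy
nload eth0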

No worries!

Of note, Ethernet is full duplex, so that's 100Mbit down and 100Mbit up; having your proxy-on-a-stick won't halve your throughput.

You're right about Jellyfin - people watch 1080p YouTube on 20Mbit internet connections, so you can absolutely push multiple streams through 100Mbit. You won't even have trouble pushing 4K; the only problem would be if you want to push multiple 4K streams simultaneously. But then, I start to wonder whether the upstream server hardware could even push enough streams at once to threaten that pipe.

I'm pretty confident you'll be good. Give it a shot - the worst that happens is that service is a little degraded while you put in an order for better hardware, if you want to.


Also, make sure to upgrade to the latest version of Caddy. You're using a pretty old version; the latest is v2.6.4.

I think I'm having déjà vu :sweat_smile:

I'm not sure why the version is out of date. I ran that Dockerfile to create the latest image literally right before I posted this question… Is it possible the Dockerfile didn't actually pull the latest? I will try stopping the container, pruning all images, and rebuilding.


Definitely prune; Docker can sometimes cache the builder image and some of its layers and keep building the same version.
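If it helps, a minimal sequence would look something like this (the compose path is from your post; the image tag is just an example; --pull and --no-cache force fresh base images and a clean rebuild):

docker compose -f caddy/docker-compose.yml down
docker image prune -a
docker builder prune
cd caddy
docker build --pull --no-cache -t caddy_cloudflare:1.6 .
docker compose -f docker-compose.yml up -d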

I'm a little worried that even after pruning it's still building the wrong thing. Here's some output showing me checking that the prune worked (docker images) and then starting the build:

❯ docker images
REPOSITORY                                 TAG       IMAGE ID       CREATED       SIZE
pihole/pihole                              latest    e76c35b441d9   4 days ago    255MB
zwavejs/zwave-js-ui                        latest    d0dc144dd3e2   5 days ago    413MB
homeassistant/raspberrypi3-homeassistant   stable    8df5dc4bfebe   7 days ago    1.38GB
ghcr.io/linuxserver/heimdall               latest    d62bead6774b   9 days ago    119MB
linuxserver/ddclient                       latest    dbba0b2e589b   2 weeks ago   66.8MB
❯ cd caddy
❯ docker build -t caddy_cloudflare:1.6 .
[+] Building 674.1s (6/8)
 => [builder 1/2] FROM docker.io/library/caddy:builder@sha256:4019b8d6cd1cd0cb32230b90497b85695f353b935c9204a8419def54b47e3c76                                                                        86.4s
 => => resolve docker.io/library/caddy:builder@sha256:4019b8d6cd1cd0cb32230b90497b85695f353b935c9204a8419def54b47e3c76                                                                                 0.3s
 => => sha256:e0ef7240292c9f37fb35db9e3ef814677e891a0fe5cfc94dd867871dae56d73a 8.02kB / 8.02kB                                                                                                         0.0s
 => => sha256:4019b8d6cd1cd0cb32230b90497b85695f353b935c9204a8419def54b47e3c76 1.93kB / 1.93kB                                                                                                         0.0s
 => => sha256:7f0ef6f90065af13eec57b6b1601cc677bae620eff30cb42fee3c3105ff43470 1.79kB / 1.79kB                                                                                                         0.0s
 => => sha256:6fb81ff47bd6d7db0ed86c9b951ad6417ec73ab60af6d22daa604076a902629c 2.87MB / 2.87MB                                                                                                         1.8s
 => => sha256:c1ee22df4e0527c9d317c1729808b4fc333083ad4d5a41645d79beabb64e9415 285.35kB / 285.35kB                                                                                                     1.2s
 => => sha256:0083f2e9af62d9a5b5bcf25401158f3ebb34f9fe9e7e951a52b22e9f6a15f1ed 118.48MB / 118.48MB                                                                                                    25.6s
 => => extracting sha256:6fb81ff47bd6d7db0ed86c9b951ad6417ec73ab60af6d22daa604076a902629c                                                                                                              3.6s
 => => sha256:6506d85184bedd4e64bb66e1357511d9dde4bc32c2c99d3183c2cec84f20e7e5 125B / 125B                                                                                                             2.8s
 => => sha256:2cc4386697a4bb55f0e63b324c4a0b444eec215aff0cffc35d7affc50fff2041 3.72MB / 3.72MB                                                                                                         6.3s
 => => sha256:8720d28c193064fdd202319cb111f8e962c31c7a216b24c44f4c4fccd7e992f5 1.16MB / 1.16MB                                                                                                         7.1s
 => => extracting sha256:c1ee22df4e0527c9d317c1729808b4fc333083ad4d5a41645d79beabb64e9415                                                                                                              1.0s
 => => sha256:10b69f043ff29e0a57695a0aaed9557699009894f310acd79f55b35b10b10e5b 406B / 406B                                                                                                             6.8s
 => => extracting sha256:0083f2e9af62d9a5b5bcf25401158f3ebb34f9fe9e7e951a52b22e9f6a15f1ed                                                                                                             53.0s
 => => extracting sha256:6506d85184bedd4e64bb66e1357511d9dde4bc32c2c99d3183c2cec84f20e7e5                                                                                                              0.0s
 => => extracting sha256:2cc4386697a4bb55f0e63b324c4a0b444eec215aff0cffc35d7affc50fff2041                                                                                                              2.5s
 => => extracting sha256:8720d28c193064fdd202319cb111f8e962c31c7a216b24c44f4c4fccd7e992f5                                                                                                              0.5s
 => => extracting sha256:10b69f043ff29e0a57695a0aaed9557699009894f310acd79f55b35b10b10e5b                                                                                                              0.0s
 => [stage-1 1/2] FROM docker.io/library/caddy:latest@sha256:87cbd356af2e6eef38b41b6ab7e7b0fc142ae97de0bffcc6cea257671823070c                                                                         23.5s
 => => resolve docker.io/library/caddy:latest@sha256:87cbd356af2e6eef38b41b6ab7e7b0fc142ae97de0bffcc6cea257671823070c                                                                                  0.4s
 => => sha256:87cbd356af2e6eef38b41b6ab7e7b0fc142ae97de0bffcc6cea257671823070c 1.93kB / 1.93kB                                                                                                         0.0s
 => => sha256:0a66f97fe167ebcccd1754f2cc7ed8d5f42595f332f60d75e4efd33c5fa880b9 1.16kB / 1.16kB                                                                                                         0.0s
 => => sha256:0940efa30038d45d9b6cbae2aea1235accdd5e653948a6e40e9c3edf6314d4c6 7.81kB / 7.81kB                                                                                                         0.0s
 => => sha256:beefe5ad637c7db32e6afc68103fc4e779630219979216a625338ab55f7d191c 2.42MB / 2.42MB                                                                                                         0.8s
 => => sha256:4ec2ab4edb1f0a14b28a55f050d62d8e54111075639a0fa8f173448814ba0d00 342.59kB / 342.59kB                                                                                                     0.5s
 => => sha256:0580184dd0082f009564a7a3923731f202cbe4bd8b857dbca2c77da8e7434815 7.48kB / 7.48kB                                                                                                         0.3s
 => => sha256:24b2a44f2792add2104a1d53c68f1df95779deaee3ec86717f4fb4efd9aa6aaa 13.59MB / 13.59MB                                                                                                       5.3s
 => => extracting sha256:beefe5ad637c7db32e6afc68103fc4e779630219979216a625338ab55f7d191c                                                                                                              1.4s
 => => extracting sha256:4ec2ab4edb1f0a14b28a55f050d62d8e54111075639a0fa8f173448814ba0d00                                                                                                              1.0s
 => => extracting sha256:0580184dd0082f009564a7a3923731f202cbe4bd8b857dbca2c77da8e7434815                                                                                                              0.0s
 => => extracting sha256:24b2a44f2792add2104a1d53c68f1df95779deaee3ec86717f4fb4efd9aa6aaa                                                                                                             14.3s

This line is worrisome:

 => [stage-1 1/2] FROM docker.io/library/caddy:latest@sha256:87cbd356af2e6eef38b41b6ab7e7b0fc142ae97de0bffcc6cea257671823070c                                                                         23.5s

I can't seem to find any 87cbd3.... hash among the latest Caddy images on Docker Hub.

I will let it keep building (it takes ~15-20 minutes to build on my Pi), but I'm worried it's still going to be old.
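For reference, this is what I'm using to compare digests locally (assuming buildx is available; docker manifest inspect caddy:latest would be an alternative):

docker buildx imagetools inspect caddy:latest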

The build actually only took 900 seconds! (i_am_speed.png)

❯ docker exec -it caddy caddy version

v2.6.3 h1:QRVBNIqfpqZ1eJacY44I6eUC1OcxQ8D04EKImzpj7S8=

Still not 2.6.4, but close!

I wonder if the Docker Hub image just hasn't been updated yet, given that 2.6.4 came out 8 hours ago.


That is the case, but it's not why you're still on 2.6.3. By using the builder, you're bypassing the binary that ships with the Docker image itself, whose latest tag was pushed four days ago. (See: https://hub.docker.com/_/caddy/tags)

The builder should be pulling the latest release from source to compile on your system, which is what took 900s. :grin:

I did just test on my end, and was able to build v2.6.4 h1:2hwYqiRwk1tf3VruhMpLcYTg+11fCdr8S3jhNAdnPy8= with github.com/caddy-dns/cloudflare and github.com/lucaslorentz/caddy-docker-proxy/v2 without issue.

Hmmm, then I'm not sure why it didn't build 2.6.4.

My Dockerfile is very simple:

FROM caddy:builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare

FROM caddy:latest

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

Oh, I know why.

My Dockerfile reads RUN xcaddy build latest \ rather than RUN xcaddy build \.

Without latest there, xcaddy will build the version specified in an env var built into the supplied image.
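So, adapted to your Dockerfile, the fix would look like this (identical to yours apart from the latest argument):

FROM caddy:builder AS builder

RUN xcaddy build latest \
    --with github.com/caddy-dns/cloudflare

FROM caddy:latest

COPY --from=builder /usr/bin/caddy /usr/bin/caddy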

That did the trick! After changing my Dockerfile to add that latest keyword, then pruning and rebuilding, I'm on 2.6.4:

❯ docker exec -it caddy caddy version

v2.6.4 h1:2hwYqiRwk1tf3VruhMpLcYTg+11fCdr8S3jhNAdnPy8=

Thanks a ton!!


FYI, using latest is not a good idea in general. It's fine as a workaround, but it's dangerous, because you might end up with versions you didn't expect. For example, if Caddy releases a v3 with breaking changes and you rebuild, it would fail to run.

The reason it was only building v2.6.3 without it is because the builder image has an environment variable set to the Caddy version at the time that image was tagged.

v2.6.4 was just released, and it requires manual human steps to set up the new image and send it to the Docker Library team to put it through their pipeline to get it published.
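If you want to stay in full control, one option (just a sketch) is to pin the exact release in the build step instead of relying on latest or the builder's default:

RUN xcaddy build v2.6.4 \
    --with github.com/caddy-dns/cloudflare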


Suuuuuuuper minor PSA, but you don't need to specify the -it flags for docker exec here: -i makes the session interactive (keeps stdin open to the running process) and -t allocates a pseudo-TTY (a terminal interface for controlling the process).

Both of these are only necessary for processes you need to keep interacting with - like a shell! For one-off commands like this you can just run docker exec caddy caddy version.

This is good advice, especially in circumstances where a rogue pull could break things. I always advise against putting latest in Compose files, for example, for anything important.

I'll play devil's advocate in favour of latest in this specific use case, though - in the Dockerfile - because I only ever come and build Caddy when I want to update it, and there are no mechanisms in place that automatically bring it up after building. That means it's only being built when I've pretty much just gone and looked at the update myself, and even if I was somehow wildly incorrect about what the builder would produce, I'd see the output of the completed build with the wrong version and could immediately remedy that before actually touching a running container.

I would only ever advise using latest in cases where this kind of manual process is in place, with the direct output multiple steps removed from the actual deployment.


Thanks @francislavoie, I'll edit my Dockerfile back to the way it was. I think, for my use case, it's safer to be slightly out of date than on the (potentially breaking) cutting edge.

I guess next time I upgrade Caddy I'll check whether the builder is a version behind the latest release, and if so, make a note to wait ~1 week and try again (to give the builder image time to update).

OR, here's a novel idea: I could check for updates more than once a year! (gasp) I really ought to set up a cron job that regularly builds Caddy with the Cloudflare plugin and updates my Compose stack.
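Something like this is what I have in mind, purely as a sketch (every name and path here is hypothetical, and the tag would need to match whatever my Compose file expects):

#!/bin/sh
# rebuild-caddy.sh (hypothetical): rebuild the custom image and restart the stack
set -e
cd /home/pi/caddy
# --pull refreshes the caddy:builder and caddy:latest base images each run
docker build --pull -t caddy_cloudflare:latest .
docker compose -f docker-compose.yml up -d

# crontab entry: run the rebuild every Monday at 04:00
# 0 4 * * 1 /home/pi/caddy/rebuild-caddy.sh >> /home/pi/caddy/rebuild.log 2>&1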

I don't suppose there is an official caddy+cloudflare release on Docker Hub that I could depend on instead?


No - we don’t bundle any plugins in official releases because making a “blessed” list of plugins is a can of worms. We don’t have the resources to continually vet that the code in those plugins is safe. The plugins are external because we ask the community to maintain them (even if they might be in the caddy-dns org which is “official”). Including plugins bloats the binary, so keeping them out keeps Caddy as lightweight as possible for most users.


This makes sense; it would be a nightmare trying to build all possible combinations of plugins…

Guess I just have to be better about manually updating :stuck_out_tongue:


You can set up notifications on GitHub for when releases happen. Go to https://github.com/caddyserver/caddy, click Watch in the top-right, choose Custom, and select "Releases".


Sorry for the late reply!

Perfect, I'll use this to keep me on the ball.

Maybe in the future I'll try to set up a Docker Hub automated build that triggers when the Caddy builder image updates… but I'd first have to learn a bit more about Docker Hub and automated builds.