Hey all,
Shipped CaddyUI v2.4.3 today — a GUI for managing Caddy (proxy hosts, certs, managed DNS across Cloudflare/Namecheap/GoDaddy/DigitalOcean/Hetzner, multi-server push over WG/Tailscale). Sharing a few of the real-world fixes from this cycle in case they’re useful to anyone running similar setups.
**Features**

- Proxy Hosts — point domains at upstream services with one-click TLS via Caddy’s automatic HTTPS
- Redirections — 301/302/307/308 redirect rules across hostnames
- Advanced Routes — import raw Caddyfile blocks or write JSON directly for anything the UI can’t model
- Certificates — manage custom PEM/path certificates; expiry alerts via email and/or webhook
- Multi-server — manage multiple Caddy instances from a single UI; switch with a dropdown. Edge hosts only need Caddy — no CaddyUI container required (see Agent mode)
- Multi-user — admin and user roles; each user sees and manages only their own proxies
- Email notifications — SMTP support (STARTTLS / TLS / plain) for cert-expiry and upstream health alerts
- Upstream health — live health check per proxy; polls Caddy’s own admin API so Docker-internal hostnames work correctly
- Activity log — every create/edit/delete/sync action is logged with actor and timestamp
- Snapshots — one-click SQLite database backup; auto-snapshot on sync
- Import from Caddy — pull your existing live Caddy config into the DB on first run
- Paste Caddyfile — convert a Caddyfile block into a managed advanced route
- Dark mode — toggleable, remembers your choice; system preference respected on first visit
- 2FA / TOTP — per-user time-based one-time passwords
- PWA — installable on desktop and mobile; offline-capable service worker
- Update notifications — sidebar badge when a newer Docker Hub release is available

**Repo:** https://github.com/X4Applegate/caddyui
**Release:** https://github.com/X4Applegate/caddyui/releases/tag/v2.4.3
**Docker:** `applegater/caddyui:v2.4.3` (multi-arch, scratch base, SBOM + provenance)
**What’s in v2.4.3**
1. Stop flagging Docker-name backends as “down”
If your proxy target is a Docker service name (`status-server:3000`, `snipeit-app:80`, etc.), CaddyUI used to show a red “unreachable” dot because the caddyui container itself usually isn’t on that backend’s Docker network — only Caddy is. It asked Caddy’s `/reverse_proxy/upstreams` endpoint first, but if the upstream wasn’t cached there it fell back to a direct probe that couldn’t resolve the name.
Now: hostnames with no dots → skip the direct probe, render an amber “unknown” with a tooltip explaining why. DNS-resolution errors on public names get the same treatment. Genuinely unreachable public hostnames still go red.
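The dot heuristic is simple enough to sketch. Here is roughly the classification logic described above — a minimal illustration, not CaddyUI’s actual code; the function and status names are made up:

```python
import socket

# Status values rendered as dots in the UI (names are illustrative).
UP, DOWN, UNKNOWN = "up", "down", "unknown"

def classify_upstream(host: str, probe) -> str:
    """Decide which status dot to show for an upstream that wasn't
    found in Caddy's /reverse_proxy/upstreams cache."""
    # Docker service names have no dots; the UI container usually
    # can't resolve them, so don't pretend to know -- render amber.
    if "." not in host:
        return UNKNOWN
    try:
        socket.getaddrinfo(host, None)
    except socket.gaierror:
        # DNS failure on a public-looking name gets the same amber
        # treatment instead of a misleading red.
        return UNKNOWN
    # The name resolves, so a failed direct probe really means "down".
    return UP if probe(host) else DOWN
```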
2. Server-online indicator stops flapping over WG
The health poller marked a server “offline” after a single failed 5 s ping. Over WireGuard that flaps every time a rekey drops a packet. Now it takes 3 consecutive misses with an 8 s timeout, and any success resets the counter immediately, so the dashboard stays stable.
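The debounce pattern here is generic enough to reuse anywhere. A minimal sketch matching the 3-miss rule above (class and method names are my own, not CaddyUI’s):

```python
class OnlineTracker:
    """Mark a server offline only after N consecutive failed checks;
    a single success flips it back online immediately."""

    def __init__(self, max_misses: int = 3):
        self.max_misses = max_misses
        self.misses = 0
        self.online = True

    def record(self, ok: bool) -> bool:
        if ok:
            self.misses = 0          # any success resets the counter
            self.online = True
        else:
            self.misses += 1
            if self.misses >= self.max_misses:
                self.online = False  # flip only after N straight misses
        return self.online
```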
3. Dashboard source gives you both actions
Click the domain pill → opens the site in a new tab. Click the pencil icon next to it → opens the edit form. Save redirects back to /proxy-hosts, so click-pencil-edit-save-back is a clean loop.
**v2.4.x cumulative highlights (context for anyone new)**

- Per-server public IP for managed DNS — editing one server’s IP retargets only that server’s records (no more rewriting every provider record when one IP changes)
- Startup sync (`CADDYUI_SYNC_ON_START=1`) rehydrates Caddy from the DB on every `docker compose restart`, with a safety guard that refuses to push when the DB is empty so it can’t wipe an existing Caddyfile
- Firewall docs — the Docker-iptables-bypass gotcha bit me personally. Binding port 2019 to the WG interface (`"10.8.0.2:2019:2019"`) is the clean fix; UFW rules alone don’t block Docker-published ports
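For anyone who wants the port binding in context, a compose fragment like this is what that firewall fix describes (the service layout and WG address are examples — substitute your own interface IP):

```yaml
services:
  caddy:
    ports:
      - "443:443"
      # Bind the admin API to the WireGuard interface only, so it is
      # never published on a public interface. UFW rules alone won't
      # block it, because Docker inserts its own iptables rules ahead
      # of UFW's chains.
      - "10.8.0.2:2019:2019"
```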
**One operational note that’s worth calling out**
If you run Caddy in default mode (`caddy run --config /etc/caddy/Caddyfile`) and push config to it via the admin API from CaddyUI or anything else, pushes don’t survive Caddy container restarts — the static Caddyfile reloads and overwrites them. The fix is `--resume`:
```yaml
services:
  caddy:
    command: caddy run --config /config/caddy/autosave.json --resume --adapter json
```
That way Caddy persists its current config to `autosave.json` and reloads from there on restart. Not a CaddyUI bug, but a common pothole for multi-host setups using the admin API as a source of truth.
**Feedback welcome**
Happy to hear what’s missing for your use case. Anything around multi-host orchestration, DNS providers, or observability in particular — that’s where I’ve been focusing.
— Richard