Reverse proxy works in my homelab but not on my external server

1. The problem I’m having:

OS: FreeBSD 15.0-RELEASE

I have Caddy running in a jail, reverse proxying requests to two other jails: dev_web and dev_tile. This all works fine in my Proxmox cluster, where I have specified `tls internal` in the Caddyfile on each test host. However, on my external-facing server, which needs ACME TLS certs for HTTPS, things are not working.

In the examples below, I'll include data for the WORKING example, which is host fbsdhost4, and the NOT_WORKING example, which is coyote.

2. Error messages and/or full log output:

The thing I notice in the logs below is that the working example (fbsdhost4) gets a real response from the upstream (status 200, Content-Length: 2519), whereas the broken example (coyote) gets a zero-length 404 (Content-Length: 0) — even if I stuff something into the request headers with curl's -H flag.
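
For reference, the HEAD requests in the logs were generated with curl roughly like this (hostnames as in the examples below; the -H header is just my attempt to stuff something into the request, matching the check scripts further down):

```
# Failing request against coyote (returns 404, Content-Length: 0)
curl --head -H "Content-Type: text/html" http://dev.apps.generalwildfireservices.com/tiles

# Same request against the homelab host (returns 200, Content-Length: 2519)
curl --head http://dev.apps.generalwildfireservices.local4/tiles
```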

--------------------------------------------------------------
NOT_WORKING/coyote example caddy log
--------------------------------------------------------------
{
  "level": "debug",
  "ts": 1774298649.9785318,
  "logger": "http.handlers.reverse_proxy",
  "msg": "selected upstream",
  "dial": "10.0.0.40:3000",
  "total_upstreams": 1
}
{
  "level": "debug",
  "ts": 1774298649.979109,
  "logger": "http.handlers.reverse_proxy",
  "msg": "upstream roundtrip",
  "upstream": "10.0.0.40:3000",
  "duration": 0.000478814,
  "request": {
    "remote_ip": "159.26.103.199",
    "remote_port": "12876",
    "client_ip": "159.26.103.199",
    "proto": "HTTP/1.1",
    "method": "HEAD",
    "host": "dev.apps.generalwildfireservices.com",
    "uri": "/tiles",
    "headers": {
      "User-Agent": [
        "curl/8.18.0"
      ],
      "Accept": [
        "*/*"
      ],
      "X-Forwarded-For": [
        "159.26.103.199"
      ],
      "X-Forwarded-Proto": [
        "http"
      ],
      "X-Forwarded-Host": [
        "dev.apps.generalwildfireservices.com"
      ],
      "Via": [
        "1.1 Caddy"
      ]
    }
  },
  "headers": {
    "Content-Length": [
      "0"
    ],
    "Vary": [
      "Origin, Access-Control-Request-Method, Access-Control-Request-Headers"
    ],
    "Date": [
      "Mon, 23 Mar 2026 20:44:09 GMT"
    ]
  },
  "status": 404
}
{
  "level": "info",
  "ts": 1774298649.9791985,
  "logger": "http.log.access.log1",
  "msg": "handled request",
  "request": {
    "remote_ip": "159.26.103.199",
    "remote_port": "12876",
    "client_ip": "159.26.103.199",
    "proto": "HTTP/1.1",
    "method": "HEAD",
    "host": "dev.apps.generalwildfireservices.com",
    "uri": "/tiles",
    "headers": {
      "User-Agent": [
        "curl/8.18.0"
      ],
      "Accept": [
        "*/*"
      ]
    }
  },
  "bytes_read": 0,
  "user_id": "",
  "duration": 0.000748617,
  "size": 0,
  "status": 404,
  "resp_headers": {
    "Via": [
      "1.1 Caddy"
    ],
    "Content-Length": [
      "0"
    ],
    "Vary": [
      "Origin, Access-Control-Request-Method, Access-Control-Request-Headers"
    ],
    "Date": [
      "Mon, 23 Mar 2026 20:44:09 GMT"
    ]
  }
}

--------------------------------------------------------------
WORKING/fbsdhost4 example caddy log
--------------------------------------------------------------
{
  "level": "debug",
  "ts": 1774298550.0102189,
  "logger": "http.handlers.reverse_proxy",
  "msg": "selected upstream",
  "dial": "10.0.0.40:3000",
  "total_upstreams": 1
}
{
  "level": "debug",
  "ts": 1774298550.0108442,
  "logger": "http.handlers.reverse_proxy",
  "msg": "upstream roundtrip",
  "upstream": "10.0.0.40:3000",
  "duration": 0.000539967,
  "request": {
    "remote_ip": "192.168.1.93",
    "remote_port": "54172",
    "client_ip": "192.168.1.93",
    "proto": "HTTP/1.1",
    "method": "HEAD",
    "host": "dev.apps.generalwildfireservices.local4",
    "uri": "/tiles",
    "headers": {
      "User-Agent": [
        "curl/8.18.0"
      ],
      "Accept": [
        "*/*"
      ],
      "X-Forwarded-For": [
        "192.168.1.93"
      ],
      "X-Forwarded-Proto": [
        "http"
      ],
      "X-Forwarded-Host": [
        "dev.apps.generalwildfireservices.local4"
      ],
      "Via": [
        "1.1 Caddy"
      ]
    }
  },
  "headers": {
    "Content-Length": [
      "2519"
    ],
    "Vary": [
      "Origin, Access-Control-Request-Method, Access-Control-Request-Headers"
    ],
    "Content-Type": [
      "text/html"
    ],
    "Etag": [
      "\"9d7:69bdcb64\""
    ],
    "Date": [
      "Mon, 23 Mar 2026 20:42:29 GMT"
    ]
  },
  "status": 200
}
{
  "level": "info",
  "ts": 1774298550.010953,
  "logger": "http.log.access.log2",
  "msg": "handled request",
  "request": {
    "remote_ip": "192.168.1.93",
    "remote_port": "54172",
    "client_ip": "192.168.1.93",
    "proto": "HTTP/1.1",
    "method": "HEAD",
    "host": "dev.apps.generalwildfireservices.local4",
    "uri": "/tiles",
    "headers": {
      "User-Agent": [
        "curl/8.18.0"
      ],
      "Accept": [
        "*/*"
      ]
    }
  },
  "bytes_read": 0,
  "user_id": "",
  "duration": 0.000779846,
  "size": 0,
  "status": 200,
  "resp_headers": {
    "Content-Length": [
      "2519"
    ],
    "Vary": [
      "Origin, Access-Control-Request-Method, Access-Control-Request-Headers"
    ],
    "Content-Type": [
      "text/html"
    ],
    "Via": [
      "1.1 Caddy"
    ],
    "Etag": [
      "\"9d7:69bdcb64\""
    ],
    "Date": [
      "Mon, 23 Mar 2026 20:42:29 GMT"
    ]
  }
}

3. Caddy version:

--------------------------------------------------------------
NOT_WORKING/coyote
--------------------------------------------------------------

caddy-2.11.2:
	/usr/local/bin/caddy
	/usr/local/etc/caddy/Caddyfile.sample
	/usr/local/etc/rc.d/caddy
	/usr/local/share/licenses/caddy-2.11.2/APACHE20
	/usr/local/share/licenses/caddy-2.11.2/LICENSE
	/usr/local/share/licenses/caddy-2.11.2/catalog.mk
# jexec lb caddy -v
v2.11.2

--------------------------------------------------------------
WORKING/fbsdhost4
--------------------------------------------------------------

caddy-2.11.2:
	/usr/local/bin/caddy
	/usr/local/etc/caddy/Caddyfile.sample
	/usr/local/etc/rc.d/caddy
	/usr/local/share/licenses/caddy-2.11.2/APACHE20
	/usr/local/share/licenses/caddy-2.11.2/LICENSE
	/usr/local/share/licenses/caddy-2.11.2/catalog.mk
# jexec lb caddy -v
v2.11.2

4. How I installed and ran Caddy:

pkg install caddy

a. System environment:

--------------------------------------------------------------
NOT_WORKING/coyote
--------------------------------------------------------------
# ifconfig | grep inet
	inet 216.106.186.35 netmask 0xffffff00 broadcast 216.106.186.255
# hostname
coyote.generalwildfireservices.com
# uname -a
FreeBSD coyote.generalwildfireservices.com 15.0-RELEASE-p4 FreeBSD 15.0-RELEASE-p4 GENERIC amd64

--------------------------------------------------------------
WORKING/fbsdhost4
--------------------------------------------------------------

# ifconfig | grep inet
	inet 192.168.1.140 netmask 0xffffff00 broadcast 192.168.1.255
# hostname
fbsdhost4.h.net
# uname -a
FreeBSD fbsdhost4.h.net 15.0-RELEASE-p4 FreeBSD 15.0-RELEASE-p4 GENERIC amd64

b. Command:

HOST$ jexec lb
LB$ service caddy start

c. Service/unit/compose file:

# cat /usr/local/etc/rc.d/caddy
#!/bin/sh

# PROVIDE: caddy
# REQUIRE: LOGIN DAEMON NETWORKING
# KEYWORD: shutdown

# To enable caddy:
#
# - Edit /usr/local/etc/caddy/Caddyfile
#   See https://caddyserver.com/docs/
# - Run 'service caddy enable'
#
# Note while Caddy currently defaults to running as root:wheel, it is strongly
# recommended to run the server as an unprivileged user, such as www:www.
#
# - Use security/portacl-rc to enable privileged port binding:
#
#   # pkg install security/portacl-rc
#   # sysrc portacl_users+=www
#   # sysrc portacl_user_www_tcp="http https"
#   # sysrc portacl_user_www_udp="https"
#   # service portacl enable
#   # service portacl start
#
# - Configure caddy to run as www:www
#
#   # sysrc caddy_user=www caddy_group=www
#
# - Note if Caddy has been started as root previously, files in
#   /var/log/caddy, /var/db/caddy, and /var/run/caddy may require their ownership
#   changing manually.

# Optional settings:
# caddy_command (string):     Full path to the caddy binary
# caddy_config (string):      Full path to caddy config file
#                             (/usr/local/etc/caddy/Caddyfile)
# caddy_adapter (string):     Config adapter type (caddyfile)
# caddy_admin (string):       Default administration endpoint
#                             (unix//var/run/caddy/caddy.sock)
# caddy_directory (string):   Root for caddy storage (ACME certs, etc.)
#                             (/var/db/caddy)
# caddy_extra_flags (string): Extra flags passed to caddy start
# caddy_logdir (string):      Where caddy logs are stored
#                             (/var/log/caddy)
# caddy_logfile (string):     Location of process log (${caddy_logdir}/caddy.log)
#                             This is for startup/shutdown/error messages.
#                             To create an access log, see:
#                             https://caddyserver.com/docs/caddyfile/directives/log
# caddy_user (user):          User to run caddy (root)
# caddy_group (group):        Group to run caddy (wheel)
#
# This script will honor XDG_CONFIG_HOME/XDG_DATA_HOME. Caddy will create a
# .../caddy subdir in each of those. By default, they are subdirs of /var/db/caddy.
# See https://caddyserver.com/docs/conventions#data-directory

. /etc/rc.subr

name=caddy
rcvar=caddy_enable
desc="Powerful, enterprise-ready, open source web server with automatic HTTPS written in Go"

load_rc_config $name

# Defaults
: ${caddy_enable:=NO}
: ${caddy_adapter:=caddyfile}
: ${caddy_config:="/usr/local/etc/caddy/Caddyfile"}
: ${caddy_admin:="unix//var/run/${name}/${name}.sock"}
: ${caddy_command:="/usr/local/bin/${name}"}
: ${caddy_directory:=/var/db/caddy}
: ${caddy_extra_flags:=""}
: ${caddy_logdir:="/var/log/${name}"}
: ${caddy_logfile:="${caddy_logdir}/${name}.log"}
: ${caddy_user:="root"}
: ${caddy_group:="wheel"}

# Config and base directories
: ${XDG_CONFIG_HOME:="${caddy_directory}/config"}
: ${XDG_DATA_HOME:="${caddy_directory}/data"}
export XDG_CONFIG_HOME XDG_DATA_HOME

# Default admin interface
export CADDY_ADMIN="${caddy_admin}"

command="${caddy_command}"
pidfile="/var/run/${name}/${name}.pid"

required_files="${caddy_config} ${caddy_command}"

start_precmd="caddy_precmd"
start_cmd="caddy_start"
stop_precmd="caddy_prestop"

# JSON is the native format, so there is no "adapter" for it
if [ "${caddy_adapter}" = "json" ]; then
    caddy_flags="--config ${caddy_config}"
else
    caddy_flags="--config ${caddy_config} --adapter ${caddy_adapter}"
fi

# Extra Commands
extra_commands="configtest reload reloadssl"
configtest_cmd="caddy_execute validate ${caddy_flags}"
reload_cmd="caddy_execute reload ${caddy_flags}"
reloadssl_cmd="caddy_execute reload --force ${caddy_flags}"

caddy_execute()
{
    /usr/bin/su -m "${caddy_user}" -c "${caddy_command} $*"
}

caddy_precmd()
{
    # Create required directories and set permissions
    /usr/bin/install -d -m 755 -o "${caddy_user}" -g "${caddy_group}" ${caddy_directory}
    /usr/bin/install -d -m 700 -o "${caddy_user}" -g "${caddy_group}" ${caddy_directory}/config
    /usr/bin/install -d -m 700 -o "${caddy_user}" -g "${caddy_group}" ${caddy_directory}/data
    /usr/bin/install -d -m 755 -o "${caddy_user}" -g "${caddy_group}" ${caddy_logdir}
    /usr/bin/install -d -m 700 -o "${caddy_user}" -g "${caddy_group}" /var/run/caddy
    if [ -e ${caddy_logfile} ]; then
        /bin/chmod 644 ${caddy_logfile}
        /usr/sbin/chown "${caddy_user}:${caddy_group}" ${caddy_logfile}
    else
        /usr/bin/install -m 644 -o "${caddy_user}" -g "${caddy_group}" /dev/null ${caddy_logfile}
    fi
}

caddy_start()
{
    echo -n "Starting caddy... "
    /usr/bin/su -m ${caddy_user} -c "${caddy_command} start ${caddy_flags} \
        ${caddy_extra_flags} --pidfile ${pidfile}" >> ${caddy_logfile} 2>&1
    if [ $? -eq 0 ] && ps -ax -o pid | grep -q "$(cat ${pidfile})"; then
        echo "done"
        echo "Log: ${caddy_logfile}"
    else
        echo "Error: Caddy failed to start"
        echo "Check the caddy log: ${caddy_logfile}"
    fi
}

caddy_prestop()
{
    local result

    echo -n "Stopping caddy... "

    result="$(caddy_execute stop ${caddy_flags} 2>&1)"
    if [ ${?} -eq 0 ]; then
        echo "done"
        exit 0
    else
        if echo "${result}" | grep -q -e "connection refused" \
            -e "connect: no such file or directory"; then

            echo "admin interface unavailable; using pidfile"
            return 0
        else
            echo "Error: Unable to stop caddy"
            echo "Check the caddy log: ${caddy_logfile}"
            return 1
        fi
    fi
}

run_rc_command "$1"

d. My complete Caddy config:

--------------------------------------------------------------
NOT_WORKING/coyote
--------------------------------------------------------------

{
	debug
	http_port 8080
	https_port 8443
}

(logging) {
	log {
		output file /var/log/caddy/caddy.log
		format json
	}
}

generalwildfireservices.com:8080 {
	import logging

	root * /config/www

	file_server
}


dev.apps.generalwildfireservices.com:8080 {
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8080
	}
}


stage.apps.generalwildfireservices.com:8080 {
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8080
	}
}


prod.apps.generalwildfireservices.com:8080 {
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8080
	}
}
--------------------------------------------------------------
WORKING/fbsdhost4
--------------------------------------------------------------
{
	debug
	http_port 8080
	https_port 8443
}

(logging) {
	log {
		output file /var/log/caddy/caddy.log
		format json
	}
}

generalwildfireservices.local4:8080 {
	tls internal
	import logging

	root * /config/www

	file_server
}

generalwildfireservices.local4:8443 {
	tls internal
	import logging

	root * /config/www

	file_server
}

dev.apps.generalwildfireservices.local4:8080 {
	tls internal
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8080
	}
}

dev.apps.generalwildfireservices.local4:8443 {
	tls internal
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8443
	}
}

stage.apps.generalwildfireservices.local4:8080 {
	tls internal
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8080
	}
}

stage.apps.generalwildfireservices.local4:8443 {
	tls internal
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8443
	}
}

prod.apps.generalwildfireservices.local4:8080 {
	tls internal
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8080
	}
}

prod.apps.generalwildfireservices.local4:8443 {
	tls internal
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8443
	}
}

5. Links to relevant resources:

Tests that I use to check connectivity

coyote

14:36 $ cat check_coyote.sh
#! /bin/bash
check_url_reachability() {
  response=$(curl -H "Content-Type: text/html" -o /dev/null --max-time 1 --silent --head --write-out '%{http_code}' "$1")
  if [[ "$response" -eq 200 ]]; then
    echo -e "\e[32;40m[$response]: OK : $1\e[0m"
  else
    echo -e "\e[31;40m[$response]: UNREACHABLE : $1\e[0m"
  fi
}
#declare -a coyote=( "generalwildfireservices.com" "dev.apps.generalwildfireservices.com/tiles" "dev.apps.generalwildfireservices.com/app1" "stage.apps.generalwildfireservices.com/tiles" "stage.apps.generalwildfireservices.com/app1" "prod.apps.generalwildfireservices.com/tiles" "prod.apps.generalwildfireservices.com/app1" )
#declare -a coyote=( "http://dev.apps.generalwildfireservices.com/tiles" "https://dev.apps.generalwildfireservices.com/tiles" )
declare -a coyote=( "dev.apps.generalwildfireservices.com/tiles" )
echo "----------------------------------------------------------------"
echo "coyote"
echo "----------------------------------------------------------------"
for i in "${coyote[@]}"
do
   check_url_reachability "$i"
done

fbsdhost4

14:36 $ cat check_fbsdhost4.sh
#! /bin/bash
check_url_reachability() {
  response=$(curl -o /dev/null --max-time 1 --silent --head --write-out '%{http_code}' "$1")
  if [[ "$response" -eq 200 ]]; then
    echo -e "\e[32;40m[$response]: OK : $1\e[0m"
  else
    echo -e "\e[31;40m[$response]: UNREACHABLE : $1\e[0m"
  fi
}
#declare -a fbsdhost4=( "generalwildfireservices.local4" "dev.apps.generalwildfireservices.local4/tiles" "dev.apps.generalwildfireservices.local4/app1" "stage.apps.generalwildfireservices.local4/tiles" "stage.apps.generalwildfireservices.local4/app1" "prod.apps.generalwildfireservices.local4/tiles" "prod.apps.generalwildfireservices.local4/app1" )
declare -a fbsdhost4=( "dev.apps.generalwildfireservices.local4/tiles" )
echo "----------------------------------------------------------------"
echo "fbsdhost4"
echo "----------------------------------------------------------------"
for i in "${fbsdhost4[@]}"
do
   check_url_reachability "$i"
done

my local hosts file

216.106.186.35          coyote
192.168.1.140           fbsdhost4 fbsdhost4.h.net generalwildfireservices.local4 dev.apps.generalwildfireservices.local4 stage.apps.generalwildfireservices.local4 prod.apps.generalwildfireservices.local4

Here’s a full check-script run showing that Caddy fails to route to the dev_tile jail but routes to dev_web with no problem. I have checked that this is not a pf.conf issue by using netcat (nc) to verify connectivity between the Caddy load balancer jail (lb) and the dev_tile jail; pf is allowing traffic between these nodes.
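
The jail-to-jail connectivity check was along these lines — a sketch, not the exact transcript, using the jail names and upstream addresses from my Caddyfile:

```
# From the host, confirm TCP reachability of the upstream jails from inside lb
jexec lb nc -vz 10.0.0.40 3000   # dev_tile
jexec lb nc -vz 10.0.0.30 8080   # dev_web
```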

14:42 $ ./check_simple_urls.sh
----------------------------------------------------------------
coyote
----------------------------------------------------------------
[200]: OK : generalwildfireservices.com
[404]: UNREACHABLE : dev.apps.generalwildfireservices.com/tiles
[200]: OK : dev.apps.generalwildfireservices.com/app1
[404]: UNREACHABLE : stage.apps.generalwildfireservices.com/tiles
[200]: OK : stage.apps.generalwildfireservices.com/app1
[404]: UNREACHABLE : prod.apps.generalwildfireservices.com/tiles
[200]: OK : prod.apps.generalwildfireservices.com/app1
----------------------------------------------------------------
fbsdhost4
----------------------------------------------------------------
[200]: OK : generalwildfireservices.local4
[200]: OK : dev.apps.generalwildfireservices.local4/tiles
[200]: OK : dev.apps.generalwildfireservices.local4/app1
[200]: OK : stage.apps.generalwildfireservices.local4/tiles
[200]: OK : stage.apps.generalwildfireservices.local4/app1
[200]: OK : prod.apps.generalwildfireservices.local4/tiles
[200]: OK : prod.apps.generalwildfireservices.local4/app1

So the tests show partial connectivity. Here’s the source for the tests:

14:42 $ cat ./check_simple_urls.sh
#! /bin/bash
check_url_reachability() {
  response=$(curl -o /dev/null --max-time 1 --silent --head --write-out '%{http_code}' "$1")

  if [[ "$response" -eq 200 ]]; then
    echo -e "\e[32;40m[$response]: OK : $1\e[0m"
  else
    echo -e "\e[31;40m[$response]: UNREACHABLE : $1\e[0m"
  fi
}

declare -a coyote=( "generalwildfireservices.com" "dev.apps.generalwildfireservices.com/tiles" "dev.apps.generalwildfireservices.com/app1" "stage.apps.generalwildfireservices.com/tiles" "stage.apps.generalwildfireservices.com/app1" "prod.apps.generalwildfireservices.com/tiles" "prod.apps.generalwildfireservices.com/app1" )

echo "----------------------------------------------------------------"
echo "coyote"
echo "----------------------------------------------------------------"
for i in "${coyote[@]}"
do
   check_url_reachability "$i"
done

declare -a fbsdhost4=( "generalwildfireservices.local4" "dev.apps.generalwildfireservices.local4/tiles" "dev.apps.generalwildfireservices.local4/app1" "stage.apps.generalwildfireservices.local4/tiles" "stage.apps.generalwildfireservices.local4/app1" "prod.apps.generalwildfireservices.local4/tiles" "prod.apps.generalwildfireservices.local4/app1" )

echo "----------------------------------------------------------------"
echo "fbsdhost4"
echo "----------------------------------------------------------------"
for i in "${fbsdhost4[@]}"
do
   check_url_reachability "$i"
done

You can use any port for tls internal or the DNS-01 challenge. If you want to use the HTTP-01 or TLS-ALPN-01 challenge, you need ports 80 and 443 respectively.

More details:

@timelordx Thank you for pointing this out (and for the links)!

I chose high ports so I wouldn’t have to run Caddy as root. However, I could try using portacl so that Caddy can bind to low ports as a non-root user.

I’ll configure pf on my host to redirect external traffic (ports 80 and 443) to ports 80 and 443 on the load balancer jail running caddy.

I’ll report back tomorrow.

Thx again!

You can still use high ports on Caddy and just port forward 80 and 443 on your router/firewall to those Caddy ports. That’s what I’m doing at home too.

@timelordx

I updated my Caddyfile to:

  • use the Let's Encrypt staging environment: Staging Environment - Let's Encrypt
  • simplify the redirection
  • stick with firewall listening to low ports and redirecting to caddy on high ports

pf.conf (firewall)

rdr pass on $ext_if inet proto tcp from any to ($ext_if) port 80 -> $load_balancer port 8080
rdr pass on $ext_if inet proto tcp from any to ($ext_if) port 443 -> $load_balancer port 8443

Caddyfile

{
        http_port 8080
        https_port 8443
        acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
...
dev.apps.generalwildfireservices.com {
	import logging

	route /tiles* {
		reverse_proxy 10.0.0.40:3000
	}
	route /app1* {
		reverse_proxy 10.0.0.30:8080
	}
}

Everything is working fine now. I think the root error was that I blew through my Let's Encrypt rate limits while developing this load balancer. That happened because I was developing in a FreeBSD jail, and every time I cycled the jail, it had to re-request the ACME certificates. I could see in the logs that Let's Encrypt was refusing to issue me any further certificates for some time period.

So the useful fixes are:

  • mount external state into the load balancer jail for storing the ACME certificates. In FreeBSD this means mounting a directory from the host into the jail as a nullfs mount, so the certificates survive jail restarts.
  • use the Let's Encrypt staging URL until my account's rate limits reset
  • simplify the Caddyfile. Because I set http_port and https_port globally to 8080/8443, I can specify just the hostname, without a port, in each site address.
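
For the first fix, a minimal sketch of what the nullfs mount can look like in /etc/jail.conf — the host paths here are hypothetical, and the jail-side target matches the caddy rc.d default storage directory (/var/db/caddy):

```
lb {
	# Persist Caddy's ACME state (certs, account keys) across jail restarts.
	# Host dir /var/db/caddy-state and jail root /usr/local/jails/lb are
	# placeholders -- adjust to your layout.
	mount += "/var/db/caddy-state /usr/local/jails/lb/var/db/caddy nullfs rw 0 0";
}
```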

Thanks again @timelordx!
