Date & time in access log file name

1. The problem I’m having:

I’m trying to enable Enhanced Health reporting in AWS’s Elastic Beanstalk.

It’s dead simple; basically it requires (see docs here):

  1. The access log produced in a particular format (time"path"status"duration"x-forwarded-for)
  2. The access log stored under a particular file name (/var/log/nginx/healthd/application.log.YYYY-MM-DD-hh)
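For illustration, here's what a line in that quote-separated format could look like. This is my own sketch in Go (Caddy's language); the path, status, and timing values are made-up examples, not real healthd output.

```go
package main

import "fmt"

// healthdLine joins the fields with double quotes as separators, per the
// format listed above: time"path"status"duration"x-forwarded-for.
// All values passed in are illustrative.
func healthdLine(ts float64, path string, status int, dur float64, xff string) string {
	return fmt.Sprintf(`%.3f"%s"%d"%.3f"%s`, ts, path, status, dur, xff)
}

func main() {
	fmt.Println(healthdLine(1719838800.0, "/index.php", 200, 0.002, "203.0.113.7"))
	// → 1719838800.000"/index.php"200"0.002"203.0.113.7
}
```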

I was able to produce the log format, but it looks like it’s impossible to use variables in the log file name, so I can’t save the file under that date-hour format.

Caddy doesn’t seem to support it directly, and log rotation is handled by natefinch/lumberjack (a log rolling package for Go), which apparently doesn’t support custom file name patterns. v3 does, but that hasn’t been released yet.

I’m starting this topic to see if anyone has figured out how to do it, even if it means some weird workaround.

2. Error messages and/or full log output:

None. I’m not reporting a bug; at least I never got to the point of seeing any errors.

3. Caddy version:

v2.8.4 h1:q3pe0wpBj1OcHFZ3n/1nl4V4bxBrYoSoab7rL9BMYNk=

4. How I installed and ran Caddy:

Via Docker

a. System environment:

Dockerised Alpine Linux

b. Command:

N/A

c. Service/unit/compose file:

FROM caddy:2-builder-alpine AS caddy_builder

RUN xcaddy build \
	--with github.com/dunglas/mercure \
	--with github.com/dunglas/mercure/caddy \
	--with github.com/dunglas/vulcain \
	--with github.com/dunglas/vulcain/caddy \
	--with github.com/silinternational/certmagic-storage-dynamodb/v3 \
	--with github.com/caddyserver/transform-encoder

FROM caddy:2-alpine AS caddy

WORKDIR /srv/app

RUN apk add nss-tools

COPY --from=dunglas/mercure:v0.11 /srv/public /srv/mercure-assets/
COPY --from=caddy_builder /usr/bin/caddy /usr/bin/caddy
COPY docker/caddy/Caddyfile /etc/caddy/Caddyfile

d. My complete Caddy config:

The log elastic_beanstalk section is (IMO) the only relevant part:

{
	{$DEBUG}

	storage dynamodb {$CADDY_SSL_STORAGE_TABLE:workboard-caddy-certmagic} {
		aws_endpoint {$CADDY_SSL_STORAGE_ENDPOINT:https://dynamodb.ap-southeast-2.amazonaws.com}
		aws_region {$CADDY_SSL_STORAGE_REGION:ap-southeast-2}
	}

	storage_clean_interval 32d

	# Set $ACME_EMAIL to an email address to allow public SSL certificates
	email {$ACME_EMAIL:support@alexander.com.au}

}

{$SERVER_NAME}

@silent {
	path /health-check
	path /health-check/*
	path /site.webmanifest
}

log {
	output file /var/log/nginx/healthd/application.test.log {
		roll_uncompressed
		roll_size 1024MiB
	}

	format transform `{ts}"{request>uri}"{status}"{duration}"{duration}"{request>headers>X-Forwarded-For>[0]:request>remote_ip}` {
		time_format unix_milli_float
		duration_format ms
	}
}

log elastic_beanstalk {
	no_hostname
	output file /var/log/nginx/healthd/application.log {
		roll_disabled
	}
	format transform `{ts}"{request>uri}"{status}"{duration}"{duration}"{request>headers>X-Forwarded-For>[0]:request>remote_ip}` {
		time_format unix_milli_float
		duration_format seconds
	}
}

# Silenced URLs that are passed to PHP
log_skip @silent
# Other URLs we don't care about when it comes to access logging
log_skip /build/*
log_skip /bundles/*
log_skip /robots.txt
log_skip /favicon.ico

route {
	root * /srv/app/public
	mercure {
		# Transport to use (default to Bolt)
		transport_url {$MERCURE_TRANSPORT_URL:bolt:///data/mercure.db}
		# Publisher JWT key
		publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
		# Subscriber JWT key
		subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
		# Enable the subscription API (double-check that it's what you want)
		subscriptions
		# Extra directives
		{$MERCURE_EXTRA_DIRECTIVES}
	}
	vulcain
	push

	php_fastcgi @silent unix//var/run/php/php-fpm.sock {
		index index-silent.php
		trusted_proxies {$CADDY_TRUSTED_PROXIES:private_ranges}
	}

	php_fastcgi unix//var/run/php/php-fpm.sock
	encode zstd gzip
	file_server
}

handle_errors {
	# Error 502 occurs when PHP-FPM is not ready to accept connections. This could happen because it either crashed,
	# or has not started yet. At the moment we do not differentiate between the two
	@502 expression `{http.error.status_code} == 502`
	rewrite @502 /static/errors/{http.error.status_code}.html

	# General server errors
	@5xx expression `{http.error.status_code} != 502 && {http.error.status_code} >= 500 && {http.error.status_code} <= 599`
	rewrite @5xx /static/errors/5xx.html

	file_server
}

5. Links to relevant resources:

I wonder if you couldn’t just Logstash it; you don’t need a full log-aggregating ELK stack.

You could use Logstash’s file input along with its file output to “proxy” Caddy’s logs to another file for AWS to ingest. I’m pretty sure you could even leave Caddy’s structured output unaltered and have Logstash do the transformation too, removing a plugin from your Caddy build if you wanted.

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-file.html#plugins-outputs-file-path
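As a rough illustration of that idea (an untested sketch; the paths are taken from the question, everything else is an assumption), a minimal Logstash pipeline might look like:

```
input {
  file {
    path => "/var/log/nginx/healthd/application.log"
  }
}

output {
  file {
    # The file output's path supports event-time references,
    # which gives the date-hour suffix healthd expects
    path => "/var/log/nginx/healthd/application.log.%{+yyyy-MM-dd-HH}"
    codec => line { format => "%{message}" }
  }
}
```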


Yeah… frankly we’re hoping someone forks and upgrades/maintains lumberjack. We don’t have the bandwidth to take on that burden, but we would be glad to support anyone willing to pursue that.
