Logging Questions: two loggers and var in log filename

1. Caddy version:

v2.6.2 h1:wKoFIxpmOJLGl3QXoo6PNbYvGW4xLEgo32GPBEjWL8o=

2. How I installed, and run Caddy:

a. System environment:

Debian install of Caddy via Caddy’s repo; used the update-alternatives method to swap over to an xcaddy build with caddy-dns/cloudflare, caddyserver/transform-encoder, and gamalan/caddy-tlsredis

b. Command:

systemctl start caddy-api

c. Service/unit/compose file:

Standard provided by repo, with the following modifications via systemctl edit caddy-api

# /etc/systemd/system/caddy-api.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/caddy run --environ --resume --config /etc/caddy/Caddyfile

EnvironmentFile=/etc/caddy/envfile.env

d. My complete Caddy config:

{
	#debug
	admin 0.0.0.0:2019

	default_sni {env.CADDY_SVR_PUB_FQDN}

	email certadmin@boldcity.tech
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
	acme_dns cloudflare {env.CADDY_CF_API_TOKEN}
}

(logging_json) {
	log {
		level INFO
		output file /srv/data-log/caddy/json/{vars.loghost}.json {
			roll_size 10MB
			roll_keep 10
		}
		format json
	}
}

(logging_combinedvhost) {
	log {
		output file /srv/data-log/caddy/clf/{vars.loghost}.log {
			roll_size 10MB
			roll_keep 30
		}
		# Apache Combined Log Format, modified to include the vhost as the first 'column'; standard name is 'combinedv'.
		# TODO: Fix combinedv request port; We add the request's host correctly but fake the port, as I haven't yet figured out how to get the real port number
		format transform `{request>host}:443 {request>remote_ip} - {request>user_id} [{ts}] "{request>method} {request>uri} {request>proto}" {status} {size} "{request>headers>Referer>[0]}" "{request>headers>User-Agent>[0]}"` {
			time_format "02/Jan/2006:15:04:05 -0700"
		}
	}
}

(conf_proxy_up_headers) {
	# Tell the upstream backend server what Frontend proxied the request
	header_up x-feserver-ip {env.CADDY_SVR_INT_IP}
	header_up X-feserver {env.CADDY_SVR_INT_HOSTNAME}

	# TODO: Remove legacy X-Forwarded-By once all backends are migrated to new header field.
	header_up X-Forwarded-By {env.CADDY_SVR_INT_HOSTNAME}

	# Tell the upstream backend server the real source IP of the request.
	# Caddy automatically adds the following headers: X-Forwarded-Proto, X-Forwarded-Host, X-Forwarded-For
	header_up X-Real-IP {remote_host}

	# Remove the 'Server' header from the upstream server's response
	header_down -Server
}

(conf_headers_standard) {
	# Tell the client browser the (public) name of the frontend server
	header x-feserver {env.CADDY_SVR_PUB_HOSTNAME}
}

(test-1.potts.in) {
	reverse_proxy {
		to http://upstream-test:81
		flush_interval -1

		import conf_proxy_up_headers
	}
}

test-1.potts.in:80 {
	vars loghost "test-1.potts.in"

	import logging_json
	import logging_combinedvhost

	import conf_headers_standard

	import test-1.potts.in
}

e. Simplified Caddy config:

This is a simplified version of the above config, for testing (and easier human parsing):

test-1.potts.in:80 {
	vars loghost "test-1.potts.in"

	log {
		level INFO
		output file /srv/data-log/caddy/json/{vars.loghost}.json
		format json
	}
	
	log {
		output file /srv/data-log/caddy/clf/{vars.loghost}.log
		# Apache Combined Log Format, modified to include the vhost as the first 'column'; standard name is 'combinedv'.
		# TODO: Fix combinedv request port; We add the request's host correctly but fake the port, as I haven't yet figured out how to get the real port number
		format transform `{request>host}:443 {request>remote_ip} - {request>user_id} [{ts}] "{request>method} {request>uri} {request>proto}" {status} {size} "{request>headers>Referer>[0]}" "{request>headers>User-Agent>[0]}"` {
			time_format "02/Jan/2006:15:04:05 -0700"
		}
	}

	header x-feserver {env.CADDY_SVR_PUB_HOSTNAME}

	root * /usr/share/caddy
	file_server
}


3. The problem I’m having:

I have two problems/goals:

1: Log twice

To help ease my migration from the legacy Apache CombinedV log format, I would like to log to both the Caddy json format AND combinedv.
If I have both snippets included, the last one in the config wins.
I know there are ways to convert the JSON format to combinedv, but my current logging workflow (GoAccess & fail2ban, among other things) needs to read the files in real time.
Is it possible to log twice?

2: Variable in log’s file name

As I have a bunch of sites and I love simplifying things, I am taking full advantage of snippets. The top config shows what I am doing, but since I want to have different log files for each site, I can’t easily use snippets as I need to customize the log’s file name for each site.

If I try to set a variable in the filename, caddy adapt works fine, but loading the resulting JSON config throws an error:

{"error":"loading new config: setting up custom log 'log0': loading log writer module: loading module 'file': provision caddy.logging.writers.file: invalid filename for log file: unrecognized placeholder {http.vars.loghost}"}

What’s the best way to dynamically change the filename of the log file when referenced in a site block?
Is it possible to put a variable in the filename somehow or perhaps another method?
Is there a pre-defined variable in the context of a site block that contains the site’s address?

Thanks so much!

The log directive is not an HTTP handler. It doesn’t get run on every request as part of the HTTP middleware pipeline. That means you can’t use any HTTP placeholders, including vars, in its configuration.
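One Caddyfile-level workaround is to pass the filename as a snippet argument instead: import args are substituted when the Caddyfile is adapted, before the config is loaded, so they never hit the runtime placeholder check. A minimal sketch, assuming the {args[0]} syntax introduced in v2.6.2:

(logging_json) {
	log {
		# {args[0]} is replaced at adapt time, not at request time
		output file /srv/data-log/caddy/json/{args[0]}.json {
			roll_size 10MB
			roll_keep 10
		}
	}
}

test-1.potts.in:80 {
	import logging_json test-1.potts.in
}

Running caddy adapt on this should show the literal filename baked into the JSON, with no {http.vars.loghost} placeholder left to trip on.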

Having two different log configurations for one site is kinda tricky. The problem is that, the way it was designed, the HTTP server will only log once, to a particular logger name. See the adapted JSON config: in the HTTP server, there’s logs > logger_names, which is a map of hostname to logger name, and for each hostname this is set to a single value, not an array.
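Abridged and illustrative (the exact server and logger names depend on your config), that part of the adapted JSON has roughly this shape:

{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"logs": {
						"logger_names": {
							"test-1.potts.in": "log0"
						}
					}
				}
			}
		}
	}
}

Each hostname maps to exactly one logger name, which is why a second log directive can only replace the first, not add to it.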

Logger configuration lives in its own top-level section of the config, i.e. logging. When you use the log directive within a site, it creates a new logger with an incremental ID in that top-level config. Each of those loggers gets an include which references the logger name matching the default_logger_name it set up.

If you use log twice inside of a site, the second logger’s name overwrites the first one, so you’ll only see log1 there.

So technically, it’s currently a bug that the log directive is allowed twice within a site block; we should probably reject that config. This is the first time I’ve seen someone try it, which is why it didn’t come up earlier.

The way you would have to configure Caddy to make this work is to use the log directive just once, but then use the log global option to make another logger which includes that log name, e.g. http.log.access.log0. Then, both loggers which include that log name will be able to process it.

Obviously, if you have multiple sites, this can be somewhat “error prone” in the sense that the generated log name is not necessarily consistent when using the Caddyfile. If you hand-craft the JSON, you don’t need to worry about that problem.
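Sketched as a Caddyfile, under the assumption that the site’s single log directive ends up named log0 (verify the name against your own caddy adapt output, since it’s generated):

{
	# Second logger: also receives entries emitted to the site's
	# access logger. "http.log.access.log0" is an assumption here;
	# check the adapted JSON for the real generated name.
	log clf {
		include http.log.access.log0
		output file /srv/data-log/caddy/clf/test-1.potts.in.log {
			roll_size 10MB
			roll_keep 30
		}
	}
}

test-1.potts.in:80 {
	# The one and only log directive; becomes http.log.access.log0
	log {
		output file /srv/data-log/caddy/json/test-1.potts.in.json
		format json
	}
}

With this shape, the site block logs once in JSON, and the globally-defined clf logger picks up the same access-log entries for the second file.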
