Using a different zap driver core

Hi!

I would like to use blendle/zapdriver (GitHub - blendle/zapdriver: Blazing fast, Zap-based Stackdriver logging) as the main formatter for my JSON logs. I am at a bit of a loss on how to start here; are there any pointers on how to do that in the Caddy module system?

I’m not sure I understand what your goal is. Can’t you use a custom encoder instead? What do you get from swapping out the core?

We haven’t implemented modularity for the core; we haven’t seen the need for it yet. It might be possible, but we’ll need clarification on why it makes sense.

My idea here is to make the log output compatible with Google Cloud Stackdriver logging. I already have all my Caddy logging in Stackdriver, but the analysis in the Google Cloud console is much better if you conform to their formats. But I have just read about Caddy’s format feature; maybe I can solve it that way.

Is it possible to both set the msg key to message and rename fields at the same time?

        log {
                format json {
                        message_key "message"
                }
                format filter {
                        wrap json
                        fields {
                                request rename "httpRequest"
                        }
                }
                output stderr
        }

I appear to be able to either set the msg key or rename request, but not both.

You have to nest them:

	log {
		format filter {
			wrap json {
				message_key "message"
			}
			fields {
				request rename "httpRequest"
			}
		}
		output stderr
	}

Thank you! That works nicely!


One more question, is it possible to rename a field so the result is nested?

method rename httpRequest>requestMethod

This does not work, but I may be missing a fine point.

Oh sorry, that was a bad copy and paste, it should be:

status rename httpRequest>status

No, fields can’t be moved around like that with the filter encoder. (And FWIW, that doesn’t make sense anyway: status is a property of the response, not of the request.)

You could write your own encoder module if you want to do whatever kinds of transformations you want.

Well, I did not come up with that spec; that was Google, with their cloud infrastructure. I will probably just skip the transforms that Caddy offers and do some additional filtering in the Docker plugin I use to forward logs to Google Cloud. Even with just msg transformed to message, it is much better than anything else I have used for logging.

There are two community modules that may help you. The json-select module may be able to restructure the log JSON.

If you want more control or flexibility over the structuring, take inspiration from the elastic-encoder, which has a similar goal but for Elasticsearch.


Thanks for the pointer! The first one is actually very promising and gets me halfway to where I want to be. I may need to fork it and add the capability to combine JSON fields. This is needed, for example, for the IP addresses and port numbers in the log: Caddy has them as separate fields, while Google Cloud has them as one field separated by a colon. Similarly, Google wants to see the complete URL as a single log field.
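Since the restructuring is going to happen in the forwarding layer anyway, here is a rough sketch of that kind of post-processing in Python. The field names (msg, request.remote_ip, request.remote_port, request.host, request.uri, request.method) and the target httpRequest keys are assumptions based on typical Caddy access-log output and the format described above; adjust them to match your actual logs.

```python
import json


def to_stackdriver(line: str) -> str:
    """Reshape one Caddy JSON access-log line toward the Stackdriver format.

    Assumed input field names: msg, request.remote_ip, request.remote_port,
    request.host, request.uri, request.method. Adjust to your setup.
    """
    entry = json.loads(line)

    # Stackdriver expects the message under "message", not "msg".
    if "msg" in entry:
        entry["message"] = entry.pop("msg")

    request = entry.pop("request", None)
    if request is not None:
        http_request = {}
        # Combine the separate IP and port fields into one "ip:port" value.
        if "remote_ip" in request and "remote_port" in request:
            http_request["remoteIp"] = (
                f'{request["remote_ip"]}:{request["remote_port"]}'
            )
        # Google wants the complete URL as a single field.
        if "host" in request and "uri" in request:
            http_request["requestUrl"] = f'http://{request["host"]}{request["uri"]}'
        if "method" in request:
            http_request["requestMethod"] = request["method"]
        entry["httpRequest"] = http_request

    return json.dumps(entry)
```

Running each log line through a function like this before handing it to the forwarder avoids the need for a fork, at the cost of an extra processing step.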