How to prevent processing the same log twice when writing a log aggregator

I want to write a log aggregator that sends data to a database by processing the access.log file. However, I have a question: new logs get appended to the same log file, and my tool processes the file 4 times a day. How do I avoid processing data that has already been processed?

Is there a way to empty access.log after it has been processed, or what are the best practices?

You could use the net writer of the log (Caddyfile directive) — Caddy Documentation — to do your own processing logic at the end of a pipe, instead of using a log file.
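As a sketch of that idea, the `log` directive's `net` output writer can stream log entries to a TCP/UDP address instead of a file, so your aggregator just listens on a socket and never re-reads anything. The hostname and port below are placeholders:

```
example.com {
	log {
		output net localhost:55555
	}
}
```

Your aggregator would then accept connections on port 55555 and ingest each JSON log line as it arrives.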

I’ve never tried to process logs this way. You could probably truncate the file as your processor ingests it. That might be fine.
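A minimal sketch of that read-then-truncate approach, assuming the processor is the only reader (note the caveat in the comments — lines written between the read and the truncate would be lost):

```python
def ingest_and_truncate(path):
    """Read all lines from the log file, then truncate it in place.

    Caveat: if the server appends new lines between the read and the
    truncate, those lines are silently lost. An offset-tracking reader
    avoids that window.
    """
    with open(path, "r+") as f:
        lines = f.readlines()   # consume everything written so far
        f.seek(0)
        f.truncate()            # empty the file for the next cycle
    return lines
```

Each of the 4 daily runs would call this once and hand the returned lines to the database writer.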

I’m not sure there are really established best practices for this (for Caddy anyway). There might be “Linux best practices” for this, but it’s not an area I’ve personally explored.
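One common pattern on Linux (used by tools in the `tail -F` family) is to remember the byte offset you last read, and detect rotation by watching the file's inode. A rough sketch of that idea — the state-file path and its JSON layout are made up for this example:

```python
import json
import os

def read_new_lines(log_path, state_path="reader_state.json"):
    """Return only lines appended since the last run.

    Persists the last-read byte offset and the file's inode in a small
    JSON state file. If the inode changes, the log was rotated, so we
    start reading from the beginning of the new file.
    """
    try:
        with open(state_path) as f:
            state = json.load(f)
    except FileNotFoundError:
        state = {"offset": 0, "inode": None}

    st = os.stat(log_path)
    if state["inode"] is not None and st.st_ino != state["inode"]:
        state["offset"] = 0  # rotated: new file, read from the top

    with open(log_path) as f:
        f.seek(state["offset"])
        lines = f.readlines()
        new_offset = f.tell()

    with open(state_path, "w") as f:
        json.dump({"offset": new_offset, "inode": st.st_ino}, f)
    return lines
```

Unlike truncating, this never races with the server's writes, and it plays nicely with logrotate as long as rotation replaces the file (new inode) rather than copy-truncating it.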

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.