I want to write a log aggregator that sends data to a database by processing the access.log file. I have a question, though: new logs get appended to the same log file, and my tool processes the file 4 times a day. How do I avoid reprocessing data that has already been processed?

Is there a way to empty access.log after it's processed, or what are the best practices here?
I’ve never tried to process logs this way. You could probably truncate the file as your processor ingests it. That might be fine.
I’m not sure there are really established best practices for this (for Caddy anyway). There might be “Linux best practices” for this, but it’s not an area I’ve personally explored.
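One common alternative to truncating is to remember how far into the file you got last time and resume from there on the next run. Here's a minimal sketch of that idea in Python; `read_new_lines` and the state-file layout are hypothetical names I'm making up for illustration, not part of any existing tool. It stores the byte offset of the last read in a small state file, and resets to 0 if the log file has shrunk (which suggests it was truncated or rotated out from under us):

```python
import os

# Hypothetical helper: return only the lines appended to log_path since
# the last call, persisting the byte offset in state_path between runs.
def read_new_lines(log_path, state_path):
    # Load the offset from the previous run (0 on the first run).
    try:
        with open(state_path) as f:
            offset = int(f.read().strip() or 0)
    except FileNotFoundError:
        offset = 0

    # If the file is smaller than our saved offset, it was truncated
    # or rotated, so start reading from the beginning again.
    if os.path.getsize(log_path) < offset:
        offset = 0

    with open(log_path, "rb") as f:
        f.seek(offset)
        data = f.read()
        new_offset = f.tell()

    # Persist the new offset for the next scheduled run.
    with open(state_path, "w") as f:
        f.write(str(new_offset))

    return data.decode("utf-8", errors="replace").splitlines()
```

Run from cron 4 times a day, each invocation only sees the new lines, and you never have to empty the file yourself, which sidesteps any race with the server appending to it mid-truncate. One caveat under the assumptions here: if a rotation replaces the file with a new one that happens to be *larger* than the saved offset, the shrink check won't fire and some lines could be skipped; tracking the file's inode as well would catch that case.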