My issue stems from (I think) Docker logs filling up my host's disk. After a couple of weeks of decently heavy logging from Caddy, my container's log file was sitting at 12 GB+. Deleting and recreating the container cleared that 12 GB+ and started fresh at zero.
I'm looking for info about how to best do logging when running on Docker. I'd like to use the `log` directive to send logs to a mounted volume and rotate them as needed to prevent the runaway space usage, but I don't think the `log` directive affects what Docker captures. For example, above I specified `format single_field common_log`, yet my `docker logs` output still shows regular JSON-formatted logging.
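For reference, here is a minimal sketch of the kind of Caddyfile `log` block I mean, writing to a path that would be volume-mounted, with Caddy's built-in rotation options (the domain and path are placeholders, not from my actual config):

```
example.com {
    log {
        # Write access logs to a file on a mounted volume
        output file /var/log/caddy/access.log {
            roll_size 100MiB   # rotate when the file reaches this size
            roll_keep 5        # keep at most this many rotated files
            roll_keep_for 720h # delete rotated files older than 30 days
        }
    }
}
```

As I understand it, this only controls what Caddy writes to that file; anything Caddy emits to stdout/stderr still lands in Docker's own log.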
Can we turn off the HTTP console logging that goes to `docker logs`? It'd be useful to keep that channel for application-level logging (renewing Let's Encrypt certs, etc.), but I'd like to offload my HTTP logging from there.
I think it's the logs that would normally show in the `docker logs` output. These take up space over days/weeks. Yesterday I got an alert that my Docker host was nearing capacity. When I looked, /var/lib/docker/containers/[container_id]/[container_id]-json.log (the same container ID as my Caddy container) was sitting at 12 GB.
I have my regular logs working, using the existing output specification in the config. That part works great, and I get my expected logs in the caddy.log file on my volume (mapped to my host as usual). My issue is that the json.log file in /var/lib/docker/containers grows wildly large fairly quickly with all the JSON logging.
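Independent of Caddy, Docker's `json-file` logging driver can cap and rotate that file itself. A sketch of `/etc/docker/daemon.json` (the specific size/count values are just examples; this applies only to containers created after the Docker daemon is restarted):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With this in place, each container's `*-json.log` is rotated at 10 MB and at most three files are kept, so the runaway growth is bounded even if the application keeps logging to stdout.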
EDIT: I confirmed this is what I think it is: running `tail` or `cat` on that file shows all the Caddy JSON logging, the same output seen from the `docker logs` command.
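The same rotation options can also be set per container in Compose rather than daemon-wide. A minimal sketch, assuming a service named `caddy` (the service name and limits are placeholders):

```yaml
services:
  caddy:
    image: caddy:2
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate the container's json log at 10 MB
        max-file: "3"     # keep at most 3 rotated files
```

This requires recreating the container for the logging options to take effect.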
Good to know, thanks. Will remove it from my config.
Yeah, that would do it. Debug mode adds a lot of logging for each request, and it's not necessary in production. It's most helpful when playing with the Caddyfile or testing some edge case.
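Assuming the flag lives in the Caddyfile's global options block (as it commonly does), turning it off is just a matter of removing or commenting it out:

```
{
    # debug   <- remove or comment out in production; this global option
    #            enables verbose per-request logging
}
```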