Caddy memory usage over 300MB... any clue?

I am using Caddy as a reverse proxy for a local gunicorn server which is running a Django application. I noticed that Caddy was consuming a high amount of memory (at one point, it got to over 300MB RSS before I decided to swap it out for nginx temporarily).

Some relevant details:

  • Besides Caddy and my Django application server, no other services run on this server.
  • The server is a 1GB VPS from Vultr.
  • The site is hosted behind Cloudflare and the domain I use is Cloudflare Argo-enabled.
  • I noticed a lot (over 3,000) of open TCP connections to Caddy from Cloudflare’s servers.
  • The site is fairly high-traffic (a maximum of 5 requests per second or so).
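The connection count above came from inspecting socket state on the VPS. As an illustration (not from the original post), a small helper that tallies established connections per remote peer from `ss -tn` output — the sample data below is made up:

```python
from collections import Counter

def count_peers(ss_output: str) -> Counter:
    """Count ESTABLISHED connections per remote IP from `ss -tn` output."""
    peers = Counter()
    for line in ss_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 5 and fields[0] == "ESTAB":
            remote = fields[4].rsplit(":", 1)[0]  # strip the port
            peers[remote] += 1
    return peers

# Illustrative sample; in real use, feed it the output of `ss -tn`.
sample = """State  Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB  0      0      10.0.0.1:443      172.68.0.5:34312
ESTAB  0      0      10.0.0.1:443      172.68.0.5:34313
ESTAB  0      0      10.0.0.1:443      198.41.0.9:55120
"""
print(count_peers(sample).most_common())
```

A heavy skew toward a handful of Cloudflare edge IPs, as in this thread, points at the CDN holding connections open rather than end-user traffic.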

This is my Caddyfile (domains and proxy upstream omitted):

    {
        tls {
            dns cloudflare
        }
        proxy / {
        }
    }

    {
        tls {
            dns cloudflare
        }
    }
I’m going to try to use pprof to see if I can gain a bit more insight. I’ll keep this running for a while and I’ll report results later.
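For reference, Caddy v1 ships a `pprof` directive that serves Go's runtime profiles under `/debug/pprof`. A sketch of how that might look (the address is a placeholder, and this endpoint should never be exposed publicly):

    localhost:6060 {
        pprof
    }

The heap profile can then be pulled with `go tool pprof http://localhost:6060/debug/pprof/heap`.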

Which version of Caddy are you running?


Also, try putting

    timeouts {
        idle 5m
    }

in your Caddyfile and see what happens. This will probably be the default setting in the next release.
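For completeness, the v1 `timeouts` directive can also set the other server-side timeouts; an illustrative fuller form (values here are arbitrary examples):

    timeouts {
        read   30s
        header 10s
        write  20s
        idle   5m
    }

The `idle` value is the one that matters for shedding connections a CDN holds open but never reuses.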

I’m using Caddy 0.10.4.

I’ve added the timeouts directive as you requested - I’ll probably have results back in a day or two.

This isn’t the first time I’ve heard of Cloudflare holding onto lots of connections. Not really sure why… but anyway, see if that helps and let me know!

It’s still early, but my Caddy server is doing very well so far. Cloudflare now keeps only about 100-200 connections open, and the Caddy process’s RSS and heap usage are stable.

Diving off into speculation land: one of the features Cloudflare bills as part of Argo is persistent connections to the origin. If that’s what’s opening up thousands of connections to my origin, it might very well indicate an issue on Cloudflare’s side.

Unfortunately, I don’t fully control this site - the Cloudflare account, domain, and other assets are owned by the client on whose behalf I manage everything.

Looking forward to Caddy 0.10.5 with these changes!


What’s the benefit of keeping thousands of idle connections open, I wonder? They say:

This results in fewer open connections using server resources.

But it seems like the opposite.

We run Caddy as a front door to a Python-based service. We had some similar issues with connections remaining open. It turned out to be gunicorn; switching to uWSGI helped a lot. Might be worth looking at the back side of the proxy rather than the front.


My problem was with too many connections to my frontend (Caddy).

That being said, I did see a high number of connections from Caddy to my gunicorn server because I used sync workers, which don’t support keep-alive connections. However, switching to async workers (which do support HTTP keep-alive) gave me undesirable spikes in application response time (despite my application being mostly I/O-bound), so I decided to nix the idea for the moment.

I will look into using an async worker and/or uWSGI at a later time, but for the moment, my needs are met adequately by gunicorn’s sync worker type.
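On the Caddy-to-gunicorn side, Caddy v1’s `proxy` directive also lets you cap the pool of idle keep-alive connections to the backend. A sketch, with the upstream address assumed for illustration:

    proxy / 127.0.0.1:8000 {
        transparent
        keepalive 32
    }

With sync gunicorn workers keep-alive is moot anyway, but this becomes relevant once the backend supports persistent connections.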

Long-term, I don’t see this being a major issue, as I will have to scale my application out to multiple servers. Given my excellent experience with Caddy so far, it’s a given that when I reach that stage, I will be using Caddy for load balancing and SSL termination. That is also when I would re-evaluate my use of gunicorn.


How do I check that? I am using uvicorn, and when I make requests from a client behind Cloudflare’s WARP VPN, Caddy’s memory usage grows and does not subside. (The memory goes up by around 200MB per 10 requests!)