Caddy prematurely closes connections to reverse proxies when using HTTP/1

1. Output of caddy version:

v2.6.2 h1:wKoFIxpmOJLGl3QXoo6PNbYvGW4xLEgo32GPBEjWL8o=

2. How I run Caddy:

a. System environment:

Ubuntu 20.04 on amd64

b. Command:

caddy run --config Caddyfile

d. My complete Caddy config:

(Domain names and paths have been altered to preserve my privacy; otherwise the Caddyfile is shared as-is.)

{
    handle_path /api/* {
        reverse_proxy localhost:2000 {
            header_down -Server
        }
    }

    handle {
        root * /app/public
    }
}
3. The problem I’m having:

I have an API server running on port 2000, fronted by Caddy under the /api path, as shown above. The API server only supports HTTP/1. Whenever it receives a POST request, it sends a 200 OK with Transfer-Encoding: chunked, and streams its response bit by bit, in delays of 2-3 seconds.

When I make requests to Caddy over HTTP/2, all requests to the API server work as expected. However, when I use HTTP/1 to make requests, I’ll sometimes get a truncated response that only contains the first chunk produced by the API server, but not the rest.

This can be easily reproduced with a script like the one below. Approximately 1 in 50 requests fail this way, but when I remove --http1.1 and let curl default to HTTP/2, all requests complete successfully.

expected_output='...'
for i in {1..200}; do
    curl --http1.1 -sSi -X POST --data '...' > output.txt
    if ! grep -Fq "$expected_output" output.txt; then
        echo "Received truncated output"
    fi
done
Since there are no obvious issues with the API server, I took a packet capture between Caddy and the API server, and discovered that the connection for the failing request was being prematurely closed, as shown in the red rectangle.

Specifically, the API server had just sent the first chunk of data. Caddy acknowledged the received data, which is fine, but then it also requested closure of the TCP connection. The FIN, ACK was reciprocated by the app server, and the connection was closed.

How can I prevent this behaviour, since it’s clearly unexpected?

4. Error messages and/or full log output:

I’m not sure if this information regarding how Caddy handles connections can be logged at all. If there’s another way to get this info, I’d love to know.
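One thing that might help here: as far as I know, Caddy's debug global option makes the reverse_proxy module log its upstream round trips at debug level. It goes at the top of the Caddyfile:

    {
        debug
    }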

5. What I already tried:

Please see the previous sections.

6. Links to relevant resources:



I’ve tested this further, and it seems like this also happens when Caddy connects to the upstream (app server) over a Unix socket.

Hmm. You might want to set flush_interval -1 to ensure no buffering occurs:
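For reference, flush_interval is a subdirective of reverse_proxy; a sketch of where it would go in your config (localhost:2000 standing in for your real upstream):

    reverse_proxy localhost:2000 {
        flush_interval -1
        header_down -Server
    }

A value of -1 disables response buffering entirely, so Caddy flushes each piece of the response to the client as soon as it arrives.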

I tried this, as well as playing around with the keepalive options under transport http, but it still happens.
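For completeness, the variations I tried looked roughly like this (the upstream address is a placeholder, and the keepalive setting is just one example of what I played with):

    reverse_proxy localhost:2000 {
        flush_interval -1
        transport http {
            keepalive off
        }
    }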

Should I file a bug report on GitHub (since Caddy shouldn’t be terminating the connection on a partial response), and is there any other information you need beyond what I’ve mentioned above?


Yeah, please file a bug report. This isn’t behaviour I’ve seen before.

Unless this was you who opened this one? Looks related:

This topic was automatically closed after 30 days. New replies are no longer allowed.