Unexpected 200 response instead of 413

1. The problem I’m having:

I want Caddy to always read the complete POST body, but I doubt it does.
To test this, I set up a simple POST endpoint and sent arbitrary amounts of data to it. No matter how big the request was, Caddy returned a 204 within a few milliseconds:

{
    ### Global options
    #   no admin interface
    admin off

    #   log to stdout for systemd or Docker
    log {
        output stdout
        format console
        level debug
    }

    debug
}

# -------------------------------------------------
# HTTP listener
# -------------------------------------------------
http:// {
    request_body  {
        max_size 3KiB
    }

    handle /upload {
        respond "" 204
    }

    handle {
        respond "Not found" 404
    }
}

Now I try to pass the request through a reverse proxy. I have put the request_body directive in many places, but I cannot get Caddy to respond with a 413 when the input is too large. I use curl for my tests:

$ time curl -ki -H 'Content-type: application/octet-stream' --data-binary @verylargebin --noproxy \* http://10.100.200.32:10080/upload
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Connection: close
Content-Length: 0
Date: Sun, 08 Feb 2026 14:56:05 GMT
Server: Caddy
Via: 1.1 Caddy


real	0m0,044s
user	0m0,013s
sys	    0m0,031s

verylargebin is a 15 MB file; using a 1 GB file results in an out-of-memory error.
My actual goal is to measure the time required for valid HTTP POST requests. Any idea where I am going wrong? I would happily skip the reverse proxy if there is an easier way.
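Since the actual goal is timing valid POSTs, here is a small client-side sketch using only the standard library (timed_post and the example URL are my own hypothetical names, not from the thread). One possibly relevant difference from curl: urllib does not send an Expect: 100-continue header.

```python
import time
import urllib.request


def timed_post(url: str, payload: bytes) -> tuple[int, float]:
    """POST payload to url, drain the response, return (status, seconds)."""
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()          # make sure the full response body is consumed
        status = resp.status
    return status, time.perf_counter() - start


# Example (against the endpoint from the curl command above):
# status, elapsed = timed_post("http://10.100.200.32:10080/upload",
#                              open("verylargebin", "rb").read())
```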

2. Error messages and/or full log output:

2026/02/08 14:50:35.997	DEBUG	http.handlers.reverse_proxy	selected upstream	{"dial": "127.0.0.1:60080", "total_upstreams": 1}
2026/02/08 14:50:35.997	DEBUG	http.handlers.reverse_proxy	upstream roundtrip	{"upstream": "127.0.0.1:60080", "duration": 0.000463496, "request": {"remote_ip": "10.100.200.32", "remote_port": "50320", "client_ip": "10.100.200.32", "proto": "HTTP/1.1", "method": "POST", "host": "10.100.200.32:10080", "uri": "/upload", "headers": {"Content-Type": ["application/octet-stream"], "Content-Length": ["15728640"], "Expect": ["100-continue"], "X-Forwarded-For": ["10.100.200.32"], "X-Forwarded-Host": ["10.100.200.32:10080"], "Via": ["1.1 Caddy"], "User-Agent": ["curl/8.5.0"], "Accept": ["*/*"], "X-Forwarded-Proto": ["http"]}}, "headers": {"Server": ["Caddy"], "Date": ["Sun, 08 Feb 2026 14:50:35 GMT"], "Content-Length": ["0"]}, "status": 200}

3. Caddy version:

$ caddy --version
v2.10.2 h1:g/gTYjGMD0dec+UgMw8SnfmJ3I9+M2TdvoRL/Ovu6U8=

4. How I installed and ran Caddy:

docker run --rm \
    -p 10080:80\
    --user "$(id -u):$(id -g)" \
    -v $(pwd)/conf:/config \
    -v $(pwd)/conf:/etc/caddy \
    -v $(pwd)/data:/data \
    -v $(pwd)/certs:/etc/caddy/certs:ro \
    -v $(pwd)/large.bin:/srv/static/large.bin:ro \
    -e 'CORS_ORIGINS=https://localhost,https://[::1]' \
    -e 'MAX_CONNS=20' \
    -e 'MAX_UPLOAD_SIZE=2097152' \
    caddy:2

a. System environment:

Docker version 27.2.1

b. Command:

caddy run --config /etc/caddy/Caddyfile --adapter caddyfile

c. Service/unit/compose file:

d. My complete Caddy config:

{
    ### Global options
    #   no admin interface
    admin off

    #   log to stdout for systemd or Docker
    log {
        output stdout
        format console
        level debug
    }

    debug
}

#####   upload size limiter <= NOT GLOBALLY
####@upload-api path /upload

# -------------------------------------------------
# internal reverse proxy endpoint
# -------------------------------------------------
http://127.0.0.1:60080 {
    request_body {
        max_size 3KiB
    }
    handle /upload {
        header +X-Reverse-Proxy-Handler yes
        respond "Read the file" 200
        #import upload
    }

    handle {
        respond "Not found" 404
    }
}

# -------------------------------------------------
# HTTP listener
# -------------------------------------------------
http:// {
    request_body  {
        max_size 3KiB
    }

    handle /upload {
        #request_body {
        #    max_size 3KiB
        #}
        reverse_proxy http://127.0.0.1:60080
    }

    handle {
        respond "Not found" 404
    }
}

5. Links to relevant resources:

At one point, I did encounter a 413 with this configuration:

{
    ### Global options
    #   no admin interface
    admin off

    #   log to stdout for systemd or Docker
    log {
        output stdout
        format console
        level debug
    }

    debug
}

# -------------------------------------------------
# internal reverse proxy endpoint
# -------------------------------------------------
http://127.0.0.1:60080 {
    handle /upload {
        header +X-Reverse-Proxy-Handler yes
        respond "" 204
    }

    handle {
        respond "Not found" 404
    }
}

# -------------------------------------------------
# HTTP listener
# -------------------------------------------------
http:// {
    request_body  {
        max_size 3KiB
    }

    handle /upload {
        reverse_proxy http://127.0.0.1:60080
    }

    handle {
        respond "Not found" 404
    }
}

These were my log lines:

2026/02/08 15:27:58.762	DEBUG	http.handlers.reverse_proxy	selected upstream	{"dial": "127.0.0.1:60080", "total_upstreams": 1}
2026/02/08 15:27:58.762	DEBUG	http.handlers.reverse_proxy	upstream roundtrip	{"upstream": "127.0.0.1:60080", "duration": 0.000581973, "request": {"remote_ip": "10.100.200.32", "remote_port": "48884", "client_ip": "10.100.200.32", "proto": "HTTP/1.1", "method": "POST", "host": "10.100.200.32:10080", "uri": "/upload", "headers": {"Content-Length": ["524288000"], "X-Forwarded-For": ["10.100.200.32"], "X-Forwarded-Proto": ["http"], "X-Forwarded-Host": ["10.100.200.32:10080"], "Via": ["1.1 Caddy"], "Content-Type": ["application/octet-stream"], "Expect": ["100-continue"], "User-Agent": ["curl/8.5.0"], "Accept": ["*/*"]}}, "error": "readfrom tcp 127.0.0.1:55172->127.0.0.1:60080: {id=fmftd5rvb} requestbody.errorWrapper.Read (requestbody.go:117): HTTP 413: http: request body too large"}
2026/02/08 15:27:58.763	DEBUG	http.log.error	http: request body too large	{"request": {"remote_ip": "10.100.200.32", "remote_port": "48884", "client_ip": "10.100.200.32", "proto": "HTTP/1.1", "method": "POST", "host": "10.100.200.32:10080", "uri": "/upload", "headers": {"User-Agent": ["curl/8.5.0"], "Accept": ["*/*"], "Content-Type": ["application/octet-stream"], "Content-Length": ["524288000"], "Expect": ["100-continue"]}}, "duration": 0.000724988, "status": 413, "err_id": "fmftd5rvb", "err_trace": "requestbody.errorWrapper.Read (requestbody.go:117)"}

Executing the same curl command again, I get 200 responses :thinking:

2026/02/08 15:28:03.198	DEBUG	http.handlers.reverse_proxy	selected upstream	{"dial": "127.0.0.1:60080", "total_upstreams": 1}
2026/02/08 15:28:03.198	DEBUG	http.handlers.reverse_proxy	upstream roundtrip	{"upstream": "127.0.0.1:60080", "duration": 0.000291989, "request": {"remote_ip": "10.100.200.32", "remote_port": "56002", "client_ip": "10.100.200.32", "proto": "HTTP/1.1", "method": "POST", "host": "10.100.200.32:10080", "uri": "/upload", "headers": {"X-Forwarded-Host": ["10.100.200.32:10080"], "User-Agent": ["curl/8.5.0"], "Accept": ["*/*"], "Content-Length": ["524288000"], "X-Forwarded-For": ["10.100.200.32"], "Via": ["1.1 Caddy"], "Content-Type": ["application/octet-stream"], "Expect": ["100-continue"], "X-Forwarded-Proto": ["http"]}}, "headers": {"Server": ["Caddy"], "Date": ["Sun, 08 Feb 2026 15:28:03 GMT"], "Content-Length": ["0"]}, "status": 200}
2026/02/08 15:28:33.206	DEBUG	http.handlers.reverse_proxy	selected upstream	{"dial": "127.0.0.1:60080", "total_upstreams": 1}
2026/02/08 15:28:33.207	DEBUG	http.handlers.reverse_proxy	upstream roundtrip	{"upstream": "127.0.0.1:60080", "duration": 0.000264885, "request": {"remote_ip": "10.100.200.32", "remote_port": "57820", "client_ip": "10.100.200.32", "proto": "HTTP/1.1", "method": "POST", "host": "10.100.200.32:10080", "uri": "/upload", "headers": {"Content-Length": ["524288000"], "Expect": ["100-continue"], "X-Forwarded-Host": ["10.100.200.32:10080"], "Via": ["1.1 Caddy"], "User-Agent": ["curl/8.5.0"], "Content-Type": ["application/octet-stream"], "X-Forwarded-For": ["10.100.200.32"], "X-Forwarded-Proto": ["http"], "Accept": ["*/*"]}}, "headers": {"Server": ["Caddy"], "Date": ["Sun, 08 Feb 2026 15:28:33 GMT"], "Content-Length": ["0"]}, "status": 200}

It’s not even a 204 as configured…

Finally, I have a working setup. If I use a very tiny “external” webserver process, I can see the 413. This is my Caddyfile:

{
    ### Global options
    #   no admin interface
    admin off

    #   log to stdout for systemd or Docker
    log {
        output stdout
        format console
        level debug
    }

    debug
}

# -------------------------------------------------
# HTTP listener
# -------------------------------------------------
http:// {
    @upload {
        path /upload
        method POST
    }

    handle @upload {
        request_body {
            max_size 3KiB
        }
        reverse_proxy http://10.197.40.117:60080
    }

    handle {
        respond "Not found" 404
    }
}

This is my tiny webserver:

from aiohttp import web
import argparse

# Upload endpoint
async def upload(request):
    if request.method != "POST":
        return web.Response(status=404, text="Not Found")

    total = 0
    with open("/dev/null", "wb") as devnull:
        async for chunk in request.content.iter_chunked(65536):
            if not chunk:
                break
            devnull.write(chunk)
            _bytes = len(chunk)
            total += _bytes
            debug(f"[sink] sunk {_bytes} bytes")

    debug(f"[sink] sunk {total} bytes in total")
    return web.Response(status=200, text="OK")

# Catch-all: 404
async def not_found(request):
    return web.Response(status=404, text="Not Found")

def parse_args():
    p = argparse.ArgumentParser(description="Tiny upload sink")
    p.add_argument("-H", "--host", default="localhost", help="Host/interface to bind to")
    p.add_argument("-p", "--port", type=int, default=10080, help="Port to listen on")
    p.add_argument("-d", "--debug", action="store_true", help="Switch on debug output")
    return p.parse_args()

def debug(s):
    if _debug:
        print(s)
    return

app = web.Application()
app.router.add_post("/upload", upload)
app.router.add_route('*', '/{tail:.*}', not_found)
_debug = 0

if __name__ == "__main__":
    args = parse_args()
    _debug = args.debug
    web.run_app(app, host=args.host, port=args.port)

I start it using python3 my-sink.py -p 60080 -H 10.197.40.117 -d.
curl tells me:

curl -i -H 'Content-type: application/octet-stream' --data-binary @verylargebin --noproxy \* http://10.197.40.117:10080/upload
HTTP/1.1 100 Continue

HTTP/1.1 413 Request Entity Too Large
Connection: close
Server: Caddy
Date: Thu, 19 Feb 2026 13:18:36 GMT
Content-Length: 0

This is what I wanted, but I want to improve it. The extra small webserver is one more process to monitor, so I would prefer a Caddy-only approach. I changed my Caddyfile to:

{
    ### Global options
    #   no admin interface
    admin off

    #   log to stdout for systemd or Docker
    log {
        output stdout
        format console
        level debug
    }

    debug
}

##### -------------------------------------------------
##### internal reverse proxy endpoint
##### -------------------------------------------------
http://127.0.0.1:50080 {
    @upload {
        path /upload
        method POST
    }

    handle @upload {
        respond "" 204
    }

    handle {
        respond "Not found" 404
    }
}
# -------------------------------------------------
# HTTP listener
# -------------------------------------------------
http:// {
    #request_body  {
    #    max_size 3KiB
    #}

    @upload {
        path /upload
        method POST
    }

    handle @upload {
        request_body {
            max_size 3KiB
        }
        reverse_proxy http://127.0.0.1:50080
    }

    handle {
        respond "Not found" 404
    }
}

My idea is to proxy to another Caddy endpoint. This time curl tells me:

curl -i -H 'Content-type: application/octet-stream' --data-binary @verylargebin --noproxy \* http://10.197.40.117:10080/upload
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Connection: close
Content-Length: 0
Date: Thu, 19 Feb 2026 13:22:53 GMT
Server: Caddy
Via: 1.1 Caddy

And I don’t understand what the difference is. Here is my Caddy output:

{"level":"info","ts":1771507380.2701485,"msg":"maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined"}
{"level":"info","ts":1771507380.2702658,"msg":"GOMEMLIMIT is updated","package":"github.com/KimMachineGun/automemlimit/memlimit","GOMEMLIMIT":14765945241,"previous":9223372036854775807}
{"level":"info","ts":1771507380.27029,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1771507380.271444,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"warn","ts":1771507380.2714515,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
{"level":"info","ts":1771507380.2717884,"msg":"redirected default logger","from":"stderr","to":"stdout"}
2026/02/19 13:23:00.271	WARN	admin	admin endpoint disabled
2026/02/19 13:23:00.271	WARN	http.auto_https	server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server	{"server_name": "srv1", "http_port": 80}
2026/02/19 13:23:00.271	INFO	tls.cache.maintenance	started background certificate maintenance	{"cache": "0xc00050f000"}
2026/02/19 13:23:00.271	DEBUG	http.auto_https	adjusted config	{"tls": {"automation":{"policies":[{}]}}, "http": {"servers":{"srv0":{"listen":[":50080"],"routes":[{"handle":[{"handler":"subroute","routes":[{"group":"group4","handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"static_response","status_code":204}]}]}],"match":[{"method":["POST"],"path":["/upload"]}]},{"group":"group4","handle":[{"handler":"subroute","routes":[{"handle":[{"body":"Not found","handler":"static_response","status_code":404}]}]}]}]}],"terminal":true}],"automatic_https":{"skip":["127.0.0.1"]}},"srv1":{"listen":[":80"],"routes":[{"group":"group6","handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"request_body","max_size":3072},{"handler":"reverse_proxy","upstreams":[{"dial":"127.0.0.1:50080"}]}]}]}]},{"group":"group6","handle":[{"handler":"subroute","routes":[{"handle":[{"body":"Not found","handler":"static_response","status_code":404}]}]}]}],"automatic_https":{"disable":true}}}}}
2026/02/19 13:23:00.272	DEBUG	http	starting server loop	{"address": "[::]:50080", "tls": false, "http3": false}
2026/02/19 13:23:00.272	WARN	http	HTTP/2 skipped because it requires TLS	{"network": "tcp", "addr": ":50080"}
2026/02/19 13:23:00.272	WARN	http	HTTP/3 skipped because it requires TLS	{"network": "tcp", "addr": ":50080"}
{"level":"info","ts":1771507380.272923,"msg":"serving initial configuration"}
2026/02/19 13:23:00.272	INFO	http.log	server running	{"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2026/02/19 13:23:00.272	DEBUG	http	starting server loop	{"address": "[::]:80", "tls": false, "http3": false}
2026/02/19 13:23:00.272	WARN	http	HTTP/2 skipped because it requires TLS	{"network": "tcp", "addr": ":80"}
2026/02/19 13:23:00.272	WARN	http	HTTP/3 skipped because it requires TLS	{"network": "tcp", "addr": ":80"}
2026/02/19 13:23:00.272	INFO	http.log	server running	{"name": "srv1", "protocols": ["h1", "h2", "h3"]}
2026/02/19 13:23:00.272	DEBUG	events	event	{"name": "started", "id": "7b2a01ca-5720-447f-8ded-c5ef3448e171", "origin": "", "data": null}
2026/02/19 13:23:00.272	INFO	autosaved config (load with --resume flag)	{"file": "/config/caddy/autosave.json"}
2026/02/19 13:23:00.273	INFO	tls	storage cleaning happened too recently; skipping for now	{"storage": "FileStorage:/data/caddy", "instance": "2e1a987d-f825-4f5a-a773-0e694abd2e7a", "try_again": "2026/02/20 13:23:00.273", "try_again_in": 86399.999999688}
2026/02/19 13:23:00.273	INFO	tls	finished cleaning storage units
2026/02/19 13:25:29.092	DEBUG	http.handlers.reverse_proxy	selected upstream	{"dial": "127.0.0.1:50080", "total_upstreams": 1}
2026/02/19 13:25:29.093	DEBUG	http.handlers.reverse_proxy	upstream roundtrip	{"upstream": "127.0.0.1:50080", "duration": 0.000344471, "request": {"remote_ip": "10.197.40.117", "remote_port": "57972", "client_ip": "10.197.40.117", "proto": "HTTP/1.1", "method": "POST", "host": "10.197.40.117:10080", "uri": "/upload", "headers": {"Accept": ["*/*"], "X-Forwarded-Proto": ["http"], "X-Forwarded-Host": ["10.197.40.117:10080"], "Content-Length": ["524288000"], "User-Agent": ["curl/8.5.0"], "Content-Type": ["application/octet-stream"], "X-Forwarded-For": ["10.197.40.117"], "Via": ["1.1 Caddy"], "Expect": ["100-continue"]}}, "headers": {"Content-Length": ["0"], "Server": ["Caddy"], "Date": ["Thu, 19 Feb 2026 13:25:29 GMT"]}, "status": 200}

Yeah, I don’t think it does. The request body is a stream; something has to try to consume the stream for the limit to do anything. You can do that with something like respond "Body: {http.request.body}", which reflects the request body back into the response body, at least consuming it. This uses an inefficient request buffer instead of streaming it into the response, because it needs to build a string it can concatenate, but it should at least let you confirm that the “first Caddy site → reverse_proxy → second Caddy site” setup returns a 413 in that case. I assume that’s the problem with your testing.
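For example, the upstream site from the config above could be changed to consume the body (a sketch, untested; ports and paths taken from the thread):

```
http://127.0.0.1:50080 {
    @upload {
        path /upload
        method POST
    }

    handle @upload {
        # Reflecting the body forces Caddy to read the stream, so the
        # 3KiB limit on the front server can actually trigger a 413.
        respond "Body: {http.request.body}" 200
    }

    handle {
        respond "Not found" 404
    }
}
```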


Thanks @francislavoie,

I understand, it cannot work the way I thought. So I’ll stick with my mini webserver behind the reverse proxy.
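For reference, the aiohttp sink above can also be written with the standard library alone. This is a sketch under my own naming (SinkHandler, run_sink are hypothetical); like the aiohttp version, it fully consumes the POST body in 64 KiB chunks:

```python
import http.server

CHUNK = 65536  # read the body in 64 KiB chunks, like the aiohttp sink


class SinkHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/upload":
            self.send_error(404, "Not Found")
            return
        # Consume the whole request body, discarding it.
        remaining = int(self.headers.get("Content-Length", 0))
        total = 0
        while remaining > 0:
            chunk = self.rfile.read(min(CHUNK, remaining))
            if not chunk:
                break  # client closed the connection early
            total += len(chunk)
            remaining -= len(chunk)
        self.send_response(204)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging


def run_sink(host: str = "127.0.0.1", port: int = 60080) -> None:
    """Blocking: serve the upload sink until the process is stopped."""
    http.server.HTTPServer((host, port), SinkHandler).serve_forever()
```

One process fewer to pip-install, at the cost of the single-threaded http.server, which is fine for sequential timing tests.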