Retries for POST requests with body

1. The problem I’m having:

I want Caddy to automatically retry failed POST requests (and all other HTTP verbs), not just GET. In my testing, GET requests are correctly retried with no failures, and POST requests without a body are also properly retried, but POST requests with a body are not. I believe the problem is related to buffering the POST request bodies, but I'm not sure how to make that happen. Any help would be greatly appreciated. Thank you!

2. Error messages and/or full log output:

When the backend is down:

    {"level":"error","ts":1713028574.890797,"logger":"http.log.error","msg":"readfrom tcp> http: invalid Read on closed Body","request":{"remote_ip":"","remote_port":"51773","client_ip":"","proto":"HTTP/1.1","method":"POST","host":"localhost","uri":"/dev/readiness-probe/","headers":{"User-Agent":["my-k6-user-agent"],"Content-Length":["31"]}},"duration":8.032010087,"status":502,"err_id":"m4he864dc","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}

3. Caddy version:

2.7.6-alpine (docker)

4. How I installed and ran Caddy:

Docker compose

a. System environment:

Docker compose (Docker for Mac)

b. Command:

Docker compose

c. Service/unit/compose file:

    image: caddy:2.7.6-alpine
    container_name: breww-web
    volumes:
      - app:/app
      - ./_build/dev/caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    ports:
      - "80:80"
    depends_on:
      - app
    networks:
      - app-tier

d. My complete Caddy config:

{
	auto_https off
}

:80 {
	root * /app
	encode gzip
	header -Server

	reverse_proxy {
		to app:8000

		lb_retries 8
		lb_policy least_conn
		lb_try_interval 1s  # Enough for KeyDB to have replicated a POST request's response cache
		lb_try_duration 8s  # Keep less than the cache time in IdempotencyMiddleware
		lb_retry_match {
		}

		header_up Host {}
		header_up X-Real-IP {}
		header_up X-Forwarded-For {http.request.header.x-forwarded-for}
		header_up X-Caddy-Request-ID {http.request.uuid}  # Used by IdempotencyMiddleware to return the original response again without processing it multiple times

		# I've tried adding buffer_requests here, which doesn't break config, but doesn't solve the problem
		# buffer_requests
	}
}
5. Links to relevant resources:

Thank you for an amazing package and for your help!

That’s not supported currently, because the entire request body would need to be buffered (buffering with the current implementation is chunked), and that is not done by default. The retry logic would need access to the buffer and would need to be able to rewind it before each retry.

Typically, POST retries are unsafe because the upstream app may trigger an error only after it has already performed some write operation.

The better thing to do would be to implement retries at the client-side, where fancier logic could be implemented based on the error response.

Remove this stuff; it clobbers Caddy’s default header handling. Let Caddy do the right thing: reverse_proxy (Caddyfile directive) — Caddy Documentation


Ah ok, thanks.

I appreciate that retrying POSTs is usually unsafe, but we’ve (in theory) implemented protection for this by leveraging header_up X-Caddy-Request-ID {http.request.uuid} to ensure that a cached response will be returned when a duplicate X-Caddy-Request-ID turns up within a short time of the original.

Will do! Not quite sure where they came from, to be honest, probably a bad example file I found online when we first started using Caddy.


It’s probably not impossible to implement, but it’s not been a commonly requested feature so it doesn’t exist yet. If you’re willing to dig into the code, you could take a shot at it.


After investigating to see what it would take to implement this, I found out that the proxy is already capable of this, but I had to patch a bug by changing half a line of code first. Should work now.

Let me know if it’s still not working!
