How can chunked requests be implemented in the Iris plugin for Caddy?

1. The problem I’m having:

Hello,

After developing an Iris plugin for Caddy, I routed requests to Iris once the iris.Application was built, with code that looks something like this:

type IrisPlugin struct {
	App      *iris.Application
}

func (p *IrisPlugin) CaddyModule() caddy.ModuleInfo {
	return caddy.ModuleInfo{
		ID:  "http.handlers.iris_plugin",
		New: func() caddy.Module { return new(IrisPlugin) },
	}
}

func (p *IrisPlugin) Provision(ctx caddy.Context) error {
	p.App = iris.New()
	p.App.Get("/test", func(ctx iris.Context) {
		ctx.ContentType("text/html")
		ctx.Header("Transfer-Encoding", "chunked")
		i := 0
		ints := []int{1, 2, 3, 5, 7, 9, 11, 13, 15, 17, 23, 29}
		// Send the response in chunks and wait for half a second between each chunk,
		// until connection close.
		err := ctx.StreamWriter(func(w io.Writer) error {
			ctx.Writef("Message number %d<br>", ints[i])
			time.Sleep(500 * time.Millisecond) // simulate delay.
			if i == len(ints)-1 {
				return errors.New("done") // ends the loop.
			}
			i++
			return nil // continue write
		})

		if err != nil && err.Error() != "done" {
			// Test it by canceling the request before the stream ends:
			// [ERRO] $DATETIME stream: context canceled.
			ctx.Application().Logger().Errorf("stream: %v", err)
		}
	})
	return p.App.Build()
}

func (p *IrisPlugin) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {
	p.App.ServeHTTP(w, r)
	return next.ServeHTTP(w, r)
}

// other code...

This allows Iris to handle requests. However, when an Iris endpoint returns a chunked response, the chunking doesn’t seem to take effect. How should I implement the ServeHTTP method of the Caddy plugin to address this?
(By the way, the same router does produce chunked transfer when the Iris web server is started on its own.)

2. Error messages and/or full log output:

{"level":"info","ts":1701142604.1817586,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"192.168.1.35","remote_port":"63868","client_ip":"192.168.1.35","proto":"HTTP/1.1","method":"GET","host":"192.168.1.180:40100","uri":"/test","headers":{"User-Agent":["curl/7.82.0"],"Accept":["*/*"]}},"bytes_read":0,"user_id":"","duration":6.013202284,"size":246,"status":200,"resp_headers":{"Transfer-Encoding":["chunked"],"Server":["Caddy"],"Content-Type":["text/html; charset=utf-8"]}}

3. Caddy version:

v2.7.5

4. How I installed and ran Caddy:

a. System environment:

debian11, arm64.

b. Command:

caddy run --config /test/caddy.json

c. My complete Caddy config:

{
  "admin": {
    "listen": ":2019"
  },
  "logging": {
    "logs": {
      "default": {
        "writer": {
          "filename": "/test/log/caddy.log",
          "output": "file",
          "roll": true,
          "roll_gzip": true,
          "roll_keep": 10,
          "roll_keep_days": 7,
          "roll_local_time": true,
          "roll_size_mb": 10
        },
        "level": "DEBUG"
      }
    }
  },
  "storage": {
    "module": "file_system",
    "root": "/test/caddy"
  },
  "apps": {
    "http": {
      "http_port": 40100,
      "https_port": 443,
      "servers": {
        "srv0": {
          "listen": [
            ":40100"
          ],
          "routes": [
            {
              "group": "test",
              "match": [
                {
                  "path": [
                    "*"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "group": "test-sub",
                      "handle": [
                        {
                          "handler": "iris_plugin"
                        }
                      ],
                      "match": [
                        {
                          "path": [
                            "/test/*"
                          ]
                        }
                      ],
                      "terminal": true
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ],
          "logs": {
            "should_log_credentials": true
          }
        }
      }
    }
  }
}

5. Links to relevant resources:

I think you need to flush after every write to make it go out to the client, otherwise it gets buffered.

StreamWriter already flushes after every write:

func (ctx *Context) StreamWriter(writer func(w io.Writer) error) error {
	cancelCtx := ctx.Request().Context()
	notifyClosed := cancelCtx.Done()

	for {
		select {
		// response writer forced to close, exit.
		case <-notifyClosed:
			return cancelCtx.Err()
		default:
			if err := writer(ctx.writer); err != nil {
				return err
			}
			ctx.writer.Flush()
		}
	}
}

Then I dunno :man_shrugging: it’s not a Caddy-specific question; there’s not really anything special going on here.

Ok, I’ll find another way. Thanks for your reply!
