Slow reverse proxy compared to Apache

I am finding Caddy's reverse proxy significantly slower than Apache's. Here is how I am testing:

Apache (running on 81 and 444):

<VirtualHost *:444>
    ServerAdmin dmd@3e.org
    ServerName caddy.3e.org
    ProxyPreserveHost On
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPass / http://127.0.0.1:9604/ Keepalive=On
    ProxyPassReverse / http://127.0.0.1:9604/
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    SSLCertificateFile /etc/letsencrypt/live/caddy.3e.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/caddy.3e.org/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>

Caddy (running on default 80 and 443):

caddy.3e.org {
    reverse_proxy http://127.0.0.1:9604
}

I serve a test page with a simple python script:

admin@caddy:~/mir$ cat serve.py
import http.server
import socketserver

class ThreadedHTTPServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
    """HTTP server that handles each request in its own thread."""

class Handler(http.server.SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # Serve static files from the current directory.
        super().__init__(*args, directory=".", **kwargs)

if __name__ == "__main__":
    server_address = ("", 9604)  # listen on all interfaces, port 9604
    httpd = ThreadedHTTPServer(server_address, Handler)
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()

Then, loading the page in Chrome dev tools with “Disable cache” enabled, the Caddy server typically takes about 2x as long to Finish as Apache does.

The most obvious difference I see is that Caddy is serving all the images over a single connection, whereas Apache seems to be using multiple connections.

I am guessing there is a simple fix for this.
I have the test server up right now, if you want to try. Use the URLs:

caddy: https://caddy.3e.org/midjourney-v4/
apache: https://caddy.3e.org:444/midjourney-v4/

System environment:


$ caddy version
2.6.2

$ uname -a
Linux caddy 6.1.0-18-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux

$ cat /etc/debian_version
12.5

Installed from the debian package, using the package’s standard systemd unit to start.

Hmm, I’m not seeing that here.

Caddy:

Apache:

Caddy is faster from start to finish :thinking:

(Browser cache is disabled)

Maybe I am measuring wrong?

Based on our discussion on Discord, we’re seeing that Apache is serving all the image assets over HTTP/1.1, while H2 and H3 are only being used for external resources like CSS and JS files from CDNs that support those protocols. That explains why the browser opens multiple connections to Apache (the server doesn’t control that), whereas HTTP/2 and HTTP/3, which Caddy enables by default, multiplex requests over a single connection, so multiple connections aren’t necessary.

I’m not sure why I’m seeing the opposite results from you, but I have yet to be convinced that this is specifically a server problem at this point. It could be, of course. But I’m not sure that picture is painted quite yet.

Summarizing here for future searchers:

  • US west coast, US mountain west, and Germany don’t see any significant disparity.
  • US east and US southeast do see the disparity you’re describing.

I think, somehow, the phenomenon is regional!

EDIT: Welp, apparently one city in the US west is also seeing what you describe, but it’s very likely related to external factors:

  • HTTP/2 multiplexes streams, transferring much more data over a single connection, whereas HTTP/1.1 uses multiple smaller connections.
  • HTTP/3 is also used with Caddy, and it runs over UDP instead of TCP. Being new and highly complex, its implementation is continuously being optimized, so it could be a mix of that and the fact that some network middleboxes don’t like UDP much and deprioritize UDP traffic (see the sketch after this list for a way to test this).
  • The web host (AWS) could be throttling large connections (e.g. multimedia streams) from certain remotes, and favoring smaller, shorter-lived connections, to enhance throughput for multiple customers on shared resources.
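
To isolate whether HTTP/3 over UDP is the factor, one rough sketch is to disable only h3 in the global options so browsers fall back to HTTP/2 over TCP, then re-run the comparison (the default protocols are h1 h2 h3):

{
    servers :443 {
        protocols h1 h2
    }
}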

I’d be curious what results you get by self-hosting your page and running the test locally.
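
For a local run, a minimal sketch would be the same serve.py plus a Caddyfile like the following (Caddy uses a locally-trusted certificate for localhost by default, so HTTPS and H2/H3 still apply):

localhost {
    reverse_proxy 127.0.0.1:9604
}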

Confirming that

{
    servers :443 {
        protocols h1
    }
}

fixes the issue for me.

Is there any way to make “protocols h1” apply to only a single domain rather than globally to every domain I serve with Caddy?

(Guessing no because by the time caddy knows what domain’s being asked for, it’s too late?)

If TLS is being used, Caddy usually knows the domain before HTTP, thanks to the ServerName (SNI) in the TLS handshake. However, whether the Host header of subsequent HTTP requests matches the ServerName of the initial TLS handshake is another question. This is how domain fronting is accomplished, and it can be a handy privacy feature.

While enabling protocols on the server is a server-wide config option, TLS’s ALPN can be used to negotiate the application-layer protocol, such as http/1.1 or h2, per connection.

It is theoretically possible, then; but I don’t think we’ve had a feature request for this, and I’m not sure of the actual real-world benefits. (I understand it could act as a band-aid for this problem, but I’m still not 100% sure that’s the best way to solve it when it comes to code changes and complexity in the server…)

One thing you could do right now is serve that domain on a separate HTTP server (which implies a different socket) and configure the protocols for only that one.
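
As a rough sketch of that approach (port 8443 here is just an example, and the site would then need to be reached with that port in the URL):

{
    # Apply HTTP/1.1-only to the server listening on the alternate port.
    servers :8443 {
        protocols h1
    }
}

# This site gets its own listener, and therefore its own server.
caddy.3e.org:8443 {
    reverse_proxy 127.0.0.1:9604
}

# Other sites stay on :443 with the default protocols.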

I think you can configure ALPN already. See Http3 Demo | 萌え豚's Blog from @WeidiDeng, where you can see the use of the tls directive with the alpn option to specify HTTP versions.
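
Applied to this thread, a minimal sketch might look like this; it restricts the ALPN offer for just this site’s TLS connections, though I’m not certain it also prevents HTTP/3, which is advertised separately via Alt-Svc:

caddy.3e.org {
    # Offer only HTTP/1.1 during ALPN negotiation for this site.
    tls {
        alpn http/1.1
    }
    reverse_proxy 127.0.0.1:9604
}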
