High latency connection yields 502 error on graphical browsers, but works in curl/wget

1. The problem I’m having:

TL;DR: Multiple reverse proxies; all of them work using curl, and all but one work in graphical browsers. The exception is the proxy to a sensor on a high-latency connection.

Greetings. I am using Caddy to simplify viewing remote sensors that are deployed in the field and all connected via a private network. The sensors serve data over a simple HTTP interface. All of the sensors are identical (same firmware version), and the only operational difference is that one connection has high latency (see pings below). Because I’m using a private internal domain, I rely on Caddy’s root CA cert (which is amazing), installed on all my machines either in the Windows/Linux trust store or directly in the browser. I know the root cert works because I get valid certs on all of my sensors’ Web interfaces.

The reverse proxy to the high-latency sensor yields a 502 error in graphical Web browsers, but works using curl. The other sensors’ Web interfaces work fine in graphical browsers. The Caddyfile blocks for the various sensors are identical except for the IP and hostname. I have tried Chrome and Firefox on both Windows and Linux machines with identical results (502 on the lone sensor), even after clearing out all stored cache/cookies/etc. Given the high latency, as part of my troubleshooting I tried increasing the dial_timeout in that sensor’s reverse_proxy block in my Caddyfile, with no results (see the sketch below).
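
For illustration, here is a sketch of that transport tuning. The 30s dial_timeout is what I actually set; response_header_timeout and read_timeout are additional knobs the http transport exposes that could plausibly matter on a link this slow (the 2m values below are just guesses, not something I’ve confirmed fixes anything):

kalgoorlie.sensors.alpaca.lan {
	reverse_proxy http://10.47.6.43 {
		transport http {
			# Time allowed to establish the TCP connection to the upstream
			dial_timeout 30s
			# Time allowed for the upstream to start sending response headers
			response_header_timeout 2m
			# Timeout for reads from the upstream connection
			read_timeout 2m
		}
	}
	tls internal
}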

From within the Caddy Docker container I can ping, traceroute, and retrieve the Web page from the high-latency sensor, so I know the remote sensor is reachable from the Caddy container:

traceroute 10.47.6.43
traceroute to 10.47.6.43 (10.47.6.43), 30 hops max, 46 byte packets
 1  172.19.0.1 (172.19.0.1)  0.020 ms  0.020 ms  0.010 ms
 2  192.168.42.1 (192.168.42.1)  0.684 ms  0.699 ms  0.598 ms
 3  10.47.6.43 (10.47.6.43)  852.562 ms  814.856 ms  705.498 ms
As you can see below, the latency to this one sensor is fairly high. This is due to the connection type.

ping 10.47.6.43
64 bytes from 10.47.6.43: seq=0 ttl=62 time=681.270 ms
64 bytes from 10.47.6.43: seq=1 ttl=62 time=1313.156 ms
64 bytes from 10.47.6.43: seq=2 ttl=62 time=910.075 ms
64 bytes from 10.47.6.43: seq=3 ttl=62 time=747.996 ms
64 bytes from 10.47.6.43: seq=4 ttl=62 time=658.463 ms
64 bytes from 10.47.6.43: seq=5 ttl=62 time=677.466 ms
64 bytes from 10.47.6.43: seq=6 ttl=62 time=880.528 ms
64 bytes from 10.47.6.43: seq=7 ttl=62 time=694.905 ms
^C
--- 10.47.6.43 ping statistics ---
9 packets transmitted, 8 packets received, 11% packet loss
round-trip min/avg/max = 658.463/820.482/1313.156 ms
curl isn’t included in the Caddy Docker image, so I used wget instead. The headers match what I see using curl in other environments.

wget -S -O /dev/null http://10.47.6.43
Connecting to 10.47.6.43 (10.47.6.43:80)
  HTTP/1.1 200 OK
  Content-Type: text/html
  Accept-Ranges: bytes
  ETag: "2603125874"
  Last-Modified: Sun, 26 May 2024 11:39:35 GMT
  X-XSS-Protection: 1; mode=block
  X-Frame-Options: DENY
  Content-Length: 7507
  Connection: close
  Date: Thu, 30 May 2024 17:27:51 GMT
  Server: lighttpd

Additionally, the DNS hostnames resolve to the IP of the machine running Caddy whether I query from the host VM, from within the Caddy container, or from another machine.

2. Error messages and/or full log output:

This is the output in the Caddy log (via docker logs -f caddy) when trying to access the site via a graphical browser.

{
  "level": "error",
  "ts": 1717082973.411772,
  "logger": "http.log.error",
  "msg": "EOF",
  "request": {
    "remote_ip": "192.168.42.99",
    "remote_port": "58892",
    "client_ip": "192.168.42.99",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "kalgoorlie.sensors.alpaca.lan",
    "uri": "/",
    "headers": {
      "Sec-Ch-Ua-Mobile": [
        "?0"
      ],
      "Upgrade-Insecure-Requests": [
        "1"
      ],
      "Priority": [
        "u=0, i"
      ],
      "Sec-Fetch-Site": [
        "same-site"
      ],
      "Sec-Fetch-Mode": [
        "navigate"
      ],
      "Sec-Fetch-Dest": [
        "document"
      ],
      "Accept": [
        "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"
      ],
      "User-Agent": [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36"
      ],
      "Sec-Fetch-User": [
        "?1"
      ],
      "Accept-Encoding": [
        "gzip, deflate, br, zstd"
      ],
      "Accept-Language": [
        "en-US,en;q=0.9"
      ],
      "Cache-Control": [
        "max-age=0"
      ],
      "Sec-Ch-Ua": [
        "\"Google Chrome\";v=\"125\", \"Chromium\";v=\"125\", \"Not.A/Brand\";v=\"24\""
      ],
      "Sec-Ch-Ua-Platform": [
        "\"Windows\""
      ]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4867,
      "proto": "h2",
      "server_name": "kalgoorlie.sensors.alpaca.lan"
    }
  },
  "duration": 61.815510792,
  "status": 502,
  "err_id": "he0fiucwx",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:1269)"
}

When using curl against any of my sensors (including the high-latency one) I receive the output below (the actual HTML is truncated). With the exception of the hostname, the curl output for the high-latency sensor is line-for-line identical to that of my other sensors.

❯ curl -vL https://kalgoorlie.sensors.alpaca.lan
*   Trying 192.168.42.20:443...
* Connected to kalgoorlie.sensors.alpaca.lan (192.168.42.20) port 443 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_CHACHA20_POLY1305_SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: [NONE]
*  start date: May 30 11:16:17 2024 GMT
*  expire date: May 30 23:16:17 2024 GMT
*  subjectAltName: host "kalgoorlie.sensors.alpaca.lan" matched cert's "kalgoorlie.sensors.alpaca.lan"
*  issuer: CN=Caddy Local Authority - ECC Intermediate
*  SSL certificate verify ok.
* using HTTP/2
* h2h3 [:method: GET]
* h2h3 [:path: /]
* h2h3 [:scheme: https]
* h2h3 [:authority: kalgoorlie.sensors.alpaca.lan]
* h2h3 [user-agent: curl/7.88.1]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x555aec21ec80)
> GET / HTTP/2
> Host: kalgoorlie.sensors.alpaca.lan
> user-agent: curl/7.88.1
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 200
< accept-ranges: bytes
< alt-svc: h3=":443"; ma=2592000
< content-type: text/html
< date: Thu, 30 May 2024 18:25:41 GMT
< etag: "2603125874"
< last-modified: Sun, 26 May 2024 11:39:35 GMT
< server: Caddy
< server: lighttpd
< x-frame-options: DENY
< x-xss-protection: 1; mode=block
< content-length: 7507
<
<!DOCTYPE html>
<html>
  ....
</html>
* Connection #0 to host kalgoorlie.sensors.alpaca.lan left intact

3. Caddy version:

v2.8.0 h1:7ZCvB9R7qBsEydqBkYCOHaMNrDEF/fj0ZouySV2D474=

4. How I installed and ran Caddy:

a. System environment:

Debian 12, using Docker version 26.1.3, build b72abbb

b. Command:

docker compose pull
docker compose up --build -d

c. Service/unit/compose file:

services:
  caddy:
    image: caddy:2.8
    container_name: caddy
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - /home/willis/docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /home/willis/docker/caddy/site:/srv
      - /home/willis/docker/caddy/data:/data
      - /home/willis/docker/caddy/config:/config

d. My complete Caddy config:

melbourne.sensors.alpaca.lan {
	reverse_proxy http://10.33.75.88
	tls internal
}

darwin.sensors.alpaca.lan {
	reverse_proxy http://10.132.87.49
	tls internal
}

kalgoorlie.sensors.alpaca.lan {
	reverse_proxy http://10.47.6.43 {
		transport http {
			dial_timeout 30s
		}
	}
	tls internal
}

Thank you!

I don’t think anything can be done here. If the connection is that slow, then the problem you need to solve is finding a way to speed up that connection.

Browsers tend to have timeouts. The browser might be giving up because it doesn’t receive enough data in time to consider the connection still alive, so it kills the connection; because of that, Caddy has to give up as well and closes its connection to the upstream.

Hi,

Thanks for the response! Unfortunately the latency is due to a satellite-based connection, which we currently have no ability to change.

It looks like Firefox’s default HTTP timeout is 90 seconds, and Chrome’s is 5 minutes (at least it was in 2017).

If I connect to the end device directly by its IP, it always loads fine in any browser (in about 5-7 seconds); the issue only arises with Caddy in between. Caddy is so fast that I can’t imagine the few extra milliseconds of proxy latency would exceed any standard browser timeout.

Thank you!

If you add the debug global option and the verbose_logs option in reverse_proxy, Caddy can produce a lot more logs. It should show lots of detail about what the proxy is doing (individual copies/writes) so we can see what the traffic looks like and where it dies.
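
For example, a minimal sketch of what that looks like in the Caddyfile, applied to just the problem site (the rest of your config stays the same):

{
	# Global option: sets all logs to DEBUG level
	debug
}

kalgoorlie.sensors.alpaca.lan {
	reverse_proxy http://10.47.6.43 {
		# Log individual reads/writes during proxying at DEBUG level
		verbose_logs
		transport http {
			dial_timeout 30s
		}
	}
	tls internal
}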
