victor
(Victor)
March 20, 2024, 1:03am
Can the layer4 plugin be paired with the Caddyfile via an import block?
The Caddyfile would be the main config, and the import directive would pull in the layer4 JSON file.
Howdy @victor!
The caddy-l4 module doesn't support the Caddyfile, which is a bit of a bummer.
There is another extension that allows limited configuration of caddy-l4 via Caddyfile global options: caddy-ext/layer4 at master · RussellLuo/caddy-ext · GitHub
Beyond that, a large amount of special handling in the core Caddyfile adapter makes including Layer 4 support prohibitively difficult. If an effort could be made to port caddy-l4 into the official repository, this would be simpler, but that is an undertaking that requires skilled Caddy developers and contributors with the time and ability to prioritise that over other work, and I don't believe there are any concrete plans for it in the works right now.
Your best bet, currently, is to write your Caddyfile as normal and then `caddy adapt` it to JSON to merge with your caddy-l4 configuration before deploying it.
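Roughly, that workflow looks like this (filenames here are just placeholders):

```sh
# Adapt the Caddyfile to its JSON equivalent
caddy adapt --config Caddyfile --pretty > caddy.json

# Manually merge your layer4 app into the top-level "apps" object
# of caddy.json, then run the combined config:
caddy run --config caddy.json
```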
victor
(Victor)
March 20, 2024, 3:23am
Thanks. I'll give this a try. I'm basically looking to proxy UDP traffic to a WebRTC server.
victor
(Victor)
March 20, 2024, 4:52am
This works beautifully!
But I notice that Caddy is only listening on TCP. Does it automatically listen on UDP as well?
victor
(Victor)
March 20, 2024, 5:42am
I opted for the plugin. It works excellently as of right now. My WebRTC stream is working fine with SSL and everything.
victor
(Victor)
March 20, 2024, 5:52am
Still wondering about this…
matt
(Matt Holt)
March 20, 2024, 2:50pm
Without extra plugins, Caddy serves HTTP, and HTTP is over TCP, except HTTP/3, which is over UDP (but uses TCP to inform clients that UDP is available).
If you're using the L4 plugin, then you need to specify whether you want to listen on TCP or UDP, since it operates at a lower layer.
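For example, a minimal layer4 app proxying UDP might look like the sketch below; the `udp/` prefix in the listen (and dial) addresses is what selects UDP, and the server name, port, and upstream address here are placeholders:

```json
{
  "apps": {
    "layer4": {
      "servers": {
        "webrtc": {
          "listen": ["udp/:3478"],
          "routes": [
            {
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [{ "dial": ["udp/10.0.0.5:3478"] }]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
```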
victor
(Victor)
March 20, 2024, 5:21pm
Understood.
I tried both TCP (the default, it seems) and UDP. For some reason, the UDP listener closes within a few seconds. TCP works great.
Not finding anything in the logs either.
As soon as a connection is made via UDP, it stops listening.
matt
(Matt Holt)
March 20, 2024, 9:22pm
The UDP proxy is currently a bit broken. I wonder if this patch fixes it for you!
mholt:master ← jtackaberry:udp-server-overhaul, opened 12:00AM - 14 Aug 23 UTC
Following up from #140, the UDP server implementation has two fatal flaws:
1. UDP buffers were too small, resulting in truncated data being forwarded to upstreams for datagrams exceeding 1024 bytes (the buffer size)
2. A goroutine was created per-packet that called the proxy handler but never terminated
Problem 1 is an easy enough fix. Problem 2 is addressed by this PR in the following way:
* UDP "connections" are tracked in a map called `udpConns` (keyed on downstream client addr:port) for the given UDP server port
* When a new packet is received by a UDP server, the `udpConns` map is checked for that remote `addr:port`.
* If it doesn't exist:
  * a new `packetConn` is created for that downstream and a goroutine is created to call the long-running proxy handler
  * `packetConn` has been updated to include additional fields to allow for communication with the UDP server
  * Notably, there's a channel for `packetConn` to read new packets from the server for this particular downstream, and there's a channel for it to communicate back to the UDP server that the connection has been closed.
    * (This latter case allows the UDP server loop to both add and remove elements from the `udpConns` map, which avoids the need to acquire a mutex for each packet.)
* If it does exist:
  * the packet is sent to the `packetConn`'s packet channel
* `packetConn` now has the concept of an idle timeout. If no new packets are received from the downstream within a period of time (currently hardcoded) then the connection is considered closed. Any pending `Read()` call is immediately returned with io.EOF, which in turn causes the upstream connections to be closed, which allows the handler (and the per-connection goroutine which called it) to terminate.
I've tested this with HTTP/3 (with QUIC-enabled nginx and curl). Problem 1 stopped the current master branch dead in its tracks, since it couldn't get past the initial handshake (due to the first packet being 1200 bytes, which got truncated by the 1024 byte buffer).
Even after fixing that in master, the QUIC handshake didn't complete because each new packet from the downstream would be forwarded to the upstream from a different source port, so the QUIC server side saw them as unrelated packets.
With this PR, curl is able to fetch via caddy to the nginx upstream, both short requests as well as bulk downloads. Performance isn't brilliant, it must be said: with all 3 things (curl, caddy, nginx) running on the same box, caddy runs around 200-220% CPU in order to bottleneck nginx, which has a single thread bottleneck so runs at 98%+. The nginx bottleneck caps effective throughput to about 70-80MB/s. Disappointing results from QUIC there, but not Caddy's fault. :)
But it *works*, which is already an improvement.
Pending problems:
1. Channel buffer lengths are hardcoded and need some further consideration
2. UDP connection idle timeout is currently hardcoded, but should probably be configurable per server
3. Existing UDP connections are borked by config reloads
The last problem is the biggest obstacle right now, and I'm not really sure how to fix it. Here's what happens:
* `Handler.proxy()` is diligently doing `io.Copy()` for upstream->downstream and downstream->upstream (in separate goroutines)
* When config is reloaded, thanks to #132, the server UDP socket is closed, which allows the UDP server loop to terminate
* But because the original UDP server socket is closed, the `io.Copy()` invocations both immediately abort with "use of closed network connection"
* Consequently the QUIC transfer abruptly stalls out and the client needs to reconnect
This isn't a problem for TCP, because we get a separate socket representing the TCP connection that's independent from the listen socket. This isn't the case for UDP, where incoming data from all downstreams is always received over the same socket, so any in-flight handlers proxying data between upstream/downstream depend on this socket continuing to exist.
This is a consequence of how config reloading is done in general within Caddy, so I think I'll need to depend on your suggestions at this point. It's the main thing keeping this PR in draft status -- although even with that problem, the code in the PR still significantly improves UDP proxying behavior with caddy-l4.
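To make the PR's approach concrete, here's a minimal Go sketch of the connection-tracking pattern it describes. The names `udpConns` and `packetConn` come from the PR text; the buffer size, channel capacities, and idle timeout are illustrative assumptions, not the actual caddy-l4 code:

```go
package main

import (
	"net"
	"time"
)

// packetConn stands in for one downstream "connection" over the shared
// UDP listen socket. (Illustrative sketch, not the real caddy-l4 type.)
type packetConn struct {
	server  net.PacketConn  // shared listen socket; replies go out via WriteTo
	addr    net.Addr        // downstream client addr:port (the map key)
	packets chan []byte     // datagrams routed here by the server loop
	closed  chan<- net.Addr // notifies the server loop so it can prune udpConns
}

func serveUDP(pc net.PacketConn) {
	udpConns := make(map[string]*packetConn) // keyed on downstream addr:port
	closed := make(chan net.Addr, 64)        // buffered so handlers don't block on exit
	buf := make([]byte, 65535)               // big enough for any UDP datagram

	for {
		n, addr, err := pc.ReadFrom(buf)
		if err != nil {
			return // listen socket closed, e.g. on config reload
		}

		// Prune idled-out connections. Only this loop mutates udpConns,
		// which is what avoids a per-packet mutex.
		for done := false; !done; {
			select {
			case a := <-closed:
				delete(udpConns, a.String())
			default:
				done = true
			}
		}

		conn, ok := udpConns[addr.String()]
		if !ok {
			conn = &packetConn{
				server:  pc,
				addr:    addr,
				packets: make(chan []byte, 16),
				closed:  closed,
			}
			udpConns[addr.String()] = conn
			go handle(conn) // long-running proxy handler for this downstream
		}

		data := make([]byte, n)
		copy(data, buf[:n]) // ReadFrom reuses buf, so hand the handler a copy
		select {
		case conn.packets <- data:
		default: // handler is gone or backed up; drop rather than block the loop
		}
	}
}

func handle(c *packetConn) {
	defer func() { c.closed <- c.addr }()
	const idleTimeout = 30 * time.Second // hardcoded in the PR, per its TODO list
	idle := time.NewTimer(idleTimeout)
	defer idle.Stop()
	for {
		select {
		case pkt := <-c.packets:
			idle.Reset(idleTimeout)
			// Proxy pkt to the upstream here; replies come back downstream
			// via c.server.WriteTo(reply, c.addr).
			_ = pkt
		case <-idle.C:
			return // treated like EOF: upstream conns close, goroutine exits
		}
	}
}

func main() {
	pc, err := net.ListenPacket("udp", ":9999")
	if err != nil {
		panic(err)
	}
	serveUDP(pc)
}
```

The key design point is that only the server loop touches `udpConns`, so no per-packet mutex is needed; each handler reports its own exit over a channel instead.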
victor
(Victor)
March 20, 2024, 10:25pm
Does the plugin mentioned above build with this patch?
matt
(Matt Holt)
March 20, 2024, 10:53pm
Oh, I'm not sure about that. Probably though? No external config changes AFAIK.
victor
(Victor)
March 20, 2024, 11:10pm
OK, so it does automatically build with the l4 plugin, but how would I specify to include that pull request?
Build with `--with github.com/mholt/caddy-l4=github.com/jtackaberry/caddy-l4@udp-server-overhaul`.
Basically, `=` replaces the package with the fork, and `@` specifies which branch to use on the fork.
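For reference, the full xcaddy invocation might look like this (the branch name comes from the PR above; the `list-modules` call is just a quick sanity check):

```sh
# Build Caddy with the caddy-l4 module replaced by the fork's branch
xcaddy build \
  --with github.com/mholt/caddy-l4=github.com/jtackaberry/caddy-l4@udp-server-overhaul

# Verify the layer4 modules were compiled in
./caddy list-modules | grep layer4
```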
victor
(Victor)
March 21, 2024, 1:30am
Built successfully. UDP listener now stays on, but something must still be amiss because it's not proxying the traffic. When I port forward directly to the server that is serving the UDP traffic, it works. But once I go through caddy-l4, it doesn't.
system
(system)
Closed
April 20, 2024, 1:30am
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.