Caddy, IPv6, QUIC and AWS


(Sudarsh) #1

I’m on an AWS instance that doesn’t have any external IPv6 address.

Things are working as they used to (in the nginx era).

But I observe that Caddy seems to be listening only on the IPv6 443 port (see my Caddyfile at the bottom).

~# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      647/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1034/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      1034/sshd
tcp6       0      0 :::443                  :::*                    LISTEN      11097/caddy
udp        0      0 127.0.0.53:53           0.0.0.0:*                           647/systemd-resolve
udp        0      0 172.26.5.250:68         0.0.0.0:*                           626/systemd-network
udp6       0      0 :::443                  :::*                                11097/caddy

Now, these are the actual connections:

ESTAB    0    0     [::ffff:172.26.5.250]:https    [::ffff:117.246.90.203]:23110
ESTAB    0    0     [::ffff:172.26.5.250]:https    [::ffff:103.211.55.137]:36866
ESTAB    0    0     [::ffff:172.26.5.250]:https    [::ffff:106.203.67.117]:11468
ESTAB    0    0     [::ffff:172.26.5.250]:https    [::ffff:106.67.5.195]:14949
ESTAB    0    0     [::ffff:172.26.5.250]:https    [::ffff:223.182.81.121]:5542
ESTAB    0    3478    [::ffff:172.26.5.250]:https    [::ffff:43.243.175.190]:52120
ESTAB    0    0     [::ffff:172.26.5.250]:https    [::ffff:182.69.171.248]:54289
ESTAB    0    0     [::ffff:172.26.5.250]:https     [::ffff:223.176.38.30]:45754
ESTAB    0    0     [::ffff:172.26.5.250]:https     [::ffff:103.85.127.89]:34561
ESTAB    0    0     [::ffff:172.26.5.250]:https     [::ffff:45.64.236.248]:41798
ESTAB    0    0     [::ffff:172.26.5.250]:https    [::ffff:45.125.69.62]:56922
ESTAB    0    0     [::ffff:172.26.5.250]:https     [::ffff:103.92.113.98]:60008
ESTAB    0    0     [::ffff:172.26.5.250]:https     [::ffff:42.106.29.133]:1765

As you can see, the server seems to be representing the local IPv4 address as an IPv6-mapped (::ffff:) address. It’s also presenting the clients’ external IPv4 addresses to Caddy in the same mapped form.

Is this the expected behavior? Nginx used to listen on 172.26.5.250:443 rather than [::ffff:172.26.5.250]:443 like caddy does. And of course it used to show connections directly to the IPv4 address of the client.

What exactly is happening here?
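To check my reading that these are IPv4-mapped IPv6 addresses (the ::ffff: prefix), here’s a quick Go sketch (Go being Caddy’s own language; the helper name is mine) that recovers the embedded IPv4 address:

```go
package main

import (
	"fmt"
	"net"
)

// mappedToV4 returns the IPv4 form of s if s is an IPv4 or
// IPv4-mapped IPv6 address (::ffff:a.b.c.d), and "" otherwise.
func mappedToV4(s string) string {
	ip := net.ParseIP(s)
	if ip == nil {
		return ""
	}
	// To4 is non-nil exactly when the address can be represented
	// as IPv4 — i.e. it is really IPv4 traffic.
	if v4 := ip.To4(); v4 != nil {
		return v4.String()
	}
	return ""
}

func main() {
	// A peer address as reported by ss/netstat above.
	fmt.Println(mappedToV4("::ffff:172.26.5.250")) // 172.26.5.250
}
```

If I understand dual-stack sockets correctly, a single IPv6 listener with IPV6_V6ONLY disabled (the Linux default, net.ipv6.bindv6only=0) also accepts IPv4 connections and reports those peers in exactly this mapped form — which would explain why only a tcp6 listener shows up.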

My Caddyfile:

example.com:443 {
    tls /etc/ssl/caddy/fullchain.pem /etc/ssl/caddy/privkey.pem
    root /var/www/wordpress
    gzip
    bind 0.0.0.0

#  rewrite {
#    if {path} ends_with /
#    r ^/(.*)/$
#    to /{1}
#  }

#header (.png|.jpg|.css|.js)$ {
#  Cache-Control "public, max-age=11116000"
#}


    rewrite {
        if {path} not_match ^\/wp-admin
        to {path} {path}/ /wp-content/cache/supercache/{host}{uri}/index-https.html /index.php?_url={uri}
    }

    fastcgi / 106.3.61.104:9000 php
}

Can someone also take a look at the commented-out header block? Does the path matcher take wildcards like that?

PS: Is there any way to find out how much of the traffic is going out as UDP and how much as TCP from the Linux server? (I just want to know if anyone’s getting QUIC traffic at all.)


(Matthew Fay) #2

Hi @elos42,

Gotta admit, that IPv6 thing is new to me. What’s your distro, and how are you running Caddy?

Short answer: nope. The expires plugin can do something similar for the Expires header, though; because it generates the expiration time for each request, it’ll function very similarly to a simple Cache-Control header.
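For reference, the expires plugin matches a path regex and sets an Expires header that far in the future; a minimal sketch from memory (the regex and the 30d duration are placeholders — double-check the plugin’s docs for the supported duration units):

```
example.com {
    root /var/www/wordpress
    expires {
        match \.(png|jpg|css|js)$ 30d
    }
}
```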

I use Netdata on my Linux hosts, which can differentiate traffic by protocol.

Surprisingly, a lot of the common CLI network graphing tools don’t seem to differentiate TCP from UDP, since they generally just monitor the entire interface. I’m sure one must exist, but I’m not aware of it.
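If a rough cumulative split is enough (rather than a live graph), you could compare the kernel’s own counters in /proc/net/snmp — TCP segments sent versus UDP datagrams sent. A quick Go sketch (Linux only; the helper name is mine):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readCounter returns the value of a named counter for a protocol
// (e.g. proto "Tcp:", field "OutSegs") from /proc/net/snmp, which
// pairs a header line of field names with a line of values.
func readCounter(proto, field string) (uint64, error) {
	f, err := os.Open("/proc/net/snmp")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	var names []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		cols := strings.Fields(sc.Text())
		if len(cols) == 0 || cols[0] != proto {
			continue
		}
		if names == nil {
			names = cols // first matching line holds the field names
			continue
		}
		// second matching line holds the values
		for i, n := range names {
			if n == field && i < len(cols) {
				return strconv.ParseUint(cols[i], 10, 64)
			}
		}
	}
	return 0, fmt.Errorf("counter %s %s not found", proto, field)
}

func main() {
	// Counters are cumulative since boot; sample twice and diff
	// if you want a rate.
	tcp, _ := readCounter("Tcp:", "OutSegs")
	udp, _ := readCounter("Udp:", "OutDatagrams")
	fmt.Printf("TCP segments out: %d, UDP datagrams out: %d\n", tcp, udp)
}
```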


(Sudarsh) #3

I am on Debian Stretch. I found iptraf to get a breakdown of traffic by protocol.

On the header part, the header I want is no-index… any URL with a specific string in it has to get a no-index header. Unfortunately, there seems to be no easy way to do it.


(Matthew Fay) #4

There’s only one way I can think of, at present, and it gets really really hacky if you’re already doing rewriting.

You rewrite to prefix the URI if you find the string, then proxy it locally to itself to strip that prefix again. You can apply the header based on the URI prefix or you can have the other Caddy listener add that header. Yeah, like I said, it’s hacky, hahaha. This should communicate the concept:

example.com {
  root /path/to/html

  # If requested URI has a specific string, prefix it
  rewrite {
    if {uri} has "string"
    to /append-header{uri}
  }

  # Header can be applied here or on the proxy upstream
  # header /append-header X-My-Header "Header content"

  # Proxy based on the prefixed URI
  proxy /append-header localhost:2015 {
    without /append-header
  }
}

# Receive proxied requests in order to apply
# the header and preserve the URI so that our
# URI tricks remain transparent to the client
localhost:2015 {
  root /path/to/html
  internal /
  header / X-My-Header "Header content"
}

(Sudarsh) #5

Thanks. Not sure if it will conflict with the other rewrites I plan to have. I might keep the US proxy on Nginx, because that’s where Google comes calling, and Google still checks URL structures that were in use five years ago (which is where the whole problem starts). We moved from an old permalink structure to the current one about five years ago.


(Matthew Fay) #6

Gotcha. Are you issuing 301s for those URLs that Google is checking?


(Sudarsh) #7

Indeed. I have six regex-based rules, all dealing with some kind of legacy URL structure used over the past 10 years.


(Sudarsh) #8

I’ve already moved the India proxy to caddy and enabled quic on it. That handles 80% of the traffic anyway. From iptraf, it looks like some people are getting quic traffic, though they initially start off with an http/2 request. At least, the caddy logs only show http/2 requests. But the iptraf thing shows 20% of the traffic is via UDP.