Breaking the DDoS Attack Loop With Rate Limiting


Distributed denial-of-service (DDoS) attacks are growing in frequency and sophistication, thanks to attack tools available for a few dollars on the Dark Web and criminal marketplaces. A range of organizations were victims in 2022, from the Port of London Authority to Ukraine's national postal service.

Security leaders are already combating DDoS attacks by monitoring network traffic patterns, implementing firewalls, and using content delivery networks (CDNs) to distribute traffic across multiple servers. But putting more security controls in place can also produce more DDoS false positives: legitimate traffic that is not part of an attack but that still requires analysts to take mitigation steps before it causes service disruptions and brand damage.

Rate limiting is often considered the most efficient method of DDoS mitigation: URI-specific rate limiting prevents 47% of DDoS attacks, according to our State of Application Security Q4 2022 report. In reality, though, few engineering leaders know how to use it well. Here's how to employ rate limiting effectively while avoiding false positives.

Understand Expected Network Traffic and Vulnerabilities

Engineering leaders often find it difficult to implement rate limiting as a DDoS mitigation tool because they don’t know what thresholds to set. The first step is to answer the following questions:

  • How many users visit your application every minute?
  • How many report/dashboard actions can your application handle? Is the threshold the same for a password-reset page?
  • Server load on dashboards tends to be high, so they warrant lower rate limits. Could a limit that low also block legitimate users requesting a cheaper resource, such as a profile page?

Going over 100 requests in one minute on a login page could be enough to take the server down, while a product page might have no trouble handling 300 requests in a minute. That’s why it is useful to know the threshold of network traffic for each URL within each application.
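
As a rough sketch of this per-URL approach, the Python below implements a sliding-window limiter keyed by client and path. The paths, thresholds, and helper names are illustrative assumptions, not values from the report; real limits should come from your own measured baselines.

```python
import time
from collections import defaultdict, deque

# Hypothetical per-URL thresholds (requests per minute). Tune these to
# the traffic each page can actually absorb.
LIMITS_PER_MINUTE = {
    "/login": 100,    # login pages buckle under far less load
    "/product": 300,  # product pages tolerate heavier traffic
}
DEFAULT_LIMIT = 200
WINDOW_SECONDS = 60

# Sliding window of request timestamps, keyed by (client IP, URL path).
_requests = defaultdict(deque)

def allow_request(client_ip, path, now=None):
    """Return True if this request stays within the URL's rate limit."""
    now = time.monotonic() if now is None else now
    window = _requests[(client_ip, path)]
    # Evict timestamps older than the 60-second window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= LIMITS_PER_MINUTE.get(path, DEFAULT_LIMIT):
        return False  # over the per-URL threshold; challenge or block
    window.append(now)
    return True
```

In production this state would live in a shared store such as Redis rather than process memory, but the keying idea, a limit per (client, URL) rather than per domain, carries over.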

Network monitoring tools, log files, and buffer capacity can help teams develop accurate baseline traffic models and manage incoming and outgoing data flow. Suppose you ran a Christmas holiday campaign over 30 days with a request limit of 300 per minute. To clearly understand the expected network traffic, the security and DevOps teams need to know two things: how many requests were made each minute on average, and whether the team gets an alert to verify legitimacy when a minute spikes to, say, 480 requests.
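
Here is a minimal sketch of that baseline-and-alert logic, assuming you can export per-minute request counts from your logs. The alert factor is an assumption to adapt, not a value from the article:

```python
from statistics import mean

def baseline_and_alerts(per_minute_counts, alert_factor=1.5):
    """Return the average requests per minute over a campaign window and
    the minutes that warrant a manual check for legitimate traffic.

    alert_factor is an assumed threshold: against a 300/min baseline,
    a factor of 1.5 flags the 480-request minute from the example above.
    """
    baseline = mean(per_minute_counts)
    flagged = [
        (minute, count)
        for minute, count in enumerate(per_minute_counts)
        if count > baseline * alert_factor
    ]
    return baseline, flagged
```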

Having granular details on IP, host, domain, and URI vulnerabilities means teams can act more quickly to thwart DDoS attacks.

Numerous security teams have been surprised to receive alerts about attacks targeting their human resource management systems, not just consumer-facing business websites. It is vital to be aware of all the potential applications targeted by DDoS attacks to reduce false alarms.

Implement Custom Rate Limits on Various Parameters

Security teams want round-the-clock application availability and are relying on managed services to get more value from DDoS mitigation software. Built-in DDoS scrubbers help security leaders go beyond static rate limits and customize rules based on the behavior of inbound traffic by host, IP, URL, and geography.

So, what should cybersecurity teams know about rate limits?

  • Never set rate limits at the domain level (e.g., acme.com). Hundreds of URLs fall under one domain, which lowers the per-page request count needed to trigger the limit. This can cause unnecessary blocking of legitimate requests or, if you compensate by raising the overall limit, allow too many malicious requests through.
  • Set rate limits on the URL (e.g., acme.com/login) to control how heavily clients can hit a particular URL or set of URLs. Cybersecurity teams can set a different limit for each URL, and the server blocks requests once the count exceeds it.
  • Customize the rate of requests at the session level (the time a user is logged in) to detect unusual behavior that may indicate malicious activity and prevent servers from being overwhelmed. For example, a user opening acme.com 100 times a minute is not normal behavior.
  • Apply rate limits at the IP level to cap the number of requests or connections from a particular IP address. IP blacklisting, adding known malicious actors or sources to a blacklist, makes it easier for website owners to block traffic from IP addresses known to be involved in DDoS attacks.
  • Implement geographical rate limiting. Security leaders need to quickly examine IP address reputations and geolocation data to verify the source of traffic. As a best practice, I would recommend teams implement geo-fencing as a standard for all local applications. A sketch combining these checks follows this list.
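
Here is a minimal sketch of how those layers might compose, in the order a scrubbing rule set could evaluate them. Every value, the blocklist, the allowed regions, and the limits is a placeholder for illustration, not production policy:

```python
URL_LIMITS = {"/login": 100, "/product": 300}  # per-URL, never per-domain
DEFAULT_URL_LIMIT = 200
SESSION_LIMIT = 100   # page loads per session per minute
IP_LIMIT = 600        # requests per IP per minute
IP_BLOCKLIST = {"203.0.113.7"}  # known-bad sources (documentation range)
ALLOWED_REGIONS = {"GB", "IE"}  # geo-fence for a hypothetical local app

def check_request(ip, region, path, url_rate, session_rate, ip_rate):
    """Return the first rule a request violates, or None if it passes.

    The rates are this client's current requests per minute at each
    scope, as measured by the limiter in front of the application.
    """
    if ip in IP_BLOCKLIST:
        return "ip-blocklist"
    if region not in ALLOWED_REGIONS:
        return "geo-fence"
    if url_rate > URL_LIMITS.get(path, DEFAULT_URL_LIMIT):
        return "url-rate"
    if session_rate > SESSION_LIMIT:
        return "session-rate"
    if ip_rate > IP_LIMIT:
        return "ip-rate"
    return None
```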

By using the above methods, application owners end up setting more granular rate limits, guided by system recommendations based on user behavior. Combined with DDoS mitigation mechanisms such as tarpitting and CAPTCHA challenges issued before outright blocking, this keeps false positives to a minimum.
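
One way to picture that escalation, with thresholds that are assumptions for illustration rather than recommended values:

```python
def mitigation_action(observed_rate, limit):
    """Escalate gradually instead of blocking outright, so borderline
    traffic (a likely false positive) is slowed or challenged first."""
    ratio = observed_rate / limit
    if ratio <= 1.0:
        return "allow"
    if ratio <= 1.5:
        return "tarpit"   # delay responses to blunt the flood
    if ratio <= 2.0:
        return "captcha"  # challenge to separate humans from bots
    return "block"        # only clear overruns are dropped outright
```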

Cybersecurity decision-makers must take a multilayered approach to protection, building a clear understanding of network traffic patterns and using fully managed platforms to set rate limits informed by threat intelligence.


