Micro DDoS mitigation and sudden traffic spikes in resource-limited web applications

There is extensive documentation on how to mitigate (proactively reduce the impact of, while it is occurring) denial-of-service attacks on web applications. People usually point to services like Cloudflare, or, when the application runs on servers like Amazon EC2, let the provider take charge.

This theoretical question asks about alternatives for the same problem on a much smaller scale: there is no option to use a more complex solution, but with the advantage that the problem is simpler and a one-off event, and that the sysadmin or developer is watching it happen.

Situation A: Micro DDoS (small-scale denial-of-service attack)

A typical example of this type of attack (only the part that matters; the rest has been filtered out beforehand):

  • There are few attacking IPs and they do not change; the text logs show them.

Situation B: Sudden high access

For some reason your website suddenly becomes popular (you are quoted by someone famous or mentioned on television) and it goes through the following:

  • Hundreds of people access a few pages, most of them just the home page and one other page, both of which only display information and do nothing special on the server side.

In common

In both cases the site goes down, either shut down by the shared hosting company for high CPU usage or simply overwhelmed on your company's small server. The cost of generating pages without caching them is higher than the demand allows. It is not a network problem either, because there is enough bandwidth to meet the demand, but your application can only generate something like 15-25 req/s.

Assume you do not have root access: you cannot install new modules or change the operating system's firewall. You also cannot migrate the site to another server, for financial or time reasons, since both situations are one-off and last no more than 1 to 3 hours.

In addition to the language used by your application, you can also use any tool an ordinary user would have, such as access to .htaccess on an Apache server or web.config on IIS.
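For example, since the attacking IPs in Situation A are few and do not change, blocking them in .htaccess alone may be enough, with no application changes. A hedged sketch; the IPs are placeholders to be replaced with the ones found in the access log:

```apache
# Block known attacker IPs (addresses here are illustrative).
# Apache 2.2 syntax, still common on shared hosting:
Order Allow,Deny
Allow from all
Deny from 203.0.113.45
Deny from 198.51.100.0/24

# Apache 2.4 equivalent:
# <RequireAll>
#     Require all granted
#     Require not ip 203.0.113.45
#     Require not ip 198.51.100.0/24
# </RequireAll>
```

Whether this works depends on the host's AllowOverride configuration, but it rejects the requests before your application code ever runs, which is exactly what a CPU-bound site needs.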

How could situations like these two be solved, creatively?

1 answer

Situation 1 is the easiest to work around.

Create a wrapper to monitor all accesses to your application (a Servlet Filter in Java, an HTTP Module/Filter in ASP.NET), count requests per source IP, set a maximum access threshold, and ban for a period any IP that exceeds it.
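A minimal sketch of the per-IP counting logic such a filter could delegate to. The thresholds (60 requests per 10-second window, 5-minute ban) are illustrative, not prescribed by the question:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Per-IP request counter with a fixed window and a temporary ban.
// A Servlet Filter / HTTP Module would call allow() for each request
// and return 403 (or simply drop the connection) when it is false.
public class IpRateLimiter {
    private static final int MAX_REQUESTS = 60;       // per window (illustrative)
    private static final long WINDOW_MS = 10_000;     // counting window
    private static final long BAN_MS = 5 * 60_000;    // ban duration

    private static class Entry {
        long windowStart;
        int count;
        long bannedUntil;
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();

    /** Returns true if the request from this IP should be allowed. */
    public synchronized boolean allow(String ip, long now) {
        Entry e = entries.computeIfAbsent(ip, k -> new Entry());
        if (now < e.bannedUntil) {
            return false;                         // still banned
        }
        if (now - e.windowStart > WINDOW_MS) {    // start a new window
            e.windowStart = now;
            e.count = 0;
        }
        e.count++;
        if (e.count > MAX_REQUESTS) {
            e.bannedUntil = now + BAN_MS;         // threshold exceeded: ban
            return false;
        }
        return true;
    }
}
```

Passing the clock in as a parameter keeps the logic testable; in the filter itself you would call `allow(ip, System.currentTimeMillis())`.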

Situation 2 is a little more complicated, but there is still reason for optimism. You mentioned that the pages are informative, with little processing in the backend: basically HR/LW (high read, low write).

Homework

We can assume that the basic performance points are covered (static content such as CSS, JS and images is configured to be cached).

In advance, you can run stress tests to find the likely bottlenecks (database, rendering, shared resources). This alone eliminates 90% of the possible saturation points.

Before the crisis

Depending on how your application is implemented, you can define layers of temporary storage, from the most basic to the most impactful:

  • Shared objects stored in memory
  • Local storage of resources instead of remote
  • Storage of rendered HTML, reused as long as the underlying objects have not been updated
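The third layer above can be sketched as a small cache keyed by the page and a last-modified version. Where `lastModified` comes from is an assumption (e.g. a `MAX(updated_at)` query or an in-process counter); here it is just a parameter:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Cache of rendered HTML that only re-renders when the underlying
// objects have changed (signalled by a monotonically increasing version).
public class RenderCache {
    private static class Snapshot {
        final long version;
        final String html;
        Snapshot(long version, String html) {
            this.version = version;
            this.html = html;
        }
    }

    private final Map<String, Snapshot> cache = new ConcurrentHashMap<>();

    /** Returns cached HTML for the page, re-rendering only when stale. */
    public String get(String page, long lastModified, Supplier<String> render) {
        Snapshot s = cache.get(page);
        if (s == null || s.version < lastModified) {
            s = new Snapshot(lastModified, render.get()); // expensive path
            cache.put(page, s);
        }
        return s.html;
    }
}
```

Under the question's load profile (hundreds of readers, almost no writes), nearly every request becomes a map lookup instead of a render.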

During the crisis

You have cached everything you could and synchronized as little as possible, yet your server has reached 100% CPU. A possible solution:

  • Take a snapshot (the rendered HTML) of the offending pages. Prepare a Filter/Module to intercept all calls to these pages and return the captured snapshots as the response. If any content needs to be dynamic (login information, for example), make it run in an IFRAME or load it via AJAX.

Several services use this mechanism, for example Newegg Flash (http://www.neweggflash.com/) or Dealextreme (http://dx.com/). You may notice that the page initially loads without the user's credentials, even when the user is signed in.
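The interception step can be sketched as follows. Unlike the earlier cache, these snapshots are registered once, by hand, during the crisis, and served unconditionally; the paths and HTML are illustrative:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Crisis-mode filter: requests to frozen pages are short-circuited with a
// pre-captured snapshot; everything else falls through to the normal pipeline.
public class SnapshotFilter {
    private final Map<String, String> snapshots = new ConcurrentHashMap<>();

    /** Freeze a rendered page (captured beforehand, e.g. with curl). */
    public void freeze(String path, String renderedHtml) {
        snapshots.put(path, renderedHtml);
    }

    /**
     * Returns the frozen HTML if this path is snapshotted; an empty result
     * means the request should proceed to the normal (expensive) handler.
     */
    public Optional<String> intercept(String path) {
        return Optional.ofNullable(snapshots.get(path));
    }
}
```

In a real Servlet Filter or HTTP Module, a non-empty result would be written straight to the response, skipping the controller, templates, and database entirely.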

  • +1, in particular for citing the snapshot strategy. I get the impression that most developers who also know their way around servers would propose the same strategy. It is not complicated, but I think most programmers have no notion of how effective this approach can be: it can make even a website on shared hosting meet demand close to that of a dedicated server in a crisis situation.

    @Emersonrochaluiz, I had hands-on experience using snapshots for real-time substitution, and your analysis is correct: a saturated server had its combined web server + database load reduced to 4-6% of the pre-switch load.
