Denial of service with stress test

Viewed 756 times

18

First of all, a stress test, according to this answer from @guiandmag, is:

Stress testing consists of subjecting software to extreme situations. Basically, a stress test probes the software's limits and evaluates its behavior: how far the software can be pushed and what failures (if any) show up during the test.

I've always used JMeter to test my applications, and I've always pushed it hard to get what I needed. However, yesterday I decided to test other systems/sites out there, and to my "surprise", many of them went down.
More specifically, what I did was send HTTP requests from 100 users every 0.2 seconds to a given URL.
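
As a rough idea of what a load of that shape looks like outside JMeter, here is a minimal Python sketch. The URL, request count, and timing constants are placeholder assumptions, not values taken from any real test, and something like this should only ever be pointed at systems you are authorized to test:

    # Roughly the load pattern described above: 100 simulated users,
    # each sending one request every 0.2 seconds. All constants are placeholders.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://example.com/"   # hypothetical target; test only with authorization
    USERS = 100
    INTERVAL_SECONDS = 0.2
    REQUESTS_PER_USER = 10

    def simulate_user(user_id):
        # Each simulated user sends REQUESTS_PER_USER requests, pausing between them.
        for _ in range(REQUESTS_PER_USER):
            try:
                with urllib.request.urlopen(TARGET_URL, timeout=5) as response:
                    response.read()  # consume the body so the request completes fully
            except Exception as exc:
                print("user %d: request failed: %s" % (user_id, exc))
            time.sleep(INTERVAL_SECONDS)

    if __name__ == "__main__":
        # One worker thread per simulated user, all hitting the URL concurrently.
        with ThreadPoolExecutor(max_workers=USERS) as pool:
            pool.map(simulate_user, range(USERS))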

My first question is: can this be considered a denial-of-service attack? Keep in mind that once each test succeeded or failed, I did not insist again (I know that doesn't justify it, but maybe it softens your hearts, heh).

But my real question is: when the system goes down, is that an infrastructure failure or a programming failure?

Let's take into account that you can easily block repeated requests with a firewall or other available tools, and that we can also track the IPs in the application itself and do the comparison there (as @lbotinelly suggested in that answer).
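
The application-side check mentioned here could look roughly like the sliding-window counter below. The window size, limit, and is_allowed() helper are illustrative assumptions, not code from @lbotinelly's answer:

    # Sliding-window request counter per IP, rejecting clients that exceed a limit.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 1.0   # look at the last second of traffic
    MAX_REQUESTS = 20      # allow at most 20 requests per IP inside that window

    _recent = defaultdict(deque)   # ip -> timestamps of its recent requests

    def is_allowed(ip):
        # Returns False once the IP exceeds MAX_REQUESTS within WINDOW_SECONDS.
        now = time.monotonic()
        window = _recent[ip]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()   # discard timestamps that left the window
        if len(window) >= MAX_REQUESTS:
            return False
        window.append(now)
        return True

    # In a web application this check would run before the request handler,
    # e.g.: if not is_allowed(client_ip): return an HTTP 429 response.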

That is where my question comes from: should we implement this protection in the application, leave it to the infrastructure, or is the best option both?

When I try this test against large websites, Google for example, my IP gets blocked for a certain period as a security measure. I don't think it's possible to tell whether that block comes from the application or from the infrastructure, but is this the best approach to handling this kind of "attack"?


For those who don't know it, this tutorial explains what JMeter is and what it can do.

  • 2

    @Randrande, I consider security a very broad concept, and responsibility is shared. Wherever there can be a failure, there must be a protection method. The software must be secure in what it is responsible for, such as its use of the port. The software also has specifications, such as a maximum number of users per second. If the infrastructure helps, even better, don't you think? Security is everyone's responsibility.

  • 2

    @Andremesquita I fully agree with you. A security failure is something that can cause enormous harm to everyone involved. The more security, the better, within the conditions established for the system, of course. "You don't need a bunker to store a bird."

  • 2

    Going down under a DDoS attack is indeed the infrastructure's limit, yes, but applications that consume more memory or CPU "help" it happen faster :/ . There is no efficient way to prevent attacks with programming (a server-side web language); it is best to block attacks via some kind of firewall, since then nothing even gets "executed". I can't explain it in detail because I don't know much about it, but detecting robotic attacks by IP, for example by short intervals between requests, is usually done by a tool in the infrastructure and usually not directly on your server (please correct me if I've confused something).

  • 1

    @Guilhermenascimento you didn't confuse anything. A firewall + IPS helps with not only that but many other things. There are also things like load balancing, which splits requests across more servers, and so on. Now, the term "efficiently" is a bit relative: you can do something in the application to "try" to soften the impact, and whether that is efficient or not depends a lot on the functionality. But that is the point of the question. Thanks for the comment.

  • 2

    Thank you. I focused on answering the part "is the system going down an infra or a programming failure?" with attacks in mind. Just one more note: an application with poor performance can also "take down the server" (actually make it crash), because it can consume a lot of Apache processes and child workers, for example.

  • @Guilhermenascimento I did ask a lot of questions, but that's because I thought it would be strange to open several questions about the same thing. As for that issue with systems, I went through it recently: a system couldn't handle 100 users, and the reason was exactly what you mentioned.

  • 1

    A DDoS attack doesn't even see the application on the server; the attack simply jams the network. It doesn't matter whether the application running on the server is optimized or not, and it doesn't matter whether there is a firewall or not: a DDoS attack basically clogs the communication network. Blocking these attacks has to happen at a layer above, at the provider of the server's links. Basically, there is no way to prevent such attacks if you are not at that upper layer of the network. Some people confuse a DDoS attack with brute force, for example. A DDoS is very different and very simple: it is merely thousands of simultaneous, continuous connections.

  • @Danielomine wouldn't you like to write that up as an answer? You raised some interesting points in your comment.

  • For software built on top of more robust application stacks, denial of service is simply a network-layer matter. Applications written more narrowly, or created for specific purposes outside the common infrastructure, may suffer denial of service at the application layer if they manage resources incorrectly. One of the problems is that the common software scenario only trains people for the situations that occur most often.

  • I find it impossible to answer in this small space. The subject is broad and complex; you could write a book of well over 200 pages and still not go into much detail. In fact, there are several publications of that kind. That is why I'm voting to close as too broad. Even if we could answer more briefly, it would become "opinion-based", which is also a reason to close, and we might get answers that are too generic, which misleads those who don't even understand the basics.

  • 1

    I would agree if he hadn't provided a scenario and information about what he did. The question is not about what an attack is or how to carry one out; it is about the classification and the responsibility of the roles in the given scenario. Addressing more than that is what would make the question too general.


1 answer

7


There are several concepts here, and they involve broad topics that do not have a very clear or common set of definitions. But let's try to get at the real problem, which to me seems to be mostly one of definitions.

First, running a test tool, telnet, a network scan, monitoring, ping, or anything else without the authorization of the responsible team, in a way that takes the service down, no matter for how long, is an attack, whatever the motivation. You were not aware of the SLA of the contracted services, nor of what routines the environment was running at the time, and the contractual availability level (including fines) is measured by the availability of the service. A test can only be considered a test if there is a behavior to be tested and the people who operate the environment know about it, since by definition a test is a controlled action. Even security researchers only act, or disclose data, under authorization.

Second, stress, overload, and security tests are, again by definition, actions with a controlled scope. The technical responsibility for the failure should rest with the team that was supposed to implement the specification that motivated the test in the first place. For example, a project has, as a non-functional requirement, to sustain a rate of x TCP connections per second. That kind of requirement is written based on the forecast number of users of the operation/product, and the test, again with an objective and predictable definition derived from that demand (which also served as the basis for equipment purchase, clustering, redundancy, disk speed, etc.), is run precisely to find out whether the system meets the specification. Therefore, if it was specified that the DDoS-like load would be handled by the firewall, the fault is the infrastructure team's; if it was to be handled in code, the fault is the developers'.
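
As a rough illustration of turning such a non-functional requirement into a pass/fail check, here is a minimal sketch; the URL, required rate, and request counts are assumed example values, not part of any real specification:

    # Measures successful requests per second and compares against a specified rate.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://example.com/"   # hypothetical; point it at an authorized environment
    REQUIRED_RPS = 50                    # the "x" from the non-functional requirement
    TOTAL_REQUESTS = 500
    CONCURRENCY = 50

    def one_request(_):
        # Returns True for a successful (2xx) response, False otherwise.
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=5) as response:
                return 200 <= response.status < 300
        except Exception:
            return False

    if __name__ == "__main__":
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            results = list(pool.map(one_request, range(TOTAL_REQUESTS)))
        elapsed = time.monotonic() - start

        achieved_rps = sum(results) / elapsed
        print("%.1f successful requests/second over %.1fs" % (achieved_rps, elapsed))
        # The environment meets the specification only if the measured rate is high enough.
        assert achieved_rps >= REQUIRED_RPS, "non-functional requirement not met"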

Third, if you work in an operation with incremental product/software processes that has no requirements in this respect (it should have, right?), usually the teams' role definitions and the most trivial solutions are the answer. But I would consider it the fault of both teams, because someone should have checked: in serious environments, implementing redundancy is standard practice for services, and whether it goes in the software or the hardware layer is just a matter of cost-effectiveness.

The environment's own recovery procedures are tested regularly, both for internal audit and to check the stability and working status of the recovery. The type of recovery depends on what was specified in the operating procedures. So blame, when it is already laid out in the roles and action plans, is settled from the start.

If, even so, nobody thought the system could suffer an attack, then the fault is everyone's. After all, arguing about blame for something that was never specified, and hunting for a culprit (a team or a person), is not only inconsistent, it does nothing to solve the problem and still creates discord.

This is much more transparent in projects with a formal specification, with real-time requirements, or with formal software proofs, where it is necessary to prove that the software meets the requirements. In environments where this is not so ingrained in the culture, it sometimes goes unnoticed.

You can find more information in references on formal methods and software engineering in general, in addition to the security books themselves, but "there is no silver bullet".

A good discussion of software flaws can be found in the book Como Quebrar Códigos: A Arte de Explorar e Proteger Software, by Greg Hoglund and Gary McGraw (http://www.buscape.com.br/como-quebrar-codigos-a-arte-de-explorar-e-proteger-software-greg-hoglund-gary-mcgraw-8534615462)

You can check standardized material on security, vulnerability patterns, and attacks here (https://www.owasp.org/index.php/Main_Page)

And an introduction and overview of the need for formal methods can be found at: http://www.ufpa.br/cdesouza/teaching/es/carla_metodos_formais.pdf
