Is allocating on the heap really that bad? If so, why do we use it?


-2

Whenever I work with languages that can allocate on the heap, I hear advice that this is slow and should be avoided. I've read several answers that talk about the heap and the stack, but none of them showed things like benchmarks, or how allocating on one or the other actually affects a program's performance (in a game, for example).

But in practice, is allocating on the heap really that bad? And if so, why do we use it at all?

This is not a "what's the difference between stack and heap?" question. What I want to know is how slow the heap really is in practice and, if it is slow, when it should be used.

  • I was curious where, or from whom, you heard this advice.

  • In addition to the duplicate suggested above, there are also this one and this one.

  • 2

    Yes, it is slow compared to the stack, but the stack is very limited and its allocation is usually automatic: when a function is entered, space is allocated for its local variables (the stack frame), and when it returns that frame is removed and everything in it is lost (and soon overwritten if another function is entered afterwards). The heap is much larger and its data lives longer. That's what you've got; take it or leave it.

  • 1

    In Java I never had to care about or understand this difference, especially with automatic garbage collection. Objects in general (including arrays) go on the heap; only small things (local variables and references) go on the stack, and the compiler takes care of that. I just declare and use them. The JVM is extremely fast these days, and the biggest performance impact comes from programmer errors. In C, as I recall, there was a difference: you could allocate an array on the stack, but its size was more limited, whereas if you allocated with malloc and stored the result in a pointer, the allocation went on the heap and could be bigger.

  • "How slow it is in practice" depends on the implementation (language, hardware, etc), and as the question has tags from 3 different languages, this makes it broad. About "when to use", the other answers already linked explain. Therefore, unless it is made clear which specific point was not covered previously by the other questions (and leave the more specific question, with only one language, or else [tag:language-independent]), I keep my closing vote.

  • 2

    The OP wants to understand the difference in speed between allocating on one or the other in itself, but fails to understand, as the answers explain, that the stack's advantage is not usable in most cases, so heap allocation becomes a necessary evil. If I understand the concept correctly, stack allocation by nature has a short scope and limited capacity because of how it works. Larger data that needs a more dynamic life cycle (such as the mentioned case of enemies in a game) necessarily has to be allocated more dynamically...

  • 1

    ...and that, because of how it works, is less efficient. In short, it's like asking "if airplanes are faster, why don't we just use them for everything?". It's a scenario in which the slower option covers the more common use cases and is just as necessary.

  • 2

    It makes no sense to benchmark this; the scenario matters more than the raw numbers. Heap allocation can be hundreds or in some cases thousands of times slower, but it can also cost almost the same, and the use of a GC helps with that. The answers below are partly right, but they don't clarify how things really are and can mislead less specialized readers who don't pay attention to detail.


1 answer

-2

Because the stack stores fixed-size data sequentially, it does not need elaborate bookkeeping to find an empty spot to allocate and later release: allocation and deallocation always happen at the top of the stack. But the size of the stacked data is predetermined by the function calls and their code, so control over the amount of data is not very flexible; each scope has its predetermined footprint.
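To make that concrete, here is a minimal sketch in C (C is the language the comments mention; the function and names are mine) contrasting a local array, which lives in the current stack frame, with a malloc'd block, which lives on the heap until it is explicitly freed:

    #include <stdlib.h>

    void example(void) {
        int on_stack[16];                             /* size fixed at compile time; freed automatically */
        int *on_heap = malloc(16 * sizeof *on_heap);  /* size could be decided at run time */
        if (on_heap == NULL)
            return;                                   /* heap allocation can fail and must be checked */

        on_stack[0] = 1;
        on_heap[0] = 1;

        free(on_heap);                                /* the heap block must be released by hand */
    }                                                 /* on_stack is gone here; nothing to do */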

In other words, the stack has advantages and disadvantages that make it better suited to some kinds of project than others, and the same goes for the heap, which costs more performance because it has to keep track of how much data is allocated and where. If you're not careful, even more performance goes into tracking which allocated data is still referenced and which is not (has become garbage), then deciding when to run a collection and actually running it. Unfortunately I have not found information that gives a good sense of these costs; I would need to run some homemade experiments myself, and if I don't forget I'll do that later.
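Since I don't have those numbers, here is the kind of homemade experiment I have in mind: a rough micro-benchmark sketch (my own, not measured results) that times N uses of a local buffer against N malloc/free pairs. The output depends heavily on the allocator, compiler flags and hardware, so treat it as an order of magnitude at best:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N    1000000
    #define SIZE 256

    volatile unsigned char sink;  /* keeps the compiler from discarding the work */

    int main(void) {
        clock_t t0 = clock();
        for (int i = 0; i < N; i++) {
            unsigned char buf[SIZE];          /* stack: just moves the stack pointer */
            buf[0] = (unsigned char)i;
            sink = buf[0];
        }
        clock_t t1 = clock();
        for (int i = 0; i < N; i++) {
            unsigned char *p = malloc(SIZE);  /* heap: allocator bookkeeping on every call */
            if (p == NULL)
                return 1;
            p[0] = (unsigned char)i;
            sink = p[0];
            free(p);
        }
        clock_t t2 = clock();
        printf("stack: %.3fs  heap: %.3fs\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }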

When you need the size of the data to be dynamic, for example when working with a matrix whose dimensions can vary, the best approach is to allocate exactly the size required, without wasting memory, and, if needed, reallocate it with a different size or even deallocate it when it is no longer necessary.
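A sketch of that pattern in C (the grow helper and the sizes are just illustrative; a matrix works the same way with rows * cols elements):

    #include <stdlib.h>

    double *grow(double *data, size_t new_count) {
        /* returns NULL on failure; the old block is still valid in that case */
        return realloc(data, new_count * sizeof(double));
    }

    int main(void) {
        size_t count = 100;
        double *data = malloc(count * sizeof *data);   /* allocate exactly what is needed */
        if (data == NULL)
            return 1;

        double *bigger = grow(data, 1000);             /* later, reallocate with a different size */
        if (bigger == NULL) {
            free(data);
            return 1;
        }
        data = bigger;
        count = 1000;

        free(data);                                    /* deallocate when no longer necessary */
        return 0;
    }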

The same applies when objects are created and deleted frequently, for example when a game randomly respawns enemies and collectible items; that is done dynamically.
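A toy sketch of that respawn case (the Enemy struct and its fields are hypothetical, just to show objects being created and destroyed at arbitrary moments, something the stack's strict scoping cannot express):

    #include <stdlib.h>

    typedef struct {
        float x, y;
        int hp;
    } Enemy;

    Enemy *spawn_enemy(float x, float y) {
        Enemy *e = malloc(sizeof *e);   /* lifetime is not tied to any one function */
        if (e != NULL) {
            e->x = x;
            e->y = y;
            e->hp = 100;
        }
        return e;
    }

    void kill_enemy(Enemy *e) {
        free(e);                        /* released whenever the game decides, not at end of scope */
    }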

On the other hand, when the data is predefined, such as a global variable used by the whole program (say, the state of a linear congruential generator used to draw random numbers) or a function call that just evaluates a simple closed formula, static allocation is more appropriate.
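For instance, a linear congruential generator whose state sits in a statically allocated global needs no allocation at all (the names are mine; the constants are the classic Numerical Recipes ones):

    #include <stdint.h>

    static uint32_t lcg_state = 12345u;   /* lives for the whole program; no malloc, no stack frame */

    uint32_t lcg_next(void) {
        lcg_state = lcg_state * 1664525u + 1013904223u;
        return lcg_state;
    }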

Languages that make heavy use of dynamic allocation are usually object-oriented; they greatly ease the development of programs that rely on these mechanisms and don't need much performance, and there's no problem in them being heavier, since they are not running very heavy algorithms over absurd amounts of data.

It's different when performance is in demand, for example when an entire game state has to be recomputed every frame. That does require languages that allow good runtime performance and maximum use of statically allocated data, or else a game so simple and light that it stays fast even with such "abuses".
