Should I free up all the allocated memory when finishing a program?


24

It is commonly accepted that when I allocate a block of memory I am responsible for releasing it. This is especially true when programming in the RAII style. However, the following program works perfectly:

int main() {
    int* ptr = new int[99999];
    return 0;
}

Here the system should be able to release everything the process has allocated.

What is considered good practice here? Should I always release the memory I allocate? Is there a problem/advantage in leaving this work to the system?

  • If the Dollynho Programmer says that you can leave it allocated when the program terminates, then it is precisely because you HAVE to deallocate when the program terminates XD

3 answers

20


One point to consider is that code which today is a complete program may eventually be refactored into a feature of a larger program. If the original program didn’t bother releasing its resources (since the OS would do it for it), it will now leak memory in the larger program every time its functionality runs. Many hours of debugging will be lost identifying the source of the problem, and even more refactoring the original code to behave properly.

  • 3

    That’s the right answer. It’s not good practice, even if the OS saves you from bigger problems. Sooner or later you will have to track down a memory leak because of it.

  • If the price to pay for a quick exit is broken modularity, it is clearly too expensive. That is a point I had not taken into account.

  • 2

    @Guilhermebernal When you have more than 4 GB allocated, you may no longer think that breaking modularity is too expensive.

8

I do not consider it good practice to let the system release all the resources a program has allocated. Remember that memory is just one of the resources the system manages; and even if memory were the only resource your program used, which I find unlikely in medium or large programs, I still would not consider it good practice.

The function exit, according to the answer given by the author himself, should be used with caution, and one should be aware of its side effects. Among the problems with using it:

  • It creates multiple exit points in a program, which can make debugging difficult;
  • It can make a program harder to read, since it is essentially multiple jumps to the end of the program;
  • It does not release resources clearly and cleanly.

In fact, I call attention to the fact that calling exit is different from executing return at the end of main. While return performs stack unwinding, which calls the destructors of all local objects, exit does not, which can cause logic errors that are not easily detected in a debugger.

I also point out that the idea of letting the OS take care of releasing resources only applies to "modern" OSes. In embedded OSes such as FreeRTOS, it is entirely the programmer's responsibility to release the allocated resources.

Again, I return to the point that memory is just one of the managed resources. Modern OSes release all resources allocated by a program when the process closes, regardless of the state they are in (if a file still has to be flushed to save its data, that will not happen when the process closes). The author, in his answer, suggests creating callbacks or a messaging system to assist the process; I believe this only complicates the system's logic. I find it a very low price to let the program destroy all of its objects in a clear, clean and safe way, even if it means bringing memory back from swap. And looking at current hardware, where memory can be considered an abundant resource, all that extra logic is hardly justified.

  • 3

    As always, everything has its context. "Memory is a resource that can be considered abundant" is not true for embedded development, as on mobile phones. Still, I agree with all the rest of your answer.

7

Any modern operating system features paged virtual memory. In this model each process owns an entire address space, and the system is responsible for mapping the address the process wants to read to the address where the memory really is. This gives the system several useful optimization opportunities, such as placing unused pages on disk (swap) and bringing them back when required, or applying copy-on-write (COW), where two processes share the same page.

This way the system already knows exactly which memory belongs to each process, and releasing that memory is as simple as marking its pages as free to be reused by another process. That is, on any modern computer, finishing a process means releasing all of its memory.

Consider what happens in a large program when the user presses "close". The destructors of all objects will run in sequence, causing a half-dead heap to be touched and read back from disk (in case of swap), with slowness visible to the end user. You are wearing out the hard drive while wasting the user's time, all for nothing: just to free resources that the system will release anyway, and more efficiently (it frees whole pages instead of going object by object, and does not need to read data back from disk).

Whether this is good practice is questionable. Object-oriented design expects that everything that is constructed will at some point be destroyed, so there may well be important logic in a destructor, like saving settings to disk; in that case merely calling exit(0) can be harmful. Another approach is a global boolean variable that is set to true when the process is about to close, with each destructor checking it and skipping unnecessary memory releases. But there is little advantage here, because a lot of memory will need to be read anyway (back to swap again), and the destruction logic becomes complicated and non-trivial.

A solution may be to have a notification mechanism that sends some kind of signal to all objects that need to do something extra at program termination. It could be a list of callbacks, for example. This is probably the most efficient way to avoid completely breaking the object-oriented structure.

The big problem is that leak-detection tools will report a lot of problems in your program. Hence it is interesting to have two termination paths: one deallocating everything (for debugging) and another going straight to exit(0). Both should be equally well tested, of course, but the second need not pass through the leak detector.

But remember that there are resources other than memory. The System V inter-process communication mechanisms (available in some *nix systems, including Linux), for example, are persistent and survive process termination by design. For those, releasing explicitly is essential.

All this discussion is only valid for languages where the programmer does the memory management. In garbage-collected languages there usually won't be much choice.
