Can the halting problem be solved in practice?

16

The halting problem can be stated as follows: given a program and an input for it, will the program finish executing and return an answer, or will it loop forever?

Turing proved this undecidable; no question there. The thing is, I have come up with a simple way of solving the problem.

If at every step of the program's execution I dump the entire memory and all the registers, I will have the program's state at every point in time. From there, I just need to check whether any state has repeated. Since all a program can do is transition to its next state based solely on the current one, if a state ever repeats we can immediately conclude that the program has entered an infinite loop.
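
Concretely, I imagine the detector as something like this minimal sketch (the state type and the step function are abstractions of mine, standing in for the full dump and for the processor's transition):

import java.util.HashSet;
import java.util.Set;
import java.util.function.UnaryOperator;

public class RepetitionDetector {
    // S stands for a complete dump (memory + registers); step computes the
    // next state deterministically; a null state means the program halted.
    static <S> boolean halts(S state, UnaryOperator<S> step) {
        Set<S> seen = new HashSet<>();
        while (state != null) {
            if (!seen.add(state)) {
                return false; // state seen before: the machine loops forever
            }
            state = step.apply(state);
        }
        return true; // reached a halting state
    }
}

On real hardware, of course, S would be a multi-gigabyte snapshot, which is exactly where the trouble starts.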

Memory is finite, so there is a finite number of states, and any program that does not finish will eventually repeat a state. I am not saying this is viable: with four gigabytes of memory there are at least 2 raised to some 34 billion states (one per possible bit pattern). I don't even know how many digits that number has.

My question is: what is wrong? Clearly the problem is undecidable. Where does this method fail?

  • Related: https://answall.com/q/276648/64969

3 answers

13

In fact, your solution fails even on simple cases: any program that keeps changing its state without ever returning to a previous one is never flagged as a possible infinite loop.

Imagine the following program in Java:

import java.util.LinkedList;

public class Grows {
    public static void main(String[] args) {
        LinkedList<Integer> l = new LinkedList<>();
        while (true) {
            l.add(123); // each iteration allocates a new node: a brand-new state
        }
    }
}

Every time a dump is taken, the program is in a brand-new state. Since every state is new, your check for a previously seen state never fires, and nothing is detected. If we remove the memory limitation and allow an unbounded address space, the linked list grows to infinity without a care in the world.

I believe your heuristic would gain a lot from being paired with static analysis of the code. If all you are given is a black-box program, perhaps little more than an automaton in some assembly-like format, then we have a bigger problem: the more primitive the representation of the program, the closer it is to a Turing machine, the harder static analysis becomes.

Another heuristic (perhaps even more effective), used by computing service providers, is to impose a timeout on execution. If you are dealing with a day-to-day problem, you are most likely dealing with a problem in class P, or with a heuristic that produces a satisfactory if imperfect answer, so the program is expected to run fast.

More complex problems (such as graph isomorphism, or predicting the metastable structure of a protein from its amino-acid sequence) genuinely demand more computational power, and with them the timeout heuristic can give false positives for "eternal execution". This safety measure is found in high-performance computing environments, such as the SLURM job scheduler, to prevent someone from unintentionally submitting an intractable case and hogging the entire computational capacity of a research center. The same heuristic appears in communication between two machines: since you could, hypothetically, send an input that makes the other machine run forever, sockets have read and write timeouts.
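
As an illustration, here is a minimal sketch of this timeout heuristic in Java (the class name, the dummy task, and the 5-second limit are arbitrary choices for the example, not anything taken from SLURM):

import java.util.concurrent.*;

public class TimeoutGuard {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Stand-in for a computation that may or may not halt.
        Callable<Integer> task = () -> {
            while (true) {
                Thread.sleep(100); // sleeping keeps the loop interruptible
            }
        };
        Future<Integer> result = executor.submit(task);
        try {
            // Give up after 5 seconds and *guess* that the task never halts.
            System.out.println("Result: " + result.get(5, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            System.out.println("Timed out: assumed non-terminating (may be a false positive).");
            result.cancel(true); // interrupt the runaway task
        } finally {
            executor.shutdownNow();
        }
    }
}

The false-positive risk is inherent to the design: the guard cannot tell a slow computation from a non-terminating one, it just bets that useful tasks finish within the budget.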

9


The idea is interesting, but the halting problem speaks of any program whatsoever and any input to it. If you limit your program to running in a finite-memory environment (say, 4 GB), you are necessarily restricting the scope of the original problem. For the program to be truly arbitrary you would have to remove this constraint, which is why your finite solution (unfeasible anyway, since that number of states has more than ten billion digits) does not apply to the general case.

Now, your question is whether, for the specific case of a single program, it can be solved "in practice". In practice, you have nowhere to store all the memory dumps your solution needs, and you don't have the (almost) infinite time it requires either :)

  • The problem is no longer undecidable but remains absolutely intractable

8

If at every processor instruction you dump the 4 GB of memory (along with the caches, the disk, and the registers), then any program that never finishes will eventually repeat a state, and you can prove it from the dumps. After all, if memory has a finite size, the number of possible states is finite. The halting problem applies to the case where memory is unbounded, but this is the case where memory is finite.

The halting problem is (theoretically) solvable when memory is finite, by doing exactly what you propose. However, that "theoretically" deserves very careful attention.

In the worst case, you would have to traverse up to 2^n memory states before one repeats, where n is the number of bits in the memory as a whole. If n is small enough (say, a device with only a few bytes of total memory), it might even be feasible. For a typical computer with a few gigabytes of memory, it is completely unfeasible. Considering only the 4 GB of main memory and ignoring the disks, the number of possible states would be 2^34,359,738,368 (4 GB = 34,359,738,368 bits). That is a number with more than 10 billion decimal digits!
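
A quick sanity check of that digit count in Java (names mine), using the identity digits(2^n) = floor(n * log10(2)) + 1:

public class StateCount {
    public static void main(String[] args) {
        long bits = 4L * 1024 * 1024 * 1024 * 8; // 4 GiB = 34,359,738,368 bits
        // 2^bits has floor(bits * log10(2)) + 1 decimal digits
        long digits = (long) Math.floor(bits * Math.log10(2)) + 1;
        System.out.println("2^" + bits + " has " + digits + " decimal digits");
        // prints a figure of roughly 10.3 billion digits
    }
}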

Each bit of memory doubles the number of possible states, and consequently doubles the time and energy needed to traverse them all; each added byte multiplies the number of states by 256. Now imagine 34,359,738,368 bits, each one doubling the number of possible states (and that is just the 4 GB of main memory of a computer already considered modest today, still disregarding the disks).

You simply cannot walk through all these states: it would take far too much time and far too much energy. Thermodynamics will impose barriers before you can finish. Hence, one of the following happens first:

  1. The computer is destroyed by forces outside of it.

  2. The universe ends before you go through all the states. I don't know whether it would be by Big Crunch, Big Rip, Big Freeze, Big Bounce, Heat Death or something else, but whatever the case, it would not be good for you. This implies item 1 above.

  3. You consume all the energy in the universe creating and storing enough memory dumps (or even just keeping the computer running through all these states). This implies item 2 above, which in turn implies item 1.

Thus, it is not difficult to see that with a sufficient amount of memory, one of the three things enumerated above would occur (and occur quite early!) in this process.
