What are the main differences between a quantum computer and a conventional computer?


They say quantum computing will revolutionize computation if it is implemented successfully. Why is that? What are the main differences between a quantum computer and a conventional computer that would make the former much faster?

  • I can’t talk about the differences, but one of the biggest impacts of quantum computing would be on security: not only does it open up possibilities that do not exist today, it also makes obsolete various techniques on which current systems depend heavily, such as those employed by SSL/TLS (whether RSA or elliptic curves). Some problems previously considered NP-hard, such as factorization, could be solved in polynomial time given a sufficient number of qubits. Such a scenario would require a large-scale upgrade for the Internet to remain reasonably secure.

  • @mgibsonbr could you explain why/how these NP-hard problems could be solved in polynomial time with quantum computing?

  • But the encryption algorithms would also be quantum ...

  • It would be interesting to search for articles on Google, preferably articles in English; that’s what they’re for. Many are good. Here you’ll see a lot of "I think" and "I don’t think", "they say" and "they don’t say"; in the articles the authors prove the why of this and that. For sure, after reading two or three articles you’ll already know how it works.

  • By doing this you can answer this question yourself, citing the sources (links) you read.

  • Carlos, I wouldn’t know how to answer that in general, but in the specific case of factorization there is Shor’s algorithm. The relationship between the NP and BQP classes is an open problem. @Motta Yes, there is a lot of research on post-quantum cryptography, including "normal" algorithms that are resistant to quantum computation (by the way, hashes and symmetric cryptography, for the most part, already are). As I mentioned in my first comment, the problem is of a practical nature: the cost of a massive upgrade.

  • @mgibson: factoring is NOT an NP-hard problem! Its exact complexity is still an open problem, but the consensus is that factorization probably has intermediate complexity, above P but below NP-complete. Furthermore, no NP-complete problem is known to date that is solved in polynomial time by a quantum algorithm, and there is evidence that this may be impossible. Take a look at the table on page 6 of this PDF.

  • @hugomg I see... In college I was told it was NP-hard, but that was more than 10 years ago, and in any case it may have been my teachers’ fault (or the problem may have been reclassified in the meantime, I don’t know). As for NP-complete, I agree with everything. P.S. The link you posted seems broken...

  • It wasn’t reclassified, no; it’s just a very common mistake that people make... As for the link, it’s missing a "www." at the beginning. Oops.


1 answer



In a conventional computer, a given piece of data (a "bit") is always in a single state - 0 or 1. In a quantum computer, the same piece of data (called a "qubit") can be in any superposition of these two states until the moment it is measured (when it collapses to a single value). This follows the laws of quantum mechanics, which include things like entanglement (described as one of the foundations of quantum computing, although I don’t know exactly what its role is).
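To make the difference concrete, here is a toy sketch in Python (not a real quantum simulator; the `measure` helper is just something made up for illustration): a single qubit is described by two amplitudes, and only measuring it produces a definite 0 or 1.

```python
import math
import random

# Equal superposition of |0> and |1> (what a Hadamard gate produces from |0>).
amplitudes = [1 / math.sqrt(2), 1 / math.sqrt(2)]

def measure(amps):
    """Collapse the qubit: return 0 or 1, each with probability |amplitude|^2."""
    p0 = abs(amps[0]) ** 2
    return 0 if random.random() < p0 else 1

# Until it is measured the qubit holds both possibilities at once;
# each measurement forces a single concrete value.
print([measure(amplitudes) for _ in range(10)])  # e.g. [0, 1, 1, 0, 0, ...]
```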

What makes a quantum computer faster than a conventional one is therefore its ability to progress from one probabilistic state to another probabilistic state without the need to enumerate all the possibilities involved. To understand this, it helps to know the complexity class BPP:

Bounded-error Probabilistic Polynomial time

A problem is said to be in BPP if there is an algorithm for it with the following properties:

  • A random component (i.e. the algorithm can use random data or make random decisions);
  • Polynomial run time relative to input size, worst case;
  • The answer ("yes" or "no") has at least a 2/3 chance of being correct.
    • Strictly speaking, wrong answers are allowed, provided they are less likely than right ones (i.e. the 2/3 can be replaced by any other constant greater than 1/2).

Every problem in P is in BPP; however, there are problems in BPP that are not yet known to be in P. The number of such problems has been shrinking (e.g. until recently there was no deterministic polynomial-time algorithm to decide whether a number is prime, only probabilistic ones), which supports the conjecture that P = BPP, but open cases remain.
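The probabilistic primality test alluded to above is typically the Miller-Rabin test; here is a compact Python version, just to show what such an algorithm looks like in practice (Miller-Rabin actually has one-sided error, a special case of the bounded error that BPP allows).

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin: 'False' is always correct; 'True' means probably prime.
    Each round has at most a 1/4 chance of letting a composite slip through,
    so the error after `rounds` rounds is at most (1/4)**rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True  # probably prime

print(is_probably_prime(2**61 - 1))  # True (a Mersenne prime)
```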

Note: besides the class BPP, there are other situations where a probabilistic algorithm is desirable. For example, one cannot guarantee solving an NP-complete problem in polynomial time, but one could, in an exhaustive search, use probabilities to reduce the search space. In a conventional computer, however, this brings no benefit...

Okay, but how can a probabilistic algorithm be useful if there is a sizeable chance of error? Simply run it several times to shrink the margin of error: the chance that every run comes out wrong falls geometrically, 1/3 after one run, (1/3)*(1/3) = 1/9 after two, 1/27 after three, and so on. So just collect the results of several runs and adopt the answer that occurs most often as the right one.
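A minimal sketch of that amplification, assuming a hypothetical `noisy_answer` subroutine that is right only 2/3 of the time: a majority vote over enough independent runs is practically never wrong.

```python
import random
from collections import Counter

def noisy_answer(correct=1):
    """Hypothetical BPP-style subroutine: returns the right bit with probability 2/3."""
    return correct if random.random() < 2 / 3 else 1 - correct

def amplified_answer(rounds=101):
    """Run the noisy subroutine many times and keep the majority answer."""
    votes = Counter(noisy_answer() for _ in range(rounds))
    return votes.most_common(1)[0][0]

# One call is wrong about 1/3 of the time; the majority of 101 calls practically never is.
trials = 1000
errors = sum(amplified_answer() != 1 for _ in range(trials))
print(f"wrong in {errors} of {trials} amplified runs")
```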

Combinations

A single bit can only be in two states, 0 or 1. Two bits can be in four (00, 01, 10 or 11), three bits in eight, and so on. That means that if you start from the state 100 and apply some probabilistic operation, you could end up in any of those 8 states (say, with a 1-in-8 chance each). After several operations in sequence, the number of possible trajectories grows enormously. This means that a very large number of runs is needed to reach a result with an acceptable level of confidence.
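The sketch below (with a made-up stochastic step, just to show the bookkeeping) illustrates what a conventional machine has to do to follow such a process exactly: keep one probability per possible bit pattern, a count that doubles with every extra bit.

```python
import itertools

n_bits = 3
states = [''.join(b) for b in itertools.product('01', repeat=n_bits)]  # 2^3 = 8 states

# Start with certainty in state '100'.
dist = {s: (1.0 if s == '100' else 0.0) for s in states}

def noisy_flip_last_bit(dist, p_flip=0.75):
    """Toy probabilistic operation: flip the last bit with probability 0.75."""
    new = {s: 0.0 for s in dist}
    for s, p in dist.items():
        flipped = s[:-1] + ('1' if s[-1] == '0' else '0')
        new[flipped] += p * p_flip
        new[s] += p * (1 - p_flip)
    return new

dist = noisy_flip_last_bit(dist)
print(dist)  # 8 entries already; with 300 bits there would be 2^300 of them
```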

In quantum computation, on the other hand, the superposition of states itself carries the probabilities with which each state could have been reached. So, starting from the state 100, you reach the state ??? after the first operation, ??? after the second, and so on, until all operations have been executed. Only at the end, when the result is read (i.e. the qubits are measured), is a set of concrete values - something like 010 - obtained.
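Here is a tiny state-vector simulation (in Python with numpy; a toy, not a real quantum computer) of that idea: the whole 8-entry superposition evolves in a single matrix multiplication, and a concrete bit string only appears when a measurement is sampled at the end.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

state = np.zeros(8)
state[0b100] = 1.0                 # start in the concrete state 100

# One "operation": apply H to the first qubit only (H tensor I tensor I).
op = np.kron(H, np.kron(I, I))
state = op @ state                 # the whole superposition evolves at once

# Measurement: the ??? collapses to a concrete value such as 000 or 100.
probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs)
print(format(int(outcome), '03b'))
```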

Putting it another way: if on the conventional computer you started from 100 and arrived at 101, the next step will be based on 101. If the first step was repeated 20 times and 17 of them reached 101, then the second step will be performed 17 times with the state 101 as a starting point (and 3 times from some other state). Let’s say the second step produced 011 in 15 of those repetitions.

On the quantum computer, in contrast, the result of the first step will be ???, with an 85% chance of being 101, and the result of the second step will be ???, with roughly a 75% chance of being 011 (85% * 88.2%). All in a single run! The conventional computer, meanwhile, would have to run a large number of times to reach the same result with a similar level of confidence (and even more times to reduce the margin of error to an acceptable value - something both the conventional and the quantum computer have to do anyway).
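As a quick sanity check of those numbers (using the same counts as the example above):

```python
p_101_after_step1 = 17 / 20        # 85%: step 1 reaches 101
p_011_given_101 = 15 / 17          # ~88.2%: step 2 turns 101 into 011
p_011_single_run = p_101_after_step1 * p_011_given_101
print(f"{p_011_single_run:.1%}")   # 75.0%
```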

So, answering the question: what makes the quantum computer so much faster is the possibility of obtaining, in a single sequence of operations, the same result that a conventional computer would only obtain by running a much larger number of operations.

It is similar to the case of graphics acceleration cards, where operations involving vectors, matrices, etc. are performed directly in hardware circuits, whereas the same operations executed on the CPU would require a much larger number of operations on simpler data types (i.e. numbers), involving loops. In the quantum case the scale of the optimization is much larger, but in principle everything a quantum computer does could be simulated on a conventional computer (the quantum Turing machine is still theoretically equivalent to the Turing machine).
