A Canada-based company claims to have built the first prototype quantum computer and attracted US$60 million in venture capital. Dr. Mae-Wan Ho investigates
Building a quantum computer that could solve problems much faster than classical computers, or that are intractable to them, has become the holy grail of a new breed of quantum information technologists [1, 2] (Quantum Computer? Is It Alive?, ISIS News 11/12; The Quantum Information Revolution, SiS 22).
Some start-up companies have even begun to attract serious venture capital. One of them, D-Wave Systems [3, 4] (see Box 1), announced a potentially commercially viable quantum computer prototype on 19 January 2007. This was followed by public demonstrations in February and November, at which the baby computer, with 16 and then 28 quantum bits, ostensibly solved problems that included a sudoku puzzle and picture recognition.
Box 1
The company was founded by Haig Farris (former chair of the board), Geordie Rose (CTO and former CEO), Bob Wiens (former CFO), and Alexandre Zagoskin (former Chief Scientist). Farris taught a course on entrepreneurship at the University of British Columbia (UBC), where Rose obtained his Ph.D. and Zagoskin was a postdoctoral fellow. The company, named after their first superconducting qubit designs, operated as an offshoot of UBC and funded academic research in quantum computing. It collaborated with several other universities and institutions in Europe and the United States. In January 2008, D-Wave announced it had secured US$17 million from the Dublin (Ireland)-based International Investment and Underwriters, in addition to its already existing portfolio of investments.
However, many of D-Wave's claims are hotly disputed by researchers in the academic community. One notable critic, Scott Aaronson, runs a blog [5] that also hosts many postings from other scientists, and it has done much to identify the problems and credibility gaps in the company's claims. In response to fierce criticism, a representative of D-Wave admitted that the sudoku puzzle was essentially pre-digested by a classical computer into bite-sized (possibly trivial) chunks before it was fed to the quantum computer. The most serious question of all is whether the prototype is really a quantum computer. There is genuine interest in getting to the bottom of the science and technology, but also a collective anxiety that if D-Wave's claims turn out not to be true, the field's reputation would be damaged, discouraging both public and private funding.
So let’s try and unravel some of the mysteries and magic of quantum computation.
Quantum computation is computation based on the quantum mechanical effects of superposition and entanglement [1, 2, 6, 7] (see Quantum World Coming and How Not to Collapse the Wave Function, SiS 22). Quantum superposition is the simultaneous co-existence of often contrary states, such as the proverbial Schrödinger's cat being both dead and alive, or the spin of an elementary particle being both up and down. Quantum entanglement is when different objects or particles behave as one system no matter how far apart they are, so that measuring the state of one of them simultaneously determines the state of the other.
Quantum computation operates on the quantum bit, or 'qubit', instead of the ordinary bit of classical computing. While the ordinary bit is a simple binary 1 or 0, the qubit can hold 1, 0, or, crucially, a quantum superposition of 1 and 0. In fact, the superposition can mix 1 and 0 in any of an infinite number of ways [1, 2, 8]. Quantum computers can in theory do computations that are impossible for classical computers, or achieve exponential speed-up in solving certain problems (see Box 2).
Box 2
The main difference between quantum and classical computation lies in the distinction between bits and qubits. In a classical computer operating on three bits, the state of the computer at any time is a probability distribution over the 2³ = 8 different three-bit strings 000, 001, …, 111, described by 8 non-negative numbers, say, a, b, c, d, e, f, g, h, adding up to 1.
The state of a three-qubit quantum computer is similarly described by an 8-dimensional vector (a, b, c, d, e, f, g, h), called a wavefunction. But instead of the coefficients themselves adding up to 1, it is the sum of the squares of their magnitudes, |a|² + |b|² + … + |h|², that must add up to 1. Moreover, the coefficients are complex numbers, which can be negative as well as positive, allowing for cancellation (interference) between different computational paths.
If one measures the three qubits, one will observe a three-bit string. The probability of measuring a given string equals the squared magnitude of that string's coefficient. Thus a measurement of the quantum state with coefficients (a, b, …, h) gives the classical probability distribution (|a|², |b|², …, |h|²), and we say that the quantum state "collapses" to a classical state.
For computing in either the classical or the quantum case, the system must be initialized, for example into the all-zeros string, i.e., (1, 0, 0, 0, 0, 0, 0, 0) or |000⟩.
In classical computing, the system evolves stochastically, with the probabilities adding up to one at all times. In quantum computing, on the other hand, the allowed operations are 'unitary matrices', effectively rotations that keep the sum of the squared magnitudes equal to 1. Exactly which unitary matrices can be applied depends on the physics of the quantum device. Consequently, as rotations can be undone, quantum computations are reversible. Technically, quantum operations can be probabilistic combinations of unitary matrices, so quantum computation really does generalize classical computation, and extends its power enormously.
Finally, after the run, the result needs to be read out. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string. In quantum computation, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients of the classical state being the squared magnitudes of the quantum coefficients), followed by sampling from that distribution. This destroys the original quantum state.
Many quantum algorithms will only give the correct answer with a certain probability. But by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased.
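All of this can, for a handful of qubits, be simulated on an ordinary computer. The short Python sketch below is purely illustrative (the choice of gate is mine, not anything specific to D-Wave): it initializes a three-qubit register to |000⟩, applies one standard unitary rotation, a Hadamard gate on each qubit, and then measures by sampling from the squared magnitudes of the coefficients.

import numpy as np

rng = np.random.default_rng()

# Initialize the 8-dimensional wavefunction to the all-zeros string |000>,
# i.e., the vector (1, 0, 0, 0, 0, 0, 0, 0).
state = np.zeros(8, dtype=complex)
state[0] = 1.0

# One standard unitary operation: the Hadamard gate, a rotation taking |0>
# to an equal superposition of |0> and |1>. Applying it to each of the
# three qubits gives an 8 x 8 unitary matrix via the Kronecker product.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(np.kron(H, H), H)

state = U @ state  # the quantum evolution step: a rotation of the vector

# The squared magnitudes always add up to 1, so they form a probability
# distribution over the 8 classical strings 000, 001, ..., 111.
probs = np.abs(state) ** 2

# Measurement: collapse to one definite classical string, sampled with
# these probabilities. This destroys the original superposition.
outcome = int(rng.choice(8, p=probs))
print(f"measured string: {outcome:03b}")

With this particular rotation, all eight strings are equally likely; a real quantum algorithm chooses its rotations so that the amplitudes of wrong answers cancel by interference, making the right answer the most probable measurement.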
One feat that is beyond the classical computer is factorizing large numbers with many digits into their 'primes'. For example, the number 15 is the product of two prime numbers, 3 and 5, neither of which can be further factorized into a product of numbers other than itself and 1. Numbers that are products of two prime numbers hundreds of digits long are effectively beyond the reach of the best classical computers and algorithms. A quantum computer, in comparison, could solve this problem efficiently using a special algorithm (Shor's). A second class of hard problems is searching a large, unordered database of n possibilities without any clue as to what the right answer might be. A quantum computer can solve that in a time proportional to the square root of n, while it would take a classical computer, on average, a time proportional to (n + 1)/2, i.e., proportional to n itself, to find the right answer.
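To put rough numbers on the search example (ignoring the constant factors in front of the scalings, and taking a hypothetical database of a million entries):

import math

n = 1_000_000                    # entries in the unsorted database
classical = (n + 1) / 2          # average number of blind lookups
quantum = math.sqrt(n)           # quantum (Grover-type) search steps

print(f"classical: ~{classical:,.0f} lookups; quantum: ~{quantum:,.0f} steps")
# classical: ~500,000 lookups; quantum: ~1,000 steps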
There is no shortage of candidate materials for making quantum computers [1, 2, 8]; in principle, anything will do that can be prepared in the quantum states of superposition and entanglement. These include superconductors, trapped ions, optical lattices, quantum dots, nuclear magnetic resonance (NMR) on molecules in solution, solid state NMR, electrons on helium, molecular magnets, fullerene-based electron spin resonance, and so on.
There are practical difficulties in building a quantum computer. One major problem is keeping the components of the computer in a coherent state, so superposition and entanglement can survive for as long as it takes to solve the problem. For most of the candidates, the slightest interaction with the external world would cause the system to lose coherence.
In addition, the material used must be scalable, i.e., it must be possible to increase the number of qubits for the computer to be really useful; the qubits must also be initializable to an arbitrary starting state; and they must be easy to read out, so that the answer can be obtained from the computer.
It so happens that the adiabatic quantum computation model has all the required properties of quantum computation, and is also easiest to implement. That’s what D-Wave claims to have done [4].
The company claims to have built the world's first adiabatic quantum computer out of superconductors, and the only one that is scalable, promising in July this year to reach "thousands of qubits before the end of 2008" [9].
Adiabatic quantum computation (AQC) relies on the adiabatic theorem to do calculations [10, 11], and operates differently from standard quantum computation (see Box 2). In AQC, one first constructs a complex Hamiltonian (an energy function) whose ground state (minimum energy level) describes the solution to the problem of interest. Next, a system with a simple Hamiltonian is prepared and initialized to its ground state. Finally, the simple Hamiltonian is adiabatically (slowly and without heat exchange) evolved into the complex Hamiltonian. According to the adiabatic theorem, the system remains in the ground state, so at the end, the state of the system describes the solution to the problem. AQC is known to be a universal model of quantum computation [12], i.e., it can do everything that standard quantum computation can do.
AQC is a possible method to get around the problem of quantum decoherence. As the quantum system is in the ground state, interference from the outside world cannot make it move to a lower state. If the energy of the outside world, i.e., the temperature of the bath, is kept lower than the minimum energy gap gm between the ground state and the next higher energy level, the system has a proportionally lower probability of going to a higher energy state. Thus the system can stay coherent for as long as needed.
In practice, as the Hamiltonian is gradually changed, the interesting quantum behaviour occurs when multiple qubits are near a tipping point. It is exactly at this point, when the ground state comes arbitrarily close to the first (excited) energy level, that a slight disturbance could take the system out of the ground state and ruin the calculation. Trying to perform the calculation more quickly increases the external energy, and scaling up the number of qubits makes the minimum energy gap at the tipping points smaller. In order for the evolution to remain adiabatic throughout, the total time tf required scales as 1/gm² [13], which means that the bigger the computer, the longer it would take to solve a problem.
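The tipping point and the minimum gap can be seen in a toy model. In the Python sketch below (a textbook-style illustration, not D-Wave's actual Hamiltonians), the simple Hamiltonian H0 is a transverse field whose ground state is an equal superposition of all bit strings, the complex Hamiltonian H1 encodes a small random optimization problem on its diagonal, and the interpolation H(s) = (1 - s)H0 + sH1 is swept while tracking the gap between the two lowest energy levels.

import numpy as np

n = 3                             # number of qubits; matrices are 2**n x 2**n
dim = 2 ** n
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])   # Pauli X, which flips a single qubit

def on_qubit(op, k):
    """Embed a single-qubit operator op so it acts on qubit k of n."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == k else I2)
    return out

# Simple starting Hamiltonian H0: a transverse field; its ground state is
# an equal superposition of all 2**n bit strings, and is easy to prepare.
H0 = -sum(on_qubit(sx, k) for k in range(n))

# Complex problem Hamiltonian H1: a random energy for each bit string;
# the lowest-energy string is the 'answer' the computation should find.
energies = np.random.default_rng(0).random(dim)
H1 = np.diag(energies)

# Sweep H(s) = (1 - s) H0 + s H1 and record the gap between the two
# lowest energy levels at each point of the interpolation.
gaps = []
for s in np.linspace(0.0, 1.0, 201):
    levels = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
    gaps.append(levels[1] - levels[0])

g_min = min(gaps)
print(f"minimum gap gm = {g_min:.4f}; run time scales like 1/gm^2 = {1 / g_min**2:.1f}")
print("ground-state answer:", format(int(np.argmin(energies)), f"0{n}b"))

Re-running the sketch with more qubits typically shows the trend described above: the minimum gap shrinks as qubits are added, and the 1/gm² run-time estimate grows accordingly.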
The main assumption of the adiabatic theorem is that a well-defined minimum energy gap gm exists. But in a more realistic situation with decohering influences from the environment, the energy levels of the qubit register in the quantum computer tend to be smeared out by environmental noise into a continuous band of width W that allows incoherent tunnelling between qubit states. As the broadening typically increases with the number of qubits while the minimum gap gm decreases, a realistic large-scale system eventually falls into the incoherent regime, where the broadened band becomes much larger than the minimum energy gap, i.e., W >> gm. This means that the adiabatic theorem no longer applies. Another difficulty is that, as the environmental temperature T cannot be reduced indefinitely, it will inevitably become larger than the minimum gap gm at some point in scaling up. So, according to the latest publication from D-Wave Systems [14], "a new way of understanding the AQC performance becomes necessary."
Their investigation shows that for AQC in the incoherent regime, where the minimum energy gap gm is much smaller than both W and T, the probability of getting the right result varies from 1 down to ½ for slow evolution. The required computational time in this global scheme is independent of decoherence and of the temperature T. So even if a large T reduces the probability of a correct result to ½, one only needs to repeat the computation twice on average.
The snag is that this does not give the optimal performance that coherent AQC offers, and for some problems, such as searching a random database, it takes just as long as a classical computer.
Seth Lloyd at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, USA, one of the pioneers of quantum computing and a research collaborator with D-Wave, is sanguine about AQC because it is a "particularly physical way of trying to solve hard problems." He is referring to the fact that electrons tend to inhabit the lowest energy level (ground state), especially at low temperatures; and that is what AQC is based on.
"The most interesting aspect of adiabatic quantum computation is that no one knows for sure whether it works in practice," Lloyd writes in MIT's Technology Review [15]. "It may be that for any meaningful problem, the system would have to ooze so slowly that it would take the age of the universe to return an answer. Conversely, it may be that even the hardest problem will succumb to an adiabatic quantum computer. Despite the concerted attention of a bevy of physicists and mathematicians, the question of whether adiabatic quantum computing works remains open…"
In 2002, Lloyd and his graduate student created a design for an AQC based on the superconducting technology that had inspired D-Wave. “At that point, things got interesting”.
Lloyd pointed out that D-Wave has raised about $60 million in funding from venture capitalists, and that as a private company, "it is responsible primarily to its investors rather than to the scientific community." So it was not surprising that in announcing its success in building an adiabatic quantum computer, the company focussed on commercial applications rather than scientific details. He complained that the press release "provided no device specification that would allow the scientific accuracy of its claims to be assessed." "It seemed possible that the computer was simply finding solutions by being cooled down to its ground state, a fairly dull and not-so-quantum mechanical process, rather than performing the more subtle adiabatic procedure."
The company’s defence is that such details have to be kept commercially confidential [5], presumably to protect the investments.
Lloyd suggests for a start that D-Wave should be able at some point during the adiabatic computing process to demonstrate that the qubits are indeed in a superposition of both 1 and 0 [15].
Scott Aaronson is even less convinced; he says [16] that "all D-Wave has produced is an extremely expensive and inefficient classical computer with a grand total of 28 bits." He is also critical of D-Wave's cavalier claim that its machine will solve 'NP-complete' problems that are intractable for classical computers, the best known example being finding the shortest route for a travelling salesman visiting a number of cities. Almost all computer scientists believe that such problems cannot be efficiently solved even on a quantum computer. Aaronson believes the D-Wave scientists are deceiving themselves, and that the company is pretty much "a red-flag factory".
Both Aaronson and Lloyd believe that quantum computers are possible in principle, and that D-Wave’s approach may even get there, but hasn’t done so yet.
I wonder if the problem isn't deeper than just finding the right hardware for making a quantum computer. Nobel laureate quantum physicist Richard Feynman first proposed in 1981 that only a quantum computer would be able to simulate quantum processes [15], and as Seth Lloyd points out, the advantage of adiabatic quantum computing is that it uses a particularly physical, potentially quantum, process to solve problems. But can it really simulate physical reality? That was the original motivating question for Turing with the classical computer, as it was for Feynman with the quantum computer [1].
Gerald Milburn, writing in 1998 [17], asserted that "the physical world is a quantum world", which makes "a quantum computer not only possible, but inevitable." He said it might take decades or perhaps a century, but "a commercially viable quantum computer is a certainty." Milburn and others basically believe that not only will the quantum computer be able to simulate reality, it will be part of the fabric of reality.
I agree that the physical world is a quantum world [18] (Quantum Phases and Quantum Coherence, SiS 22), but doubt whether quantum computers as generally conceived are possible, much less inevitable [1, 2].
To my mind, the perfect quantum computer already exists: it is the quantum coherent living organism [19] (see The Rainbow and the Worm, The Physics of Organisms), in which astronomical numbers of parallel distributed quantum processes take place simultaneously at all times [1]. Take the elementary process of a protein folding into shape, a difficult problem even for the fastest classical computer. It takes about 300 years for a single computer to simulate a small peptide of 23 amino-acid residues folding into shape. By running simulations at the same time on some 140 000 individual computers around the world, researchers took just over three weeks [20]. The protein itself folds to perfection in several microseconds.
The problem of building a realistic quantum computer is at least two-fold. First, there is no quantum algorithm or physical hardware that could simulate the organism, apart from the physical and chemical wetware of the organism itself in all its glorious complexity and molecular diversity. Second, if a perfect quantum computer could be built, it would be radically uncontrollable, like a living organism. It may just give "forty-two" as the answer to the meaning of life, the universe, and everything.
Article first published 02/09/08