"However, realizing the full promise of quantum computing (e.g. Shor’s algorithm for factoring) still requires technical leaps to engineer fault-tolerant logical qubits"
This is NOT how the media has chosen to report on this. It's always quite frustrating to figure out how a paper diverges from the press it gets.
Well I mean it's still pretty big though. We know for a fact now that there are only technicalities like fault tolerance in our way.
Still you're right though, the race for "first post" by the media definitely exaggerates scientific breakthroughs. The scientifically literate can sift through the hype, but others get it in their head that it's something else. Like investors, for example; those guys are big dreamers, and when they hear stuff like "ground-breaking quantum supremacy" they'll throw everything they've got at it. Investing in startups already has worse odds than slot machines; hype only makes it harder to predict the future. But oh well, at least we have the actual details, and most of the articles link the results for those actually interested. Can't do much about what others think or do though.
>We know for a fact now that there are only technicalities like fault tolerance in our way.
We understand the theory behind nuclear fusion, we have plenty of hydrogen, we have the capability to initiate fusion reactions in the lab, so there are only technicalities in our way to powering our electricity needs through fusion. So it has been for 70 years.
Nailing down issues with fault tolerance is a whole lot different from pioneering methods and science in uncharted territory. I didn't generalize to all technicalities, I specified fault tolerance for a reason: it's trivial compared to what it took to get to this point. The system works, just not as well as it will need to for reliable widespread usage.
It is widely believed that implementing useful quantum error correction (if possible at all) will be orders of magnitude more difficult than what has been done so far. To my knowledge, there is no realistic proposal for how to implement it on any of the current platforms.
These seem to be much stronger claims than were made last time this was discussed:
”””While our processor takes about 200 seconds to sample one instance of the quantum circuit 1 million times, a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.”””
”””Quantum processors based on superconducting qubits can now perform computations in a Hilbert space of dimension 2^53 ≈ 9 × 10^15, beyond the reach of the fastest classical supercomputers available today. To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor. Quantum processors have thus reached the regime of quantum supremacy. We expect their computational power will continue to grow at a double exponential rate: the classical cost of simulating a quantum circuit increases exponentially with computational volume, and hardware improvements will likely follow a quantum-processor equivalent of Moore’s law [52, 53], doubling this computational volume every few years. To sustain the double exponential growth rate and to eventually offer the computational volume needed to run well-known quantum algorithms, such as the Shor or Grover algorithms [19, 54], the engineering of quantum error correction will have to become a focus of attention.
The “Extended Church-Turing Thesis” formulated by Bernstein and Vazirani [55] asserts that any “reasonable” model of computation can be efficiently simulated by a Turing machine. Our experiment suggests that a model of computation may now be available that violates this assertion. We have performed random quantum circuit sampling in polynomial time with a physically realized quantum processor (with sufficiently low error rates), yet no efficient method is known to exist for classical computing machinery. As a result of these developments, quantum computing is transitioning from a research topic to a technology that unlocks new computational capabilities. We are only one creative algorithm away from valuable near-term applications.”””
Yes, quantum supremacy disproves the extended Church-Turing thesis (Bernstein-Vazirani). This thesis states that any computation that can be computed efficiently can be computed efficiently on a classical computer (i.e. a Turing machine).
The ECT thesis means that quantum computers will never be able to compute something significantly faster than classical ones.
However, technically, Google hasn't proved quantum supremacy yet, since they did the classical calculation using only the best known classical algorithm, and there is no proof that a significantly better algorithm doesn't exist.
But this is not a serious objection; no one thinks there is an exponentially better way of classically simulating quantum computers.
> However, technically, Google hasn't proved quantum supremacy yet
To be absolutely clear, an experiment (or a set of replicated experiments) can't prove quantum supremacy. Quantum supremacy is a statement about the asymptotic behavior of quantum and classical computers, and obviously, a finite set of data points cannot prove anything about asymptotics.
What these experiments are doing is gathering evidence against the extended Church-Turing thesis, in the same way other experiments are used to gather evidence for or against general relativity or some other physical theory.
A proof of quantum supremacy within the framework of the currently accepted laws of physics (quantum theory) has to be a mathematical proof. Currently, we have no idea how to construct such a proof.
So funny that everyone is shitting on this or claiming it's incremental/trivial. It's not! Science almost always starts with clean benchmarks like the one developed here, and from my read and limited knowledge, it's clear this is a big deal and a qualitative leap forward.
Congrats to the team on great work. Looking forward to more to come.
I'm seeing a lot of comments saying this claim can't be trusted because there's no useful computation being done.
There are many competing proposals for testing quantum supremacy: Boson Sampling, Fourier Sampling, IQP, Random Circuit Sampling. The computation needs to meet certain requirements: worst-case hardness, average-case hardness, and anti-concentration. There also needs to be a way to test whether the output of the computation is correct, which is easier said than done. RCS is the only proposal that meets all three requirements and can be verified.
What's happening here is a test of RCS: general-purpose quantum gates are programmed to form a random circuit. Think of randomly connecting some components on a breadboard and then seeing what happens, except this is done programmatically. The question is then to predict the output distribution with a classical computer, and this can't be done easily.
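A minimal sketch of what "random circuit sampling" means, simulated classically for a handful of qubits (the gate set, circuit layout, and sizes here are illustrative assumptions, not the ones used in the actual experiment):

  import numpy as np

  n_qubits, n_cycles = 4, 8
  dim = 2 ** n_qubits
  rng = np.random.default_rng(0)

  def random_single_qubit_gate():
      # Haar-random 2x2 unitary via QR decomposition with a phase fix
      m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
      q, r = np.linalg.qr(m)
      return q * (np.diag(r) / np.abs(np.diag(r)))

  def apply_single(state, gate, qubit):
      # view the state as an n_qubits-dimensional tensor and act on one axis
      psi = np.moveaxis(state.reshape([2] * n_qubits), qubit, 0)
      psi = np.tensordot(gate, psi, axes=1)
      return np.moveaxis(psi, 0, qubit).reshape(dim)

  def apply_cz(state, q1, q2):
      # controlled-Z: flip the sign of amplitudes where both qubits are 1
      psi = np.moveaxis(state.reshape([2] * n_qubits).copy(), [q1, q2], [0, 1])
      psi[1, 1] *= -1
      return np.moveaxis(psi, [0, 1], [q1, q2]).reshape(dim)

  state = np.zeros(dim, dtype=complex)
  state[0] = 1.0                                  # start in |0...0>
  for _ in range(n_cycles):
      for q in range(n_qubits):                   # random single-qubit layer
          state = apply_single(state, random_single_qubit_gate(), q)
      for q in range(0, n_qubits - 1, 2):         # entangling layer
          state = apply_cz(state, q, q + 1)

  probs = np.abs(state) ** 2
  probs /= probs.sum()                            # guard against float drift
  print(rng.choice(dim, size=16, p=probs))        # 16 "measured" bitstrings (as integers)

The point is that a quantum device produces such samples directly by running the circuit and measuring, while the classical cost of computing `probs` like this grows exponentially with the number of qubits.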
>> In this paper we study both the hardness and verification of RCS. While RCS was defined with experimental realization in mind, we show complexity theoretic evidence of hardness that is on par with the strongest theoretical proposals for supremacy. Specifically, we show that RCS satisfies an average-case hardness condition – computing output probabilities of typical quantum circuits is as hard as computing them in the worst-case, and therefore #P-hard. Our reduction exploits the polynomial structure in the output amplitudes of random quantum circuits, enabled by the Feynman path integral. In addition, it follows from known results that RCS satisfies an anti-concentration property, making it the first supremacy proposal with both average-case hardness and anti-concentration.
Reading as somebody who is not in the field, and writing this down to read the responses of those who are more qualified:
How I understand it: the "computation" is actually "sampling" the qubits? And the sampling results in sets of random numbers, which don't follow a uniform distribution but some specific one (characteristic of the quantum effects). Then they claim that such a distribution could not be produced so quickly using a classical computer, if the classical computer has to simulate the qubits. And therefore, they claim that their quantum process is better.
But couldn't one do a similar conceptual setup even before, e.g. measuring some noise source (e.g. a semiconductor-based diode) and then claiming that measuring the source is faster than simulating the analog behavior of the noise source on a digital computer?
The interesting step here is that the source is not a single element but a set of qubits, which is much harder to simulate. On the other hand, it seems that by choosing to do what they did, they avoid having to deal with the kind of behavior that would be needed to run, e.g., Shor's algorithm on the same setup.
Seems like a clever trick, but I don't see in which ways it makes the hoped-for outcome (e.g. breaking some crypto keys using Shor) any more achievable than before.
Please don't assume that I really know what I'm writing about, but consider this an opportunity to explain things to a not fully informed public that nonetheless shows an interest in the topic.
No, the distribution is thought to be hard to sample for any classical algorithm (this is still being worked on; there is ongoing theory development to strengthen the hardness results), and by extension, for any classical process.
Yes, the problem is very contrived: the distribution being sampled is the one you get by measuring a randomly chosen quantum circuit. Of course this is easy on a quantum computer, because you can just apply the randomly chosen gates and measure. I don't think it will be very useful in an application sense. This problem was designed for the very purpose of showing "quantum supremacy" as soon as possible.
Also, from what I know, there is still a problem with the whole verification aspect of this problem. That is, even if the device can sample from the distribution, it is difficult to verify that the correct distribution was in fact sampled. This is in contrast to the problem of factoring, where to verify the computation we just need to multiply the numbers together to check that they are indeed the factors. A remedy I have heard for this (I think from Scott Aaronson) is that we can choose the parameters so that the output is just within the limits of verifiability with a big supercomputer, and that should be good enough to show supremacy was achieved (EDIT: this is precisely what OP's article says it will do). I think this might be bending what is meant by the term "quantum supremacy" too much, but it's a pretty arbitrary point in computational power anyway, so I'm not too concerned.
If you're interested in more about quantum supremacy, these two articles have a lot of information:
> A remedy I have heard for this (I think from Scott Aaronson) is that we can choose the parameters so that the output is just within the limits of verifiability with a big supercomputer, and that should be good enough to show supremacy was achieved.
There's an additional piece to this that you are missing: while the classical simulation is _hard_ (i.e. increasingly hard for larger numbers of qubits or circuit depths), it _can_ be computed with enough computational power for a small number of qubits (<=50) and depth (<=20). (Getting the classical simulation to 50 qubits was an achievement in itself and required a huge amount of compute power provided by Google, without which they'd probably only be able to compute it for 30-40 qubits.) And where the classical computer can compute the result, the quantum computer does get the same answer with some probability (not a particularly high probability in an absolute sense, but much higher than you would expect if the quantum computer didn't work).
Therefore the win here is that they have verified that a 50 qubit quantum computer "works" (success rate > 10^-3 > 0.) No classical computer could verify a 100 qubit quantum computer in the same way (but you could verify various 50 qubit subsections of the 100 qubit quantum computer). Alternatively, if they had two such 50 qubit devices (and presumably they will soon) they can verify the second one with the first much quicker than they can verify the first with a classical computer, so in that sense the quantum computer beats a classical computer at some task.
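To make the verification idea concrete, here is a sketch of the linear cross-entropy benchmarking (XEB) estimator used in this line of work; the toy distribution and sample counts below are assumptions for illustration, not the experiment's numbers:

  import numpy as np

  def linear_xeb_fidelity(ideal_probs, observed_bitstrings):
      """F_XEB = D * <P_ideal(x_i)> - 1, with D the Hilbert space dimension.
      Roughly 1 for a device that samples the ideal (Porter-Thomas-like)
      distribution, roughly 0 for a device that outputs uniform noise."""
      D = len(ideal_probs)
      return D * ideal_probs[np.asarray(observed_bitstrings)].mean() - 1

  # Toy check with a made-up "ideal" distribution on 10 qubits:
  rng = np.random.default_rng(1)
  D = 2 ** 10
  amps = rng.normal(size=D) + 1j * rng.normal(size=D)
  ideal = np.abs(amps) ** 2
  ideal /= ideal.sum()

  faithful = rng.choice(D, size=20000, p=ideal)   # device sampling correctly
  broken = rng.integers(0, D, size=20000)         # device outputting uniform noise
  print(linear_xeb_fidelity(ideal, faithful))     # close to 1
  print(linear_xeb_fidelity(ideal, broken))       # close to 0

The catch, as described above, is that `ideal_probs` has to come from a classical simulation, which is exactly what stops being feasible beyond roughly 50 qubits.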
If you had some chip that took input and gave reproducible output that was impossible to simulate classically, I think you'd actually have something of interest.
“Quantum supremacy” should be taken as a mathematical term with a mathematical definition in the same way we have definitions for what it means for a problem to be “hard” in the complexity theoretic sense.
To me this reads a lot like “quantum computer faster than a classical computer at ‘simulating’ itself”. Constructing a physical experiment where it would take a classical computer 10000+ years to simulate the outcome is trivial. I don’t see how this is very different.
There is a slight difference that really matters to theoretical computer scientists (as in complexity theory) and physicists working on quantum computing: the challenge is to construct a physical experiment consisting of only 50-ish "elementary" and "programmable" building blocks that is infeasible to simulate on a classical computer. This "universality" and the small number of elements is what is exciting and what makes the task quite nontrivial.
However, you are right that the only thing this device does is "perform something difficult to simulate". The technology still needs a few more generations before that "something" is useful. Given the consistent progress since 1995, I am fairly optimistic that within 10 years we can do that.
The difference seems to be that if it's just 'simulating itself' as the top level comment described, then it is a really bad demo of the universality that would be so exciting.
I get that it's cool that it's quantum and that the circuits are programmable, sure. My point was that the comparison to the classical computer makes little sense.
I've been out of this field for a while; if there are any experts with up-to-date knowledge, I'm trying to figure out the following:
- Why was this particular calculation chosen?
- With this many qubits I would think they could already attempt some quantum simulations. Is the gate fidelity not high enough?
- Am I reading this correctly that they do no error correction or entanglement distillation?
- This particular problem was artificially constructed with the specific goal of demonstrating quantum supremacy. It has little to no practical use. Essentially, they looked at what the easiest thing for current quantum computers to do is, and formulated a problem based on that.
- Because of the above point, this does not mean that the quantum computer can do anything useful (definitely not factor numbers). It is nowhere near high enough fidelity to run heavily error-corrected algorithms (which Shor's and Grover's algorithms demand).
Since this is a mostly theoretical exercise to begin with, it's weird that they mention the computational cost of the classical method in terms of real computer times on supercomputers, and almost don't mention the theoretical bounds. The practical cost doesn't matter much as far as demonstrating quantum supremacy is concerned.
As far as I know, the key to claiming true quantum supremacy in this case is actually the proof of the theoretical complexity of the classical algorithm. The quantum computer is obviously efficient at solving the problem, considering that the problem was constructed with what a QC is good at in mind. And the hardness for any classical algorithm had already been demonstrated to some extent, by Aaronson and Arkhipov in 2011. They managed to show that if there were a polynomial-time classical algorithm capable of solving this kind of sampling problem, then the polynomial hierarchy would collapse, which is seen as extremely unlikely (on the same level as showing P=NP).
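For reference, the rough shape of that kind of result (stated loosely here for the exact-sampling case; treat the precise formulation as an assumption and see the Aaronson-Arkhipov paper for the real statement):

  \text{If a classical algorithm can sample exactly from the circuit's output distribution in time } \mathrm{poly}(n),
  \text{then } \mathrm{PH} \subseteq \Sigma_3^{\mathrm{p}}, \text{ i.e. the polynomial hierarchy collapses to its third level.}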
PS: Aaronson recently gave a 3-part lecture at ETH as part of the annual Paul Bernays lectures. Links to recordings and PPTs here: https://www.scottaaronson.com/blog/?p=4301. Part 3 is specifically about this topic, and gives a good high level overview of the current state.
I'm not an expert, but you are correct that there is no error correction here, and error correction is still a long way off. Around 2016, realizing that running useful quantum algorithms was a long way away and that there needed to be earlier milestones to signal the progress of quantum devices, the problem of "quantum supremacy" was introduced. The goal is to find the problem with the _easiest_ demonstrable quantum speedup (regardless of usefulness). They proposed a particular sampling problem: running a randomly chosen quantum circuit whose output has positive probability on many bitstrings, where the exact "speckle pattern" of the probability distribution only shows up if no errors were made when running the circuit. As the circuit is randomly chosen, it serves no particular use, except for verifying that the computer runs correctly. Relevant papers for more info:
https://arxiv.org/pdf/1608.00263.pdf
https://arxiv.org/pdf/1612.05903.pdf
They explain in the paper that the quantity plotted in Fig. 4 can also be interpreted as the probability of running the circuit with zero errors. You can see that it drops quickly with the number of qubits and the number of cycles, showing the need for either error correction or much-improved future devices in order to get any useful calculations done. The win here is that the probability of success is non-zero, as they say in the paper: "A single bit or phase flip over the course of the algorithm will completely shuffle the speckle pattern and result in close to 0 fidelity". So the only way to get the right result is to have runs with zero errors.
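A back-of-the-envelope model of why that probability falls off so quickly (the gate error rates below are illustrative assumptions, not the paper's measured numbers): if every gate independently succeeds with probability 1 - e, the chance of a completely error-free run decays exponentially in the total gate count.

  # Toy model: probability of an entirely error-free run vs. circuit size.
  e1, e2 = 0.0015, 0.006   # assumed single- and two-qubit gate error rates

  def zero_error_probability(n_qubits, n_cycles):
      single_qubit_gates = n_qubits * n_cycles
      two_qubit_gates = (n_qubits // 2) * n_cycles
      return (1 - e1) ** single_qubit_gates * (1 - e2) ** two_qubit_gates

  for n, depth in [(12, 14), (30, 14), (53, 20)]:
      print(n, depth, round(zero_error_probability(n, depth), 4))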
Even if it's highly contrived, it's still significant that they found a task for which the quantum computer is superior. It points the way towards a more general superiority, whose very possibility at least some have doubted.
Simulating a drop of water falling on the floor accurately at the molecular level would take thousands (if not millions) of years on the best supercomputer today.
Does it mean my water dropper is a better physical computer than regular computers?
This is to the point. The universe is a giant computer even without quantum mechanics involved. If the starting state of the computation is not predefined, the so-called computer is no different from randomly flipping some coins.
What the paper showed is no different from showing that the wind did not change the result of the coin flipping.
Here's the relevant part of the Twitter rant. Seems interesting, but I can't tell myself whether it makes sense or not :)
"they repeatedly asked the quantum processor for outputs to a random circuit, and the “quantum supreme computation” was that the outputs were not actually random, but subject to bias due to quantum interference, which they check by solving the path integral in classical simulation
which, to me, is an infuriating thing to label “quantum supremacy” because quantum interference is EXACTLY WHY WE CAN’T USE CURRENT QUANTUM PROCESSORS TO “BREAK ALL CRYPTO” IN THE FIRST PLACE BECAUSE WE CANNOT DO EFFICIENT QUANTUM ERROR CORRECTING CODES
it’s like pointing at a fucking Zener diode and saying, “quantum supremacy” simply because quantum electromagnetic domain wall tunnelling happens at random, followed by a non-justification of “well your iphone can’t do that”"
Sorry but no. The poster is taking a simplistic view of things. Random circuit sampling is a generally accepted method of establishing quantum supremacy.
This is just nonsense. It equivocates between two separate things by calling both of them "quantum interference":
1) A quantum computer uses computational gates to very deliberately orchestrate interference patterns. This is the reason quantum computers are faster than classical ones at certain problems.
2) On top of (1), there's a ton of undesired interference because of dirty real-world constraints. Quantum computers are not fully isolated from their environment, and executing a gate cannot be done without also affecting qubits that are not supposed to be part of the gate.
> subject to bias due to quantum interference, which they check by solving the path integral in classical simulation
This is an instance of (1)
> because quantum interference is EXACTLY WHY WE CAN’T USE CURRENT QUANTUM PROCESSORS TO “BREAK ALL CRYPTO” IN THE FIRST PLACE BECAUSE WE CANNOT DO EFFICIENT QUANTUM ERROR CORRECTING CODES
This is an instance of (2).
Quantum supremacy is all about showing that you can implement (1) with enough fidelity even in the presence of (2). Now keep this in mind while reading the rant again, and you'll quickly realize it's absolute nonsense.
Quantum supremacy is reached when a quantum computer does something that no classical computer can do (in any feasible amount of time).
So before quantum supremacy, quantum computers are useless.
Which does not mean that after quantum supremacy quantum computers are useful. Usefulness depends on people; it is a social property. We need to be able to solve a problem relevant to humans.
There is reasonable hope that with more (quantum) computing power and people's ingenuity we will be able to make quantum computers useful.
Actually, many (among them investors) think that quantum computing is a turning point in human history.
Quantum supremacy was mainly coined as a milestone in response to certain vocal, famous people (almost none of them physicists, which was the problem) who have been skeptical about quantum computers at a fundamental level. That is to say, they hold that there is something in nature which will always forbid the quantum speed-up (as in the case of perpetual motion machines), however you design a quantum computer.
These individuals have publicly argued against it for years, ranging from basically claiming that quantum computation is snake oil to more moderate skeptical positions, saying either that quantum computers can't do anything beyond the capabilities of classical computers (in terms of complexity) or that it's outright impossible to build one. For decades. (Some of those ramblings have appeared on news outlets, and here as top stories as well.)
Physicists of course know that quantum speed up is real (which is why Feynman proposed it, and why there are so many working on this, and so much money is being poured into it) and that it is an "engineering" problem (albeit a very non-trivial one in uncharted waters) to build a device which can make use of it.
Such experiments, although not of any practical use for people, are important to establish certain fundamental facts about nature in a way everyone (including people from other disciplines) can understand without studying quantum mechanics and the physics of the devices.
There was an article discussing how finance groups like DE Shaw and Two Sigma are investing in quantum computing to get an edge on spotting trends [0]. I wonder what finance groups will learn from this study?
Yes, they can do things like fast simulations of quantum mechanics, which should change everything from materials science and electronics to medicine.
What's the chance someone will come along with a classical analog that matches/beats this result? I remember seeing claims of quantum supremacy in the past and then clever people invented new classical algorithms that performed just as well.
There is certainly a chance, but for many of the researchers working on quantum computing (me included) that would still be a win: such classical results that beat quantum approaches are usually incredibly beautiful and surprising.
However, on the extreme end, proving that quantum and classical computers are equivalent would probably be more surprising than proving P=NP.
No, "quantum supremacy" is a technical term that has not been achieved before, and has not been claimed to be achieved by anyone serious. You may be misremembering previous claims called something different.
I guess you're right. I remember seeing quantum algorithms that were assumed to be more efficient than any classical algorithm, and then people devised equally efficient classical algorithms, e.g. the work by Ewin Tang: https://arxiv.org/abs/1811.00414
They were definitely not being assumed to be faster. It was considered very uncertain. Quantum algorithms for which people are pretty confident there is no classical algorithm of similar speed, while still unproven, are Shor's algorithm (i.e., no fast classical factoring) and the present case of sampling for supremacy.
The Joshua Tree Quantum Computer Achieves Quantum Supremacy: Dr. Elliot McGucken's research group hath created a quantum computer capable of simulating over five sextillion atoms in the form of water molecules, performing massively complex calculations in a fraction of a second, leveraging quantum entanglement in the Hilbert Space, thusly tapping into the power of higher dimensions and the space of parallel universes. These highly complex many-body calculations would take classical computers over a trillion trillion trillion years to solve and simulate. The JT Quantum Computer leverages the quantum multiverse to generate random numbers and pattern distributions via the entangled interactions of five sextillion atoms.
Atoms in a drop of water = 3 atoms/molecule x 1.67 x 10^21 molecules = 5.01 x 10^21 atoms, or five sextillion atoms.
The patent-pending Joshua Tree Quantum Computer works as follows:
One obtains an eye-dropper and a glass of water.
One takes up a drop of water in the eye-dropper.
One drops said drop of water onto a flat piece of glass.
Immediately one achieves and observes the random pattern distribution based on the entangled interactions between the over 5.01 x 10^21 atoms (five sextillion atoms), as the combined wave function collapses and the universe splits a trillion trillion times. Using a classical computer, one would be unable to simulate this calculation even after a trillion trillion trillion years. The JT Quantum Computer leverages the entangled quantum interactions between the atoms, thusly mining the Hilbert Space of parallel universes and the multiverse in a manner that is at least several sextillion times faster than IBM's leading SUMMIT super computer.
And so the Joshua Tree Quantum Computer hath achieved quantum supremacy.
The Wright Brothers' first flight lasted under a minute, and already we have crossed far further than that, with big plans for the future and multitudinous commercial applications. A new era hath dawned.
We are currently seeking $200 million in funding to complete the design of the Redwood Quantum Computer, which will involve an entire bucket of water and which will be able to leverage the many worlds of Sean Carroll.
If you “produce random numbers”, against what distribution should they be generated? Usually RNGs produce uniformly distributed random numbers. The challenge of quantum supremacy is to produce them against another kind of distribution, called the Porter-Thomas distribution. If you run your quantum computer and you get uniformly random output (akin to what you might get if you build an ASIC), you have a broken quantum computer.
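A small illustration of that difference (a sketch, not the actual benchmark): for a random quantum state on n qubits, the bitstring probabilities p approximately follow the Porter-Thomas density N e^{-Np} with N = 2^n, rather than all sitting at 1/N.

  import numpy as np

  n = 12
  N = 2 ** n
  rng = np.random.default_rng(0)

  # Haar-random-ish state: complex Gaussian amplitudes, normalized
  amps = rng.normal(size=N) + 1j * rng.normal(size=N)
  probs = np.abs(amps) ** 2
  probs /= probs.sum()

  # For Porter-Thomas, N*p is approximately Exp(1) distributed:
  scaled = N * probs
  print("mean of N*p     :", scaled.mean())        # exactly 1 by normalization
  print("frac with N*p>1 :", (scaled > 1).mean())  # ~exp(-1) ~ 0.37
  print("exp(-1)         :", np.exp(-1))

A uniform (broken) output would put every N*p at exactly 1, with no exponential tail; the heavy and light bitstrings of the Porter-Thomas "speckle pattern" are what the verification looks for.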
The ASIC is really only relevant for quantum annealers like D-Wave, not for a QC like this. Even if a classical ASIC could close the gap, it would only be delaying the inevitable.
The error rate is very high; their probability of success drops very fast. See the paper.
“Supremacy” is a descriptor various parts of government use when they need funding. The implication is that achieving “supremacy” with some hot area of global security requires spending $X, and if $X _aren’t_ spent, then we will be behind.
While the choice of the word is indeed regrettable, it has nothing to do with your guess or with any national security topic. See the sibling comment https://news.ycombinator.com/item?id=21044196
Quantum computing is, like blockchain and AI, a topic that’s fretted about in the NatSec establishment, often by those who don’t understand the topic in depth but are otherwise NatSec experts and help control resources. “AI Supremacy” is a book in that vein; Kai-Fu Lee’s book has freaked out many others.