
https://scottaaronson.blog/?p=8525#comment-1997424

“Gil Kalai #23: So we’re perfectly clear, from my perspective your position has become like that of Saddam Hussein’s information minister, who repeatedly went on TV to explain how Iraq was winning the war even as American tanks rolled into Baghdad. I.e., you are writing to us from an increasingly remote parallel universe. The smooth exponential falloff of circuit fidelity with the number of gates has by now been seen in separate experiments from Google, IBM, Quantinuum, QuEra, USTC, and probably others I’m forgetting right now. Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same. And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”

Ouch.
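
For readers following the technical point: the "simplest model" Scott refers to treats each gate as failing independently, so the circuit fidelity is roughly the product of per-gate fidelities, which falls off exponentially in the gate count. Below is a minimal sketch of that arithmetic; the error rates and circuit sizes are made-up illustrative values, not Google's or IBM's measured numbers.

    # Sketch: under independent (depolarizing-style) gate errors, fidelity is
    # a product of per-gate fidelities, so log(F) drops linearly with gate count.
    import math

    e_1q = 0.0015   # hypothetical single-qubit gate error rate
    e_2q = 0.006    # hypothetical two-qubit gate error rate

    def predicted_fidelity(n_1q: int, n_2q: int) -> float:
        """Digital error model: multiply the per-gate fidelities."""
        return (1 - e_1q) ** n_1q * (1 - e_2q) ** n_2q

    for cycles in (5, 10, 15, 20):
        n_1q, n_2q = 30 * cycles, 15 * cycles   # toy circuit sizes per cycle
        f_product = predicted_fidelity(n_1q, n_2q)
        f_exp = math.exp(-(n_1q * e_1q + n_2q * e_2q))  # small-error approximation
        print(f"cycles={cycles:2d}  product={f_product:.4f}  exp={f_exp:.4f}")

The exponential form is just the small-error limit of the product of fidelities, which is why a smooth exponential falloff is exactly what this noise model predicts.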



Hi Dave, nice to see you. Our quantum computer discussions go back to 2006, and as a member of the Google team you can certainly tell us about your perspective and personal angle if you were involved in either of the two recent assertions.

It is disappointing that you endorse Scott's uncalled-for and somewhat juvenile analogy. I think it is the wrong analogy whether I am right or wrong (both on the general question of quantum computation and on the specific question of my evaluation of the Google supremacy efforts).

In any case, here is my response to Scott's comment:

"Hi everybody,

1) I found the analogy in #39 offensive and inappropriate.

2) As I said many times, I don’t take it as axiomatic that scalable quantum computing is impossible. Rather, I take the question of the possibility of scalable quantum computing as one of the greatest scientific problems of our time.

3) The question today is whether Google’s current fantastic claim of “septillion years beyond classical” advances us in our quest for a scientific answer. Of course, we need to wait for the paper and data, but based on our five-year study of the 2019 Google experiment I see serious reasons to doubt it.

4) Regarding our claim that the fit between the digital prediction (Formula (77)) and the fidelity estimates is statistically unreasonable, Scott wrote: “And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”

Scott, our concern is not with the exponential falloff. It is with the actual deviations of Formula (77)’s predictions (the “digital prediction”) from the reported fidelities. These deviations are statistically unreasonable (too small). The Google team provided a statistical explanation for this agreement based on three premises. These premises are unreasonable as well, and they contradict various other experimental findings. My post gives a few more details, and our papers treat it in much greater detail. I will gladly explain and discuss the technical statistical reasons why the deviations are statistically unreasonable (a sketch of the kind of check involved follows below).

5) “Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same”

Scott, do you have a reference or link for this claim that the exponential falloff pattern is the same? Of course, one way (which I have always suggested) to study the concern regarding the “too good to be true” a priori prediction in Google’s experiment is to compare with IBM quantum computers."
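
To make point 4 concrete: the question is whether the gaps between the a priori predictions of Formula (77) and the fidelities estimated from samples are smaller than ordinary sampling fluctuations would allow. Here is a minimal, hypothetical sketch of that kind of check; the sample count, predicted fidelities, and estimated fidelities are invented for illustration (the real analysis is in the papers Kalai refers to), and the 1/sqrt(N) standard error is only a rough approximation for a linear XEB estimate at low fidelity.

    # Sketch: compare residuals (estimated minus predicted fidelity) with the
    # statistical error of the estimate.  If the residuals are much smaller
    # than the sampling error, the agreement is "too good" in Kalai's sense.
    import math

    N_SAMPLES = 500_000                  # hypothetical samples per circuit
    sigma = 1 / math.sqrt(N_SAMPLES)     # rough standard error of the fidelity estimate

    # (predicted, estimated) fidelity pairs for a few circuits -- made-up values
    circuits = [
        (0.0021, 0.00212),
        (0.0019, 0.00191),
        (0.0023, 0.00228),
        (0.0020, 0.00199),
    ]

    z_scores = [(est - pred) / sigma for pred, est in circuits]
    rms_z = math.sqrt(sum(z * z for z in z_scores) / len(z_scores))
    print("per-circuit z-scores:", [round(z, 3) for z in z_scores])
    print(f"rms z = {rms_z:.3f}  (about 1 expected if deviations are purely "
          f"statistical; much less than 1 is suspiciously tight)")

Whether the actual data behave this way is exactly what is in dispute; the sketch only shows the shape of the statistical argument.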



