Dan and Lisa's answers agree on all but problems 6 and 9, so their scores can differ only because of those two problems; since Dan got two more correct than Lisa, Dan must be correct on both 6 and 9 (and Lisa wrong on both). Mary's answers on 6 and 9 differ from Dan's, so Mary is incorrect on both. That means she got seven of the other eight problems correct, while Dan got three of those eight correct. On those eight problems, Mary and Dan disagree only on problems 2, 3, 5, and 10; the only way Mary can miss just one of the eight while Dan misses five is for Mary to be right on all four problems where they disagree (and for both of them to be wrong on exactly one problem where they agree). So Mary's single remaining miss is among problems 1, 4, 7, and 8, i.e. she got three of those four correct; since everyone answered those four the same way, Colin also got exactly three right there. Finally, read Colin's answers against the now-deduced answer key for the other six problems (2, 3, 5, 6, 9, 10) to get his final score.
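If you'd rather not do the deduction by hand, the same answer can be brute-forced. A minimal sketch, assuming the quiz is true/false and that the four answer sheets and three stated scores from the problem are typed in (the `possible_keys` helper and the data layout are mine, not part of the puzzle):

```python
from itertools import product

def possible_keys(sheets, stated_scores, n=10):
    """Try every true/false answer key and keep the ones consistent with
    the scores stated in the problem.  `sheets` maps each student's name
    to a tuple of n booleans (their answers); `stated_scores` maps a name
    to that student's known score."""
    keys = []
    for key in product([True, False], repeat=n):
        score = {name: sum(a == k for a, k in zip(ans, key))
                 for name, ans in sheets.items()}
        if all(score[name] == s for name, s in stated_scores.items()):
            keys.append(key)
    return keys

# Usage, once the answer sheets and scores from the problem are filled in:
#   sheets = {"Dan": (...), "Lisa": (...), "Mary": (...), "Colin": (...)}
#   stated = {"Dan": ..., "Lisa": ..., "Mary": ...}
#   for key in possible_keys(sheets, stated):
#       print(sum(a == k for a, k in zip(sheets["Colin"], key)))
```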
Based on this comic, I've seen unit tests use 4 as the replacement for a randomly generated number to ensure they aren't flaky (only when needed, of course). But maybe it also explains the LLM's bias?
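For instance, a minimal sketch in Python with `unittest.mock` (the `roll_die` function is a made-up stand-in for whatever code consumes the randomness):

```python
import random
from unittest.mock import patch

def roll_die():
    # Stand-in for production code that depends on randomness.
    return random.randint(1, 6)

def test_roll_die_pinned_to_four():
    # Pin the "random" roll to 4 so the assertion is exact and the
    # test can never flake.
    with patch("random.randint", return_value=4):
        assert roll_die() == 4
```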
Haha, I didn't know that one! It's consistent with OpenAI's conception of a "random" dice roll :-D.
Joking aside, I'm quite convinced many people would not find 1 or 6 "random"-looking enough to be chosen as an example dice roll.
The thing you're missing is that at no point is it assumed that there are exactly two elements in a boolean algebra. In fact you can have a boolean algebra with four elements (see https://en.wikipedia.org/wiki/Boolean_algebra_(structure)).
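If a concrete example helps: the four subsets of a two-element set form exactly such a four-element Boolean algebra, with union as "or", intersection as "and" and complement as "not". A quick Python sketch (my own illustration, not from the linked article) that checks a few of the axioms:

```python
from itertools import product

U = frozenset({"a", "b"})                 # the underlying two-element set
elements = [frozenset(), frozenset({"a"}),
            frozenset({"b"}), U]          # a 4-element Boolean algebra

join = lambda x, y: x | y                 # "or"  = union
meet = lambda x, y: x & y                 # "and" = intersection
comp = lambda x: U - x                    # "not" = complement

for x, y, z in product(elements, repeat=3):
    # distributivity
    assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
    # De Morgan
    assert comp(join(x, y)) == meet(comp(x), comp(y))
for x in elements:
    # complement laws: x or not-x = U (top), x and not-x = {} (bottom)
    assert join(x, comp(x)) == U and meet(x, comp(x)) == frozenset()
print("axioms hold on a 4-element Boolean algebra")
```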
It seems the author is using the word "logic", so the Boolean algebra in question suggests the classical two-element case. Perhaps what is not trivial is that one can use that rule to deduce the other axioms. So the important thing is not the theorem itself but that any tautology can be proved from that one simple axiom.
Wizards of the Coast's in-house card database (Gatherer) is basically not maintained at all. I think they're very happy there is a third party willing to do that for free, and for a game with as much history as Magic, having a searchable card database is basically mandatory.
No, this is not correct. WLOG means: I assume one of the possible cases, but the proof works the same way for the other cases. That's not true here. The proof, as shown, only works for a > b > 0; it does not work (without extra work or explanation) for a < b. The proof for a < b is similar, but not the same.
[And it certainly does not show it for a, b ∈ ℂ.]
WLOG just means the other cases follow from the one case. There is no implication about how hard it is to get to the other cases, although generally it is easy and you don't bother spelling it out exactly.
This is not what I meant. What is being proved is: a^2 - b^2 - (a+b)(a-b) = 0. If you swap a and b, you end up with a sign flip on the left-hand side, which is inconsequential.
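Spelled out, the swap I have in mind:

```latex
% What is proved (under the assumption a > b):
a^2 - b^2 - (a+b)(a-b) = 0.
% Swapping a and b only flips the sign of the left-hand side:
b^2 - a^2 - (b+a)(b-a) = -\bigl(a^2 - b^2 - (a+b)(a-b)\bigr),
% so the swapped expression vanishes exactly when the original does.
```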
That is not what the proof proves. The proof proves the identity as it was originally stated, and for that it assumes b < a.
Your rewriting is of course true for all a,b and might be used in an algebraic proof. But this transformation is not at all shown in the geometric proof.
“Gil Kalai #23: So we’re perfectly clear, from my perspective your position has become like that of Saddam Hussein’s information minister, who repeatedly went on TV to explain how Iraq was winning the war even as American tanks rolled into Baghdad. I.e., you are writing to us from an increasingly remote parallel universe.
The smooth exponential falloff of circuit fidelity with the number of gates has by now been seen in separate experiments from Google, IBM, Quantinuum, QuEra, USTC, and probably others I’m forgetting right now. Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same.
And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”
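Roughly, the "simplest model" referred to here, assuming a uniform error rate ε per gate and g gates in the circuit (a back-of-the-envelope sketch, not the exact formula from either team's paper):

```latex
% Each gate independently leaves the state undamaged with
% probability (1 - \epsilon), so the circuit fidelity is roughly
F(g) \approx (1 - \epsilon)^{g} = e^{\,g \ln(1 - \epsilon)} \approx e^{-\epsilon g}
\quad \text{for small } \epsilon,
% i.e. a smooth exponential falloff in the gate count g, which shows
% up as a straight line on a log-linear plot.
```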
Hi Dave, nice to see you. Our quantum computer discussions go back to 2006, and as a member of the Google team you can certainly tell us about your perspective and personal angle, if you were involved in either of the two recent assertions.
It is disappointing that you endorse Scott's uncalled-for and somewhat juvenile analogy. I think it is a wrong analogy whether I am right or wrong (both on the general question of quantum computation and on the specific question of my evaluation of the Google supremacy efforts).
In any case here is my response to Scott's comment:
"Hi everybody,
1) I found the analogy in #39 offensive and inappropriate.
2) As I said many times, I don’t take it as axiomatic that scalable quantum computing is impossible. Rather, I take the question of the possibility of scalable quantum computing as one of the greatest scientific problems of our time.
3) The question today is if Google’s current fantastic claim of “septillion years beyond classic” advances us in our quest for a scientific answer. Of course, we need to wait for the paper and data but based on our five-year study of the 2019 Google experiment I see serious reasons to doubt it.
4) Regarding our claim that the fitness of the digital prediction (Formula (77)) and the fidelity estimations are unreasonable, Scott wrote: “And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”
Scott, our concern is not with the exponential falloff. It is with the actual deviations of Formula (77)'s predictions (the "digital prediction") from the reported fidelities. These deviations are statistically unreasonable (too small). The Google team provided a statistical explanation for this agreement based on three premises. These premises are unreasonable as well, and they contradict various other experimental findings. My post gets into a few more details, and our papers go into it in much more detail. I will gladly explain and discuss the technical statistical reasons why the deviations are statistically unreasonable.
5) “Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same”
Scott, do you have a reference or link to this claim that the exponential falloff pattern is the same? Of course, one way (that I always suggested) to study the concern regarding the “too good to be true” a priori prediction in Google’s experiment is to compare with IBM quantum computers."
This is covered by the article. A board game might take 4 people and 2 hours to play. If your three friends didn't have fun with a board game the first time, they probably won't want to play again, and so you won't be able to play again either. Therefore there is a strong drive among boardgamers to play the "full" game the first time, because there might not be a second time.
It also makes it harder to design a game that even has a simple version that is fun to play. Take your Texas Hold'em example and imagine it takes 2 hours to play one game. If you start with a version that has all cards face-up and no betting, people would conclude that Texas Hold'em is a supremely boring game and wouldn't bother to try the full Texas Hold'em experience!
I've seen people teach by playing the open-face (or otherwise simplified version) of a single turn as a way to give an overview of core mechanics.
Naturally, this works better if the 2-hour game consists of dozens of 5-minute turns rather than a few half-hour ones.
If you show one hand of Texas Hold'em, and don't actually play it, but instead talk through what players might be thinking at various points, then you not only cover the mechanics, but sell the game (through the rhetorical device of dramatic irony: you emphasize that the players don't have complete information, and may try to mislead each other, and may come to wrong conclusions even without such deceit, and of course nobody knows what will come on the river...). But of course, it's difficult to disentangle that from a strategy discussion.
An American's life expectancy at birth is around 76. But the life expectancy among Americans who have already reached age 65 is more like 83.
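The gap is just a conditional expectation over the survival distribution. A toy sketch with a completely made-up distribution of ages at death (the real figures come from CDC life tables), only to show the mechanism:

```python
import random

random.seed(0)
# Made-up toy model of ages at death: early deaths drag the at-birth
# average down, while the 65+ average is conditioned on having survived.
ages_at_death = [random.gauss(80, 10) for _ in range(100_000)]
ages_at_death += [random.uniform(0, 65) for _ in range(15_000)]

at_birth = sum(ages_at_death) / len(ages_at_death)
reached_65 = [a for a in ages_at_death if a >= 65]
at_65 = sum(reached_65) / len(reached_65)

print(f"expectancy at birth: {at_birth:.1f}")
print(f"expectancy given you reached 65: {at_65:.1f}")
```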