Yes. All protons are indistinguishable from each other. No matter what aspect of a proton you measure, it will be identical to any other proton you could measure instead.
This has important implications.
Consider a quantum mechanics experiment where you emit a proton at A and try to detect it at A′. You will find that there is some quantifiable chance of detecting the particle at A′ at some time T. Call that probability P.
Now consider a second experiment where you emit one proton at A and another at B, and try to detect them at A′ and B′. What is the probability of finding a proton at A′ at time T? You will find that you get a different number! This is because the proton from A could travel to A′ and be detected, but the proton from B could travel to A′ and be detected as well. Since you cannot distinguish the two protons, you cannot distinguish between these two outcomes, and so the probability must be different from P.
Your hypothetical seems to describe a current inability to differentiate between protons, rather than convincing me that protons must be identical. Is this like the monsters on old sea maps, just a gate around an unknown?
In quantum mechanics, probabilities are given by the square of the absolute value of more fundamental quantities called amplitudes. When something happens in two ways that can be distinguished, you must add the probabilities. When something happens in two indistinguishable ways, you must add the amplitudes, which yields a different probability after squaring. For example, 0.3^2 + 0.4^2 != (0.3 + 0.4)^2. Thus, you can verify experimentally whether particles are or are not distinguishable.
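To make that concrete, here is a minimal sketch in Python of the two addition rules, using made-up amplitude values (real numbers here, though amplitudes are complex in general):

    # Two ways for a detection event to happen, with made-up amplitudes.
    # Probabilities are |amplitude|^2.
    a1 = 0.3 + 0.0j  # amplitude for "proton from A reaches the detector"
    a2 = 0.4 + 0.0j  # amplitude for "proton from B reaches the detector"

    # Distinguishable ways: add the probabilities.
    p_distinguishable = abs(a1)**2 + abs(a2)**2  # 0.09 + 0.16 = 0.25

    # Indistinguishable ways: add the amplitudes, then square.
    p_indistinguishable = abs(a1 + a2)**2        # 0.7^2 = 0.49

    print(p_distinguishable, p_indistinguishable)

The two rules give measurably different numbers, which is what makes indistinguishability an experimental question rather than a matter of taste.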
> you can verify experimentally whether particles are or are not distinguishable
Thank you for a terrific explanation. Could you please go one layer deeper? Why does whether probabilities or amplitudes are summed imply fungibility (or its absence)?
Well, I am not a physicist, but I can fake it well enough for HN :-)
Before I say anything, if you have never heard of amplitudes before, you should read the Feynman lectures on physics vol. III, which you can find here: https://www.feynmanlectures.caltech.edu/III_toc.html Specifically, read chapters 1 and 3. The specific situation of distinguishable vs. indistinguishable particles is described in section 3-4.
Your question is kind of phrased backwards. Probabilities and amplitudes are human inventions to describe Nature's behavior, so by themselves they don't imply anything about Nature. The implication is the other way around: Nature has decided that some particle pairs are indistinguishable (proton vs. proton) and some are distinguishable (proton vs. neutron). Different rules apply to the two cases.
This indistinguishability is a pure quantum phenomenon. In a classical world, all objects are distinguishable: you can in principle label all protons and know which is which. But Nature does not work like that, and there exists this peculiar notion that you cannot tell two protons apart, not even in principle. There is no deeper explanation of this phenomenon AFAIK; it is what it is.
You can tell experimentally whether two particles are or are not distinguishable by running the experiment as in Feynman 3-4. If you observe a distribution consistent with the add-amplitude rule, then the particles are indistinguishable. Any attempt to distinguish them leads to the contradictions explained in Chapter 1.
There is an even more peculiar phenomenon. Despite the particles being indistinguishable, swapping two of them is not necessarily a no-op, because it can change the amplitudes: for fermions the amplitude flips sign, while for bosons it does not. You must still add amplitudes, but the amplitudes are now different, yet different in such a subtle way that you still cannot tell the particles apart. Feynman 3-4 tells you how the amplitudes change during the swap, with a deeper explanation in chapter 4.
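A minimal sketch of that swap rule, with made-up single-particle amplitudes (the +/- sign is the standard boson/fermion exchange convention):

    # Amplitudes for "particle 1 -> detector X, particle 2 -> detector Y"
    # and for the swapped assignment. Values are made up for illustration.
    f = 0.5 + 0.0j  # amp(1 -> X) * amp(2 -> Y)
    g = 0.2 + 0.0j  # amp(1 -> Y) * amp(2 -> X), i.e. the swap

    # Indistinguishable particles: the two assignments are one outcome,
    # so the amplitudes add, with a sign that depends on the species.
    p_bosons    = abs(f + g)**2           # +1 under swap -> 0.49
    p_fermions  = abs(f - g)**2           # -1 under swap -> 0.09

    # Distinguishable particles would instead give:
    p_classical = abs(f)**2 + abs(g)**2   # 0.29

The swap changes the relative sign of the two contributions, not anything you could measure about either particle on its own, which is why the particles remain indistinguishable.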
Because you can construct experiments where the proton from source A ends up at location B, and the proton from source C ends up at location D, or the proton from A ends up at D and the proton from C ends up at B. (Or some other possibilities.) You find that the A→B, C→D possibility's amplitude sums with the A→D, C→B possibility: i.e., that they're the same indistinguishable final state.
If swapping two things around gives a result indistinguishable from not swapping them, that's fungibility.
Photons of different polarizations are nonidentical, but if this argument is true, it would also prove that horizontally and vertically polarized photons were identical... in an experiment insensitive to polarization. I do not believe this answers the original question.
Identical means that either bosonic or fermionic conditions are applied to the joint tensor-product'd wavefunctions of multi-particle systems. It's well defined, and although I was hoping this thread would offer a more grounded definition I am not sure if it succeeded.
I am sorry, but I am lost. If they both have probabilities or amplitudes of 1, would that not lead to a joint probability of 200% in the nonidentical case and 400% in the identical one?
Sorry, I didn't mean to imply that my comment should apply literally to all cases. I am just pointing out to parent that particles can be indistinguishable in principle, and not just as a technological limitation of our measurements. Moreover, there is an experimental way to tell the difference between distinguishable and indistinguishable, roughly based on the difference between probabilities and amplitudes. To dig deeper one must look at the details, e.g. in Feynman's lectures vol. III.
The statement that protons are indistinguishable is not strictly correct either, because protons have a spin. Protons with the same spin are indistinguishable, but you can tell apart protons with different spins. The spin of a proton, measured along any given axis, can only assume two values, so effectively there are two classes of protons, indistinguishable within each class.
In your specific case, it is clearly false that the probability of having one particle in one place is 200%. However, my statement still holds for expectations, and you can end up with an expectation of two particles in the same place. In the indistinguishable case, you must compute expectations based on amplitudes, not probabilities.
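One concrete, textbook instance of this counting is the Hong-Ou-Mandel effect for photons (which are bosons); a sketch of that standard calculation, nothing specific to this thread:

    import math

    # 50:50 beamsplitter: input modes a, b -> output modes c, d.
    # a -> (c + d)/sqrt(2),  b -> (c - d)/sqrt(2)
    s = 1 / math.sqrt(2)
    amp_a_to_c, amp_a_to_d = s, s
    amp_b_to_c, amp_b_to_d = s, -s

    # One particle enters at a, one at b. "Coincidence" = one exits at c
    # and one at d. There are two ways: (a->c, b->d) and (a->d, b->c).
    way1 = amp_a_to_c * amp_b_to_d   # -0.5
    way2 = amp_a_to_d * amp_b_to_c   # +0.5

    # Distinguishable particles: add the probabilities.
    print(abs(way1)**2 + abs(way2)**2)  # 0.5

    # Indistinguishable bosons: add the amplitudes first.
    print(abs(way1 + way2)**2)          # 0.0

The coincidence probability drops to zero, so the two photons always leave through the same port: the expected number of particles found in one place goes up, even though no individual probability ever exceeds 100%.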
Yeah, spin adds a new level of complications. You can distinguish between two otherwise-indistinguishable protons if they have opposite spins, but the spin of a proton can also change over time (usually due to interactions with other particles, such as stray radio waves passing through your experiment).
Going back to the experiment that I described, you can imagine that the particles are released at A and B with opposite spins, and then the detector at A’ only detects the spin that corresponds to the particle at A. This causes you to measure yet another probability, distinct from the other two, because there are now more possibilities and there are still multiple ways to cause the detector to find something. It could detect the proton from A, but the proton from A could also have its spin flipped and thus not be detected. The particle from B could arrive at A’ with the wrong spin and not be counted, or it could have its spin flipped along the way and be counted. You still cannot tell which proton you detected!
Similar complications occur with polarization of photons, which someone else mentioned in one of the comments. It’s worse though because polarization is a continuous quantity, and there are more ways to change it.
In addition to what has already been said, I want to point out that there is a normalization step as well. The amplitudes are always calculated in such a way that the probabilities would never add up to more than 100%. This normalization is generally baked into the wave function of the system you are studying, but it can also be done separately.
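A sketch of that separate normalization step, with made-up amplitudes:

    import math

    # Unnormalized amplitudes for three mutually exclusive outcomes.
    amps = [0.3 + 0.4j, 0.5 + 0.0j, 0.0 + 0.2j]

    # Divide by the norm so the probabilities sum to exactly 1.
    norm = math.sqrt(sum(abs(a)**2 for a in amps))
    amps = [a / norm for a in amps]

    probs = [abs(a)**2 for a in amps]
    print(sum(probs))  # 1.0 (up to floating-point rounding)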
Incidentally, amplitudes are actually complex numbers. You can think of them as little arrows, like this: →, or this: ↖. In fact, these arrows also rotate with the passage of time; they trace out little circles. To calculate the probability, we square the absolute value of the complex number. The absolute value of a complex number is equal to the length of the arrow, and squaring a length gives you an area. Thus the probability is proportional to the area of the circle traced out by the rotating arrow.
Events with high probability correspond to long arrows (big amplitudes), and low probability events have short arrows (small amplitudes). Amplitudes can cancel out when added together if they point in opposite directions. Thus we observe that some sequences of events have very low probability. We sometimes say that these events interfere with each other.
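Here is that cancellation in a few lines, with made-up arrows of equal length pointing in opposite directions:

    import cmath

    a = 0.5 * cmath.exp(1j * 0.0)       # arrow pointing "right"
    b = 0.5 * cmath.exp(1j * cmath.pi)  # arrow pointing "left"

    # Individually, each path is reasonably likely...
    print(abs(a)**2, abs(b)**2)  # 0.25 0.25

    # ...but their amplitudes cancel: destructive interference.
    print(abs(a + b)**2)         # ~0.0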
Sometimes this interference seems mysterious, as in the double-slit experiment, and other times it seems very mundane. In real life we rarely bother to calculate the probability that the batter will hit the ball before the pitcher throws it, but calculating the correct answers in quantum mechanics requires taking into account many such unusual events.
There is a lot of circular reasoning involved in quantum information and thermodynamics.
It's all totally, perfectly self consistent, but it does not derive from first principles like set theory or mathematical logic do. Physics is an experimental science; they are not required to state their axioms. Oftentimes they do (QFT for example), but the most glaring case where they don't is anything involving information.
The whole postulate that information is physical is something that was stumbled upon, and then turned out to explain a whole bunch of other weird things like heat and entropy, and some of those explanations in turn implied that information is physical.
I suspect that our current efforts to build cryptographically-relevant quantum computers are a lot like the efforts to build perpetual motion machines in the 1700s. Our current understanding of things isn't wrong, but there is some undiscovered general principle that we keep butting up against, so we'll keep trying to build these things until we figure out why nature keeps blocking us. That discovery -- rather than a computationally-useful device -- will be the most important result of all the quantum computing research going on right now.
> until we figure out why nature keeps blocking us
A quantum computer only behaves like our mathematical ideal quantum computer if it's sufficiently isolated from the rest of the universe: otherwise you don't get the “indistinguishable states” thing and the amplitudes don't sum, and the computer stops computing and starts being a regular ol' physics experiment.
It's an engineering problem, as far as I understand. Get things cold enough, get things isolated enough, so they stay entangled with each other and not with the rest of the universe.
Is it? In engineering problems you have to consider practical feasibility. Could it turn out that the energy needed to keep things isolated would be better spent on classical computation?
The cosmic microwave background makes even deep intergalactic space warmer (3 or 4 K) than the inside of Earth’s best fridges (the best reach as low as nanokelvin). 3 or 4 K is easy to achieve using liquid helium cooling, which is presumably why the outer stages of e.g. Google’s Sycamore cryostat operate in that range (the chip itself sits at millikelvin temperatures). But achieving those temperatures is a regular occurrence in labs all over the world: you call AirGas or somebody and they deliver a dewar of liquid He.
And, close to a star, you have enormous numbers of neutrinos flying through your experiment. You have vibrations from earthquakes. You have electromagnetic coupling. Etc., etc.
The temperatures are easy enough; you just have to compensate for thermal noise. The isolation isn't; without isolation, your signal is wrong.
But unless you're doing a computation without I/O (i.e. no reading back the results, no providing inputs), you need coupling into the rest of the universe, and typically into a low-entropy part of the universe like the Earth, where you have entities that care about the computation. So, I am not convinced having a deeply isolated part of the universe is the answer; in fact, the isolation vs. signal quality tradeoff makes it sound more and more like a fundamental limitation of practical concern.
> But unless you're doing a computation without I/O (i.e. no reading back the results, no providing inputs), you need coupling into the rest of the universe, and typically into a low-entropy part of the universe like the Earth, where you have entities that care about the computation.
With a quantum computer, you can only do that at the end of the computation. Not half-way through; that'll cause the computer to start doing a different (unwanted) computation instead. While the calculation is happening, you need (a high probability of) total isolation from the rest of the universe, so that the intermediate state of the computer only interferes with itself.
> So, I am not convinced having a deeply isolated part of the universe is the answer; in fact, the isolation vs. signal quality tradeoff makes it sound more and more like a fundamental limitation of practical concern.
It is a fundamental limitation of practical concern! Just like the need to keep conventional computer processors cool or they melt, or the fundamental limitations on the bandwidth that a radio frequency can give you. The people who deal with these limitations are called engineers.
That seems plausible, but it is currently unsupported by evidence. In the past decade, quantum computers have gotten far more powerful, and progress doesn't seem to be stalling out.
No. Bell’s Theorem states that if subatomic particles have any “hidden variable” which we cannot currently measure, then that “hidden variable” will either have no effect at all on the particles, or it will have to have instantaneous “nonlocal” effects on them.
If this hidden variable has no effect at all, then it is useless. Most physicists don’t care to include extra variables in a quantum mechanical theory that have no effect at all. By definition, they could not be measured, and so they would be extra baggage to carry around to no effect.
Physicists don’t much like nonlocal theories either. Any theory that requires information about the particle at A to travel instantaneously to B in order to change the state of the particle there is going to be hard to sell. You might have heard of a fellow called Einstein, whose theory of relativity implies that no signal can travel faster than the speed of light.
Thus, most physicists take the easier road, as it requires only that subatomic particles are indistinguishable. All this means is that particles are too simple to be uniquely identified. You can in principle tell the difference between two baseballs, because they have different patterns of wear and other markings on their surface. But those wear patterns and markings are formed out of the complex arrangements of trillions or quadrillions of atoms. It is easy to see how rearranging the ink molecules on the surface of a baseball could create a unique baseball, or how selectively removing molecules from the surface of the leather (by scratching it, for example) could do the same.
But subatomic particles are too simple to have that kind of internal state. Even atoms are only slightly distinguishable. Most carbon atoms have 6 electrons, but a carbon ion might have 5 or 7 electrons. You can distinguish between the atom and the ion, but not between two atoms or two ions.
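If you want to see how Bell’s Theorem becomes quantitative, here is a sketch of the standard CHSH test (textbook singlet-state correlations and the usual measurement angles; nothing specific to this thread):

    import math

    # For two spin-1/2 particles in a singlet state, quantum mechanics
    # predicts correlation E(a, b) = -cos(a - b) between measurements
    # along directions a and b.
    def E(a, b):
        return -math.cos(a - b)

    # Standard CHSH measurement angles (radians).
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

    # Any local hidden-variable theory satisfies |S| <= 2.
    # Quantum mechanics predicts |S| = 2*sqrt(2) ~= 2.83.
    print(abs(S), 2 * math.sqrt(2))

Measured values violate the classical bound of 2, which is why local hidden-variable theories are ruled out.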
Every description of a scientific idea always carries the implied disclaimer that we are talking about the present state of our knowledge. Every "fact" is in fact a belief that's held on a tentative basis pending the introduction of better evidence. That sounds quite noncommittal, but in fact it's the best that we can hope for.
One interesting thing about protons is that you can describe the effect of exchanging any two of them in precise terms, and end up with macroscopic predictions that can be tested. So you're not limited to just trying to measure every proton in a bag to see if they're all the same. There are other ways to test the hypothesis.
Think of it this way: we can experimentally figure out what the probabilities are; they are an observed thing. The only way to make sense of these probabilities is if all protons are indistinguishable.
In my statistical mechanics course, we went through an illuminating exercise where we started by trying to take account of every atom in a gas cloud. We started taking limits and making assumptions. One of them was that all atoms are indistinguishable from each other. This divides the number of distinct states of the system by N! (factorial, not surprise). After making that assumption, out pops the ideal gas law.
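A sketch of where that N! shows up (the standard Gibbs-paradox bookkeeping; the thermal wavelength is set to 1 and the numbers are made up):

    import math

    # Log of the state count for N atoms in volume V (thermal wavelength = 1).
    def log_states(N, V):
        return N * math.log(V)

    # Entropy in units of k_B, without and with the N! correction.
    def S_distinguishable(N, V):
        return log_states(N, V)

    def S_indistinguishable(N, V):
        return log_states(N, V) - math.lgamma(N + 1)  # subtract ln(N!)

    # Doubling the system should double the entropy (extensivity).
    N, V = 10**6, 10**7
    print(S_distinguishable(2*N, 2*V) - 2 * S_distinguishable(N, V))
    # = 2N*ln(2) ~ 1.4e6: wildly non-extensive (the Gibbs paradox)
    print(S_indistinguishable(2*N, 2*V) - 2 * S_indistinguishable(N, V))
    # ~ 7.5, a negligible log correction: extensive, paradox resolved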
I was thinking the same thing. If you read the experiment described but replace the word "proton" with "cat" -- I would just assume that the scientists in question were from a society with very coarse senses and measurements, not that all cats are indistinguishable.
It's a mistake to try to make analogies of quantum interactions with everyday objects like cats. It's not as useful as one might hope, and your intuition gets in the way.
Not if they interfere in just the right way to keep the probability the same as P.
I see a second problem with this experiment - in the double slit experiment, there is one electron interfering with itself. If you released two protons at the same time, they'd interact with each other and change the probabilities, even classically.
You mean like the "One-electron universe" postulate [1]? First, I don't think it's really useful beyond being a fun idea to consider. Also there's a lot more matter than antimatter, which raises some... logistical problems. Also, unlike electrons/positrons, protons are not fundamental particles, they're made up of quarks which throws a whole other wrench in the idea.