So to summarize, standard model theory predicted that the radius of the proton would be 0.84 femtometers. High energy electron scattering experiments from the 90s and 2000s suggested it was more like 0.88 femtometers, a discrepancy large enough to prompt some consideration that the standard model might have to be revised. These researchers performed a reanalysis of the old data, correcting for some confounding phenomena (might have been neutron formation), and found that the old experimental data is consistent with the 0.84 femtometers predicted both by theory and by newer, lower-energy scattering experiments.
I’ll leave it as an exercise for the reader to decide whether the title of the article is clickbait.
I've been to oodles of talks on the problem. It has had special sessions at APS meetings and I've watched acquaintances and colleagues expend meaningful fractions of their careers attempting to resolve the Proton Radius Puzzle.
Pohl's measurements were incontrovertible, yet they disagreed with decades of work. It has been a big problem in a quiet community for some time.
If someone has a reliable angle to resolve the problem, it is big news. Enough really good physicists have tried and failed to resolve the conundrum that one should wait to see if this new approach holds up, too.
The headline gives the impression that the researchers found a new discrepancy where protons are found to be smaller than theory would predict. In actuality, the researchers are proposing a resolution to a preexisting discrepancy.
Additionally, I’m not sure why you’re saying we should wait to see whether the results hold up, if you don’t think writing “probably” in the headline is clickbait.
I can't parse your second question. Are you saying that A) 'we should wait to see if the results hold up' and B) 'having "probably" in the headline is fine' are incompatible? They seem compatible to me; it's only !A & B or A & !B that would be odd positions to hold.
In general I think that whenever a new theory or analysis is published, “probably” indicates a far greater degree of confidence than is justified until the theory / analysis gains broad acceptance.
Your argument seems to be that the topic is relevant, active, and frequently discussed, and is therefore newsworthy. And because the content of the article is newsworthy, the headline for it is not clickbait.
My definition of clickbait is a little different than that. It’s not so much the newsworthiness of the item behind the headline (one man’s old news is another man’s new insight), but rather the form of the headline itself. The more the headline plays on my emotions to pull me in, to try and push its relevancy ahead of an otherwise objective cataloging of information, the more clickbaity it is (to me at least).
This particular headline is far from the worst, but the air of mystery it elicits seems to waft “psst, you gotta check this out.” So imho, it is slightly clickbaity.
(I was going to use an analogy, but a previous HN article today taught me it’s better to use those in non-debate contexts).
> to summarize, standard model theory predicted that the radius of the proton would be 0.84 femtometers. High energy electron scattering experiments from the 90s and 2000s suggested it was more like 0.88 femtometers
I read the wikipedia articles for "proton" and "proton radius puzzle" and the press release we're commenting on. I didn't notice any mention of a theoretical prediction of proton radius - only some older experiments suggesting the larger size, and some newer experiments suggesting the smaller size.
Ignorant outsider to the field here: I've often heard the claim that quantum mechanics has been verified to amazing accuracy, that its predictions match with reality to the maximum degree that our instrumental precision allows, etc. A 5% difference seems big enough that at least some of our experiments should have error limits less than that. So how is it that this is only now being found, and there's still uncertainty surrounding it?
I feel like I'm missing something fundamental here, and I'd like to know what it is.
Real physicists can correct me, but here's my understanding of it: if only the electromagnetic force is involved, the numbers provided by QED are amazingly accurate (for example, calculating the magnetic moment of an electron). But when the strong force is involved, as for the radius of the proton, the calculations are much more difficult: you can't calculate what the radius should be.
Part of this is also that a proton isn't an elementary particle. In fact it's not just the three quarks typically ascribed to it; those account for only a little over 1% of its mass. The rest is a maelstrom of virtual particles.
Defining the radius as anything more than a statistical quantity runs up against the uncertainty principle for all these constituent particles.
I am a LQCD practitioner. “Very hard” means that even with a nontrivial fraction of all the available leadership-class supercomputing in the world we don’t have enough computer power.
There are lots of different numbers that you can use quantum mechanics to predict. It turns out that the size of the proton is both harder to predict and harder to measure than many of those other numbers.
The fine structure constant can be measured in two different ways.
One is to measure the g-factor of the electron in a Penning trap. This can be related to the fine structure constant using quantum electrodynamics (QED).
The other is to measure the recoil that an atom receives when it absorbs a photon in an atom interferometer. By combining the result with another well known constant, the fine structure constant can be calculated.
The results of both methods agree to 12 digits, which shows that the QED calculations of the electron g-factor are correct on that level.
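A rough numerical illustration of why those two routes can even be compared (CODATA-ish constants typed in by hand, so treat it as a sketch, not a determination): the Rydberg constant ties alpha to h/m_e via alpha^2 = 2*R_inf*h/(m_e*c), and h/m_e (via h/M plus mass ratios) is what the photon-recoil experiments effectively pin down.

    R_inf = 10973731.568      # Rydberg constant, 1/m
    h     = 6.62607015e-34    # Planck constant, J*s (exact by definition)
    m_e   = 9.1093837015e-31  # electron mass, kg
    c     = 299792458.0       # speed of light, m/s (exact)

    alpha = (2 * R_inf * h / (m_e * c)) ** 0.5
    print(1 / alpha)          # ~137.036, the familiar inverse fine structure constant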
That 5% smaller value was between 4 and 7 sigma away from the old value (depending how you average measurements). It's absolutely not within expectation.
That’s just a back-of-the-envelope calculation. Nobody knows how to do a bona fide calculation because we don’t have an accepted quantum theory of gravity.
I guess that's why they invented slide rules. I could get within 3 significant digits on a slide rule (and compute the exponent in my head), whereas I guess the margin of this envelope is too small to contain the calculation.
I wish they explained how exactly the fact that protons are 5% smaller than previously thought actually affects our current knowledge. Or does it just not?
(Disclaimer: I work in this field, having done one of the original measurements. I do not believe this is a settled case at all.)
If the proton is indeed smaller, it changes the Rydberg constant by several sigma. The Rydberg constant is one of, if not the, best-determined constants of nature. This has implications for precision tests of QED, for example.
In this context, it's the root-mean-square radius of the electric charge distribution. (Or more precisely, it's related to the slope of the electric form factor at Q^2=0.) There is also at least a magnetic radius (similar to the electric one), and a gravitational radius.
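For reference, the standard textbook relations, as I understand them (the density version is the nonrelativistic picture):

    \langle r_E^2 \rangle = -6\,\frac{dG_E(Q^2)}{dQ^2}\bigg|_{Q^2=0}
    \qquad\text{or, for a normalized charge density }\rho,\qquad
    \langle r^2 \rangle = \int d^3r \; r^2 \rho(r)

and the quoted "radius" is the square root of that.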
Does the electric charge ever reach 0, or does it keep getting less and less, so that for practical purposes we have to set a cut-off limit and count the boundary of the proton from there?
As we understand it, the extent is infinite. However, isolated charges have a hard time staying isolated. They tend to rearrange the charges around them, so that outside some vicinity the net effect of their charge becomes effectively zero. I believe the term for this is "shielding."
Just a layman but I think they usually define size via the halfway distance between the centers of two identical bound particles. Not entirely sure what that would be in this case though, given helium-2 is unstable and helium-3 might give a different result. (?)
As far as I know the size in these discussions is some kind of scattering cross-section. So roughly it's just how likely you are to hit it / how precise you need to aim.
I don't know the exact details of the definition though.
You are right about the atomic radii often quoted in chemistry, but the sizes of subatomic particles are usually defined in ways related to their cross-sections.
But isn't the cross section dependent upon what is interacting with them? Like, my understanding is that the cross section of hafnium is massive with respect to neutron capture, but not with respect to photons.
Although the size isn't simply the square root of the cross section, you've hit on a truth that you'd get different sizes if you were asking about the distributions of different things.
They do for a very short duration (10^-9 s.) Nuclear fusion happens by protons binding via the strong force, then one beta-decays into a neutron before the electromagnetic repulsion pushes them beyond the range of the strong force. The beta decay happens fast enough in 0.01% of proton-proton collisions.
Yes. All protons are indistinguishable from each other. No matter what aspect of a proton you measure, it will be identical to any other proton you could measure instead.
This has important implications.
Consider a quantum mechanics experiment where you emit a proton at A and try to detect it at A′. You will find that there is some quantifiable chance of detecting the particle at A′ at some time T. Call that probability P.
Now consider a second experiment where you emit a proton at both A and B, and try to detect them at A′ and B′. What is the probability of finding a proton at A′ at time T? You will find that you get a different number! This is because the proton at A could travel to A′ and be detected, but also the proton at B could travel to A′ and be detected too. Since you cannot distinguish the two protons, you won’t be able to distinguish between these two outcomes, and so the probability must be different from P.
Your hypothetical seems to describe a current inability to differentiate between protons, rather than convincing me that protons must be identical. Is this like the monsters on old sea maps, just a gate around an unknown?
In quantum mechanics, probabilities are given by the square of the absolute value of more fundamental quantities called amplitudes. When something happens in two ways that can be distinguished, you must add the probabilities. When something happens in two indistinguishable ways, you must add the amplitudes, which yields a different probability after squaring. For example, .3^2+.4^2 != (.3+.4)^2. Thus, you can verify experimentally whether particles are or are not distinguishable.
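A quick toy sketch of that rule in Python, with made-up amplitudes (the 0.3 and 0.4 are arbitrary, not anything physical):

    # Distinguishable ways: add probabilities. Indistinguishable ways: add amplitudes, then square.
    a1, a2 = 0.3, 0.4            # toy amplitudes for the two ways something can happen
    print(a1**2 + a2**2)         # distinguishable: 0.25
    print(abs(a1 + a2)**2)       # indistinguishable, amplitudes in phase: 0.49
    print(abs(a1 - a2)**2)       # indistinguishable, amplitudes out of phase: 0.01

Same two ways, three different answers depending on whether (and how) the amplitudes are allowed to interfere, which is exactly what an experiment can discriminate.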
> you can verify experimentally whether particles are or are not distinguishable
Thank you for a terrific explanation. Could you please go one layer deeper? Why does whether probabilities or amplitudes are summed imply fungibility (or its absence)?
Well, I am not a physicist, but I can fake it well enough for HN :-)
Before I say anything, if you have never heard of amplitudes before, you should read the Feynman lectures on physics vol. III, which you can find here: https://www.feynmanlectures.caltech.edu/III_toc.html Specifically read chapter 1 and chapter 3. The specific situation about distinguishable/indistinguishable particles is described in section 3-4.
Your question is kind of phrased backwards. Probabilities and amplitudes are human inventions to describe Nature's behavior, so by themselves they don't imply anything about Nature. The implication is the other way around: Nature has decided that some particle pairs are indistinguishable (proton vs. proton) and some are distinguishable (proton vs. neutron). Different rules apply to the two cases.
This indistinguishability is a pure quantum phenomenon. In a classical world, all objects are distinguishable: you can in principle label all protons and know which is which. But Nature does not work like that, and there exists this peculiar notion that you cannot tell two protons apart, not even in principle. There is no deeper explanation of this phenomenon AFAIK; it is what it is.
You can tell experimentally whether two particles are or are not distinguishable by running the experiment as in Feynman 3-4. If you observe a distribution consistent with the add-amplitude rule, then the particles are indistinguishable. Any attempt to distinguish them leads to the contradictions explained in Chapter 1.
There is an even more peculiar phenomenon. Despite being indistinguishable, swapping two indistinguishable particles is not a no-op because it changes the amplitudes. You must still add amplitudes, but the amplitudes are different, yet different in such a subtle way that you still cannot tell the particles apart. Feynman 3-4 tells you how the amplitudes change during the swap, with a deeper explanation in chapter 4.
Because you can construct experiments where the proton from source A ends up at location B, and the proton from source C ends up at location D, or A ends up at D and B ends up at C. (Or some other possibilities.) You find that the A→B, C→D possibility's amplitude sums with the A→D, B→C possibility: i.e., that they're the same indistinguishable final state.
If swapping two things around gives a result indistinguishable from not swapping them, that's fungibility.
Photons of different polarizations are nonidentical, but if this argument is true, it would also prove that horizontally and vertically polarized photons were identical... in an experiment insensitive to polarization. I do not believe this answers the original question.
Identical means that either bosonic or fermionic conditions are applied to the joint tensor-product'd wavefunctions of multi-particle systems. It's well defined, and although I was hoping this thread would offer a more grounded definition I am not sure if it succeeded.
I am sorry but I am lost. If they both have probabilities or amplitudes of 1 would that not lead to a joint probability of 200% in the nonidentical case and 400% in the identical one?
Sorry, I didn't mean to imply that my comment should apply literally to all cases. I am just pointing out to parent that particles can be indistinguishable in principle, and not just as a technological limitation of our measurements. Moreover, there is an experimental way to tell the difference between distinguishable and indistinguishable, roughly based on the difference between probabilities and amplitudes. To dig deeper one must look at the details, e.g. in Feynman's lectures vol. III.
The statement that protons are indistinguishable is not strictly correct either, because protons have a spin. Protons with the same spin are indistinguishable, but you can tell apart protons with different spin. The spin of protons can only assume two values, so effectively there are two classes of protons, indistinguishable within the class.
In your specific case, it is clearly false that the probability of having one particle in one place is 200%. However, my statement still holds for expectations, and you end up with an expected two particles in one place. In the indistinguishable case, you must compute expectations based on amplitudes, not probabilities.
Yea, spin adds a new level of complications. You can distinguish between two otherwise–indistinguishable protons if they have opposite spin, but the spin of a proton can also change over time (usually due to interactions with other particles, such as stray radio waves passing through your experiment).
Going back to the experiment that I described, you can imagine that the particles are released at A and B with opposite spins, and then the detector at A’ only detects the spin that corresponds to the particle at A. This causes you to measure yet another probability, distinct from the other two, because there are now more possibilities and there are still multiple ways to cause the detector to find something. It could detect the proton from A, but the proton from A could also have its spin flipped and thus not be detected. The particle from B could arrive at A’ with the wrong spin and not be counted, or it could have its spin flipped along the way and be counted. You still cannot tell which proton you detected!
Similar complications occur with polarization of photons, which someone else mentioned in one of the comments. It’s worse though because polarization is a continuous quantity, and there are more ways to change it.
In addition to what has already been said, I want to point out that there is a normalization step as well. The amplitudes are always calculated in such a way that the probabilities would never add up to more than 100%. This normalization is generally baked into the wave function of the system you are studying, but it can also be done separately.
Incidentally, amplitudes are actually complex numbers. You can think of them as little arrows, like this: →, or this: ↖. In fact, these arrows are also rotating with the passage of time; they trace out little circles. To calculate the probability, we square the absolute value of the complex number. The absolute value of a complex number is equal to the length of the arrow, and squaring a length gives you an area. Thus the probability is essentially the same as the area of the circle traced out by the rotating arrow.
Events with high probability correspond to long arrows (big amplitudes), and low probability events have short arrows (small amplitudes). Amplitudes can cancel out when added together if they point in opposite directions. Thus we observe that some sequences of events have very low probability. We sometimes say that these events interfere with each other.
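If it helps, here is a tiny sketch of the "rotating arrows" picture in Python; the lengths and phases are arbitrary toy values:

    import cmath
    # An amplitude as a little arrow: a complex number length*exp(i*phase).
    # The phase advances with time / path length; arrows that arrive in phase
    # reinforce, and arrows half a turn apart cancel.
    def arrow(length, phase):
        return length * cmath.exp(1j * phase)

    a = arrow(0.5, 0.0)
    b_in_phase = arrow(0.5, 0.0)
    b_half_turn = arrow(0.5, cmath.pi)

    print(abs(a + b_in_phase)**2)    # 1.0  -> constructive interference
    print(abs(a + b_half_turn)**2)   # ~0.0 -> destructive interference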
Sometimes this interference seems mysterious, as in the double–slit experiment, and other times it seems very mundane. In real life we rarely bother to calculate the probability that the batter will hit the ball before the pitcher throws it, but calculating the correct answers in quantum mechanics requires taking into account many such unusual events.
There is a lot of circular reasoning involved in quantum information and thermodynamics.
It's all totally, perfectly self consistent, but it does not derive from first principles like set theory or mathematical logic do. Physics is an experimental science; they are not required to state their axioms. Oftentimes they do (QFT for example), but the most glaring case where they don't is anything involving information.
The whole postulate that information is physical is something that was stumbled upon, and then turned out to explain a whole bunch of other weird things like heat and entropy, and some of those explanations in turn implied that information is physical.
I suspect that our current efforts to build cryptographically-relevant quantum computers are a lot like the efforts to build perpetual motion machines in the 1700s. Our current understanding of things isn't wrong, but there is some undiscovered general principle that we keep butting up against, so we'll keep trying to build these things until we figure out why nature keeps blocking us. That discovery -- rather than a computationally-useful device -- will be the most important result of all the quantum computing research going on right now.
> until we figure out why nature keeps blocking us
A quantum computer only behaves like our mathematical ideal quantum computer if it's sufficiently isolated from the rest of the universe: otherwise you don't get the “indistinguishable states” thing and the amplitudes don't sum, and the computer stops computing and starts being a regular ol' physics experiment.
It's an engineering problem, as far as I understand. Get things cold enough, get things isolated enough, so they stay entangled with each other and not with the rest of the universe.
Is it? In engineering problems you have to consider practical feasibility. Could it turn out that the energy spent getting things isolated would be better spent on classical computation instead?
The cosmic microwave background makes even deep intergalactic space warmer (3 or 4 K) than the inside of Earth’s best fridges (the best as low as nanokelvin). 3 or 4 K is easy to achieve using liquid helium cooling, which is presumably why e.g. Google’s Sycamore chip operates in that range. Achieving those low temperatures is a regular occurrence in labs all over the world: you call AirGas or somebody and they deliver a dewar of liquid He.
And, close to a star, you have several neutrinos flying through your experiment. You have vibrations from earthquakes. You have electromagnetic coupling. etc, etc.
The temperatures are easy enough; you just have to compensate for thermal noise. The isolation isn't; without isolation, your signal is wrong.
But unless you're doing a computation without I/O (i.e. no reading back the results, no providing inputs), you need coupling into the rest of the universe, and typically into a low-entropy part of the universe like the Earth, where you have entities that care about the computation. So, I am not convinced having a deeply isolated part of the universe is the answer; in fact, the isolation vs. signal quality tradeoff makes it sound more and more like a fundamental limitation of practical concern.
> But unless you're doing a computation without I/O (i.e. no reading back the results, no providing inputs), you need coupling into the rest of the universe, and typically into a low-entropy part of the universe like the Earth, where you have entities that care about the computation.
With a quantum computer, you can only do that at the end of the computation. Not half-way through; that'll cause the computer to start doing a different (unwanted) computation instead. While the calculation is happening, you need (a high probability of) total isolation from the rest of the universe, so that the intermediate state of the computer only interferes with itself.
> So, I am not convinced having a deeply isolated part of the universe is the answer; in fact, the isolation vs. signal quality tradeoff makes it sound more and more like a fundamental limitation of practical concern.
It is a fundamental limitation of practical concern! Just like the need to keep conventional computer processors cool or they melt, or the fundamental limitations on the bandwidth that a radio frequency can give you. The people who deal with these limitations are called engineers.
That seems plausible, but it's currently very unsupported by evidence. In the past decade, quantum computers have gotten way more powerful, and progress doesn't seem to be stalling out.
No. Bell’s Theorem states that if subatomic particles have any “hidden variable” which we cannot currently measure, then that “hidden variable” will either have no effect at all on the particles, or it will have to have instantaneous “nonlocal” effects on them.
If this hidden variable has no effect at all, then it is useless. Most physicists don’t care to include extra variables in a quantum mechanical theory that have no effect at all. By definition, they could not be measured, and so they would be extra baggage to carry around to no effect.
Physicists don’t much like nonlocal theories either. Any theory that requires information about the particle at A to travel instantaneously to B in order to change the state of the particle there is going to be hard to sell. You might have heard of a fellow called Einstein, who proved that nothing can go faster than the speed of light.
Thus, most physicists take the easier road, as it requires only that subatomic particles are indistinguishable. All this means is that particles are too simple to be uniquely identified. You can in principle tell the difference between two baseballs, because they have different patterns of wear and other markings on their surface. But those wear patterns and markings are formed out of the complex arrangements of trillions or quadrillions of atoms. It is easy to see how rearranging the ink molecules on the surface of a baseball could create a unique baseball, or how selectively removing molecules from the surface of the leather (by scratching it, for example) could do the same.
But subatomic particles are too simple to have that kind of internal state. Even atoms are only slightly distinguishable. Most carbon atoms have 6 electrons, but a carbon ion might have 5 or 7 electrons. You can distinguish between the atom and the ion, but not between two atoms or two ions.
Every description of a scientific idea always carries the implied disclaimer that we are talking about the present state of our knowledge. Every "fact" is in fact a belief that's held on a tentative basis pending the introduction of better evidence. That sounds quite noncommittal, but in fact it's the best that we can hope for.
One interesting thing about protons is that you can describe the effect of exchanging any two of them in precise terms, and end up with macroscopic predictions that can be tested. So you're not limited to just trying to measure every proton in a bag to see if they're all the same. There are other ways to test the hypothesis.
Think of it this way: we can experimentally figure out what the probabilities are; they are an observed thing. The only way to make sense of these probabilities is if all protons are indistinguishable.
In my statistical mechanics course, we went through an illuminating exercise where we started by trying to take account of every atom in a gas cloud. We started taking limits and making assumptions. One of them was that all atoms are indistinguishable from each other. This decreases the possible states of the system by N! (factorial, not surprise). After making that assumption, out pops the ideal gas law.
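For anyone curious, a minimal sketch of that textbook route (the 1/N! below is exactly the indistinguishability assumption, usually called the Gibbs factor):

    Z_N = \frac{1}{N!}\left(\frac{V}{\lambda^3}\right)^N, \qquad
    \lambda = \frac{h}{\sqrt{2\pi m k_B T}}

    F = -k_B T \ln Z_N \approx -N k_B T\left[\ln\frac{V}{N\lambda^3} + 1\right]
    \quad\Rightarrow\quad
    P = -\frac{\partial F}{\partial V} = \frac{N k_B T}{V}

If I remember my stat mech right, PV = NkT actually pops out with or without the 1/N!; what the factor really buys you is an extensive entropy (it resolves the Gibbs paradox), which is where indistinguishability bites.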
I was thinking the same thing. If you read the experiment described but replace the word "proton" with "cat" -- I would just assume that the scientists in question were from a society with very coarse senses and measurements, not that all cats are indistinguishable.
It's a mistake to try to make analogies of quantum interactions with everyday objects like cats. It's not as useful as one might hope, and your intuition gets in the way.
Not if they interfere in just the right way to keep the probability the same as in P.
I see a second problem with this experiment - in the double slit experiment, there is one electron interfering with itself. If you released two protons at the same time, they'd interact with each other and change the probabilities, even classically.
You mean like the "One-electron universe" postulate [1]? First, I don't think it's really useful beyond being a fun idea to consider. Also there's a lot more matter than antimatter, which raises some... logistical problems. Also, unlike electrons/positrons, protons are not fundamental particles, they're made up of quarks which throws a whole other wrench in the idea.
Each solution to the Schrodinger equation describes a different kind of particle. We assume that QFT is deterministic and complete so each particle will have exactly the same properties as the others of its kind aside from position and momentum (as far as we can tell).
As a fun consequence, the theory treats all particles the same -- even so-called quasi-particles! The difference between fundamental and other kinds of particles is that the fundamental particles can exist (at least for a short time before decaying) in vacuo.
Good question. I'm not sure. We treat them like they are all the same size, which I assume is a function of them all being composed of pieces that we treat as fungible. On the other hand, it's not like anyone is measuring a proton's size with a pair of calipers, so them all being the same 'size' could simply be a function of them all having the same charge.
Protons are made of 3 quarks and a bunch of tiny gluons. If a big proton existed, it would be made of different or more quarks/stuff, and be called something else.
Now, is there a reason why there isn't a something else that acts like a proton?
That gets into Elementary particle physics and what combinations are stable and have matching charge.
Quarks have charge +- n/3 and generally come in triples.
Could the number of gluons vary? Maybe? But they wouldn't affect the size measurements much? And variations wouldn't be stable?
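The arithmetic for the usual combinations, for anyone who wants to see it worked out (up quarks carry +2/3, down quarks -1/3):

    \text{uud: } +\tfrac{2}{3}+\tfrac{2}{3}-\tfrac{1}{3} = +1 \quad(\text{proton})
    \qquad
    \text{udd: } +\tfrac{2}{3}-\tfrac{1}{3}-\tfrac{1}{3} = 0 \quad(\text{neutron})

Other combinations do exist (uuu is the Δ++ with charge +2, for example), but they decay quickly rather than sticking around like the proton.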
(Disclaimer: I don't do anything even slightly related to this field, and I somehow managed to skip taking any physics in college except quantum, which I literally slept through and dropped because it was too early in the morning three times before simply giving up. So, my question is probably super super dumb ;P.)
Isn't a sigma a lot? Like, I think that's a standard deviation? If there is even a longshot chance that we might be off on that constant by multiple standard deviations, isn't that certainly a less-determined constant than, say, the acceleration of gravity? I feel like there is no chance in hell we could one day discover that our calculations are that far off for gravity.
Yes and no. It's a lot in the sense that we are way off from what we believed our knowledge to be. But on an absolute scale, it's not a lot. The current determination of the Rydberg constant puts it at 10973731.568160 1/m with an uncertainty of 0.000021 1/m. So a relative precision of ~2*10^-12 (or maybe only 10^-11).
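For what it's worth, the quoted relative precision checks out:

    # relative uncertainty of the Rydberg constant, from the numbers above
    R, dR = 10973731.568160, 0.000021   # 1/m
    print(dR / R)                       # ~1.9e-12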
The standard acceleration of gravity is, btw, defined, so no uncertainty. The gravitational constant G is known only to 10^-5 or so.
Is the gravitational constant something that tells us a lot about quantum mechanics? Since it derives from mass?
What I mean is, just like the sun radiates in the EM spectrum and tells us a lot about the properties of those fields, does gravity do something similar?
Your correction is wrong... The size of the two objects is irrelevant. The only variables are their respective masses and the DISTANCE between them.
If it's worth correcting people, it's worth correcting them correctly... Don't you agree?
Also... The previous poster was pretty obviously talking about the constant of Gravitation (https://en.m.wikipedia.org/wiki/Gravitational_constant). His way of phrasing it is a common English shorthand that I see frequently enough that it's a well understood usage. They may not have used precise language, but their phrasing was definitely less misleading than your (incorrect) correction.
I assumed the gp was thinking about 9.8 m/s^2, because in the English-speaking non-physics-expert world, this is called the acceleration due to gravity.
You’re right that it’s distance, of course, but I was coming from a place of trying to be helpful, and thought that if they were talking about terrestrial physics, it’d be easiest to imply it depended on the size of the earth (which determines our distance from it) and your height.
I don't buy that you were coming to this from a place of trying to be helpful. I know that here on HN, we're supposed to assume good motives in other posters, but I just can't do it, here.
I think you were trying to score points, and get an ego fix by correcting someone else.
Not so much fun when someone else dunks on you, though, is it?
That’s not something that could change based on experiment though. We define standard gravity as 9.80665 m/s^2 but the actual value will vary considerably based on location on the earth.
I think you have a misunderstanding about sigma.
When describing the measurement of a particular physical constant, the "standard deviation" is something that changes as we get better at making measurements. It basically means "If all our assumptions (e.g. assumptions about how good our equipment is, uncertainties about other physical constants) are correct, then it is unlikely that we would have made the measurements we did if the true value is not within 2 standard deviations of the result we got".
When a more accurate measurement is made, then "1 standard deviation" gets smaller, so we know the value better, but it's always true to say "we know the value to within a few standard deviations (given some assumptions made by experimenters)" . If it turns out the measurement was wrong by several standard deviations, then it's very likely that some assumptions were wrong.
You are absolutely right. For deviations beyond, say 4 or 5 sigma, it's much more probable that it's not a statistical fluke, but a systematic error. Assumption wrong, experiment wrong, theory wrong or something like that.
We expect that measurements land within +-1 sigma of the true value with ~68% probability (and 95% for +-2sigma). Implicitly, we assume a Gaussian distribution (often the case to good approximation, at least for small deviations), and also that we can invert the sentence: the true value is within the error band around the measurement with that probability.
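In case those coverage numbers sound arbitrary, they come straight out of the Gaussian assumption; a two-line check:

    import math
    # P(|x - mu| < k*sigma) for a Gaussian is erf(k / sqrt(2))
    for k in (1, 2, 3, 5):
        print(k, math.erf(k / math.sqrt(2)))
    # 1 -> ~0.683, 2 -> ~0.954, 3 -> ~0.997, 5 -> ~0.9999994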
The latter says "The new result implies that earlier attempts to measure the proton’s radius in electronic hydrogen tended to overshoot the true value. It’s unclear why this would be so" and it seems that these researchers have now shown why.
The value of the proton charge radius in itself is pretty much irrelevant, otherwise it wouldn't be so hard to measure.
As scientists we want to know if our understanding of nature is correct. To test this, we measure the same quantity, for example the proton charge radius, in different ways. If the underlying theory is correct, the results should agree within the experimental uncertainties.
Since 2010 there was a big disagreement between the proton charge radii measured by hydrogen spectroscopy and electron/proton scattering (which roughly agreed at the time), and a much more accurate measurement using muonic hydrogen spectroscopy. This has led to a lot of excitement, since discrepancies could be a hint of new physics. Since then, more accurate hydrogen spectroscopy experiments have been performed and most agree with the muonic hydrogen value. This probably indicates that the discrepancy is due to underestimated error bars in the old measurements.
In contrast to laser spectroscopy which gives relatively direct results, getting the charge radius out of electron scattering data is notoriously hard. Different groups have found different charge radii from the same data for a long time.
Most likely it indicates protons have devalued their currency slightly. Probably not a big deal for now but we should keep a close eye on it in case it happens again.
Is this going to mean going back and looking at a bunch of old experiments and slapping your forehead and saying, "god dammit, THAT'S why it didn't work" like I do with my code when I find a wrong sign or off by one error? Or even worse, saying "how the hell did that ever work?" That one... ooooh, that one.
> Or even worse, saying "how the hell did that ever work?"
The most unsettling feeling. I often have that feeling when doing the devops and infrastructure part of my job, e.g. encountering paradoxes in the dark dreary bowels of systemd.
The idealist and perfectionist in you wants to keep digging deeper to arrive at a proper understanding. The realist and lazy SOB in you wants to slowly back away and pretend this never happened. This dialectic hopefully guides you toward a happy compromise, somewhere down the middle.
The more critical the infrastructure you work on, the less you will be willing to take the "slowly back away" route. Because as long as you don't understand every single detail of the failure, that feeling of "what horrible thing do I not understand here" won't go away, and you dread having a major "oh... crap, so that's what I did not understand" moment when it's inevitable.
This week, we had such a failure when testing a change. The root cause was found eventually, but what two extremely good engineers plus me still could not figure out was why that problem was only surfacing now, after years of having that buggy code in there. Very important code. I did not want to risk some obviously unknown property of it being our demise later on.
Fortunately, in this case it just turned out that the seemingly unrelated changes did, after all, hold the now failing thing differently than before. As it had been used, the problem was masked entirely, and the code did work well for all those years.
But if protons shrank, wouldn't the yardstick also shrink seeing as it's also composed of protons? These measurements taken didn't use a physical yardstick.
OK, another dumb question from a neophyte here: doesn't the Heisenberg uncertainty principle make it impossible to know the size of a proton for sure? It seems like if you cannot know both the position and momentum of a particle, then measuring its size exactly would be impossible?
Thanks in advance to anyone willing to take the time to give me an ELI5 on this :)
How big is the smell of a baking pie? The concentration of pie molecules in the air is not a binary yes/no function of position, but at the same time the volume is definitely smaller than the county, and larger than the kitchen. Fuzzy-edged objects have some sense of size but there is some freedom in where to define their edge.
For the proton, the "size" is defined as a length-valued parameter in the function that expresses the charge distribution through space. The parameter is objective - but its association with the word "size" is imprecise. It is necessarily imprecise, not because of anything quantum or even unfamiliar, but because there's not another English word for the scale of objects without hard edges.
If I'm blindly shooting a lot of bullets at a lot of ducks, I should be hitting some ducks. If I then count the number of shots fired and the number of ducks dead on the ground I should be able to calculate the size of ducks with high precision.
The Heisenberg principle only makes it impossible to know the exact size of a single proton in a particular moment in time. But it doesn't say that you can't calculate the average of millions of protons, for example.
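Here's a crude Monte Carlo of exactly that idea, with made-up ducks (all numbers arbitrary): shoot randomly at a field, count hits, and back out the duck radius from the hit fraction.

    import math, random
    random.seed(1)
    FIELD, R_TRUE, N_DUCKS, SHOTS = 100.0, 0.5, 100, 100_000
    ducks = [(random.uniform(0, FIELD), random.uniform(0, FIELD)) for _ in range(N_DUCKS)]

    hits = 0
    for _ in range(SHOTS):
        sx, sy = random.uniform(0, FIELD), random.uniform(0, FIELD)
        if any(math.hypot(sx - x, sy - y) < R_TRUE for x, y in ducks):
            hits += 1

    # hit fraction ~ N_DUCKS * pi * r^2 / FIELD^2  =>  solve for r
    r_est = math.sqrt(hits / SHOTS * FIELD**2 / (N_DUCKS * math.pi))
    print(r_est)    # close to the true 0.5, up to statistical noise

Scattering experiments are of course far more subtle than this, but the "infer a size from how often you hit" logic is the same.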
Thanks for all the good discussion. This actually makes sense, of course, now that I think about it: I am sure they are crashing millions of particles together and doing some analytics, so based on that we know about what the size is. Sort of like how, when we do analytics online, we know the approximate height of, say, all Amazon customers who purchased jeans. OK, dumb analogy, but it works for me :)
I'm an ignorant layman here, but the uncertainty principle only says that if you're sure about the size/location, then you're very unsure about the momentum. It's possible that measuring its size does not need much certainty about its momentum, so they can get the accuracy they want.
We're at quantum scales and the very notion of "size" is rather ill-defined.
The position of a proton is, as a matter of fact, a probabilistic affair. So when you talk about size... if you don't even know where the darn thing is, how are you going to measure where it starts and where it ends?
It is a misconception that things are ill-defined just because they are at a quantum scale. The proton has an electric charge distribution. The definition of the proton size used here is the root mean square charge radius of that charge distribution.
Indeed, but it's actually a little bit hard to define what that charge distribution is (or in which frame...). The way it shows up, both in the cross section for elastic lepton scattering as well as in spectroscopy, is via the related quantity "form factor", as the slope at zero four-momentum transfer. While the form factor can be thought as the Fourier transform of the charge distribution for heavy objects (say, iron nuclei), for the proton, this becomes dicey.
Did you happen to know that the difference between 1 lightyear and 1.000001 light years is 9.4605284 × 10^24 femtometers? That means they're not at all similar distances on an astronomic scale!
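If anyone wants to check that number (using the usual ~9.46 × 10^15 m per light year):

    LY_M = 9.4607e15                 # meters in one light year (approximate)
    diff_fm = 1e-6 * LY_M / 1e-15    # 0.000001 light years, expressed in femtometers
    print(diff_fm)                   # ~9.46e24 fm

i.e., a huge number of femtometers, but still just one part in a million.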
Not sure why my og comment was downvoted. In absolute terms, the difference in size of the proton is quite large. In relative, the difference in size is quite small. And that blows my little sapien mind.
I thought that someone had a measure for a thought and that a long thought was measurably shorter than a short(?) thought, by the length of a proton.
This then went off into a consideration of neural message length, etc.
Cosmic black hole of thought in a split second.
It’s easy to drop that link. Much harder to deny the math that works perfectly. But probably it’s even harder to convince you to take a serious look at it :)