This is neat, but I would take it with so many grains of salt.
We don't know what dark matter is. We have some ideas. XENON1T was/is testing some of those theories about dark matter.
We know next to nothing at all about dark energy. We're pretty sure it's a thing, or else our model of the universe is just completely wrong and it's not anything at all.
So if XENON1T finds something that doesn't fit their model of dark matter, going out on a limb and saying "I dunno, maybe it's dark energy?" is a fun thought experiment and model, but at best it might lead to further experiments to see how likely that model is to be true. And from what I'm reading, that's the attitude the scientists behind XENON1T are taking here.
But alas, we all know that tomorrow we can expect pop-sci articles saying "DARK ENERGY HAS BEEN SOLVED".
With all due respect to theorists, everyone in particle theory and cosmology is doing this sort of thing. I remember in grad school: whenever the LHC announced any sort of blip, cohort-mates down the hall would rush to turn out a preprint from nothing within a week, for the chance to be the one whose model fits the discrepancy. One theorist even told me the lack of experiment was "good" because they had space to keep churning out nice theory papers without having to worry about being shot down.
Occam's razor suggests looking at mundane explanations first. Even beyond dark energy candidates, it looks like axions are another explanation that is better established in the theory world (for what that's worth...), so absent more data it's hard to call this an observation of anything yet, without elimination of the alternatives.
>> One theorist even told me the lack of experiment was "good" because they had space to keep churning out nice theory papers without having to worry about being shot down.
"Scientists" are forgetting the distinction between hypothesis and theory. They're also constantly looking for "new physics" and using the word "novel". Nobody seems to care about using what we have to explain what they see.
Sunny Vagnozzi can't really be said to be suddenly coming up with the idea - there are several obviously related preprints at https://arxiv.org/a/vagnozzi_s_1.html
I'd like to give some broad background on the link at the top of this discussion.
If one substitutes "dark energy" with "quintessence field driving the accelerated expansion" one gets closer to what the press release (and associated paper, found at https://journals.aps.org/prd/abstract/10.1103/PhysRevD.104.0... ) discusses. The "quint" in "quintessence" refers to a "fifth fundamental force", which I'll get to below.
As early as the 1920s (per https://arxiv.org/abs/1211.6338 , which is a really good history of science) relativists were discussing whether some long-range force could stabilize a universe filled with dense matter, which would otherwise have a tendency to collapse over surprisingly short timescales (e.g., attempts to measure the age of the Earth by studying radioisotope ratios were coming up with billions of years of planetary age, meaning Earth should already have been squashed into the remains of a collapsed universe). Einstein preferred a geometrical solution, the cosmological constant, to model the obvious non-collapse.
With the 1990s discovery of the accelerating expansion ( https://en.wikipedia.org/wiki/Accelerating_expansion_of_the_... ) this preference has been vindicated in the effectiveness of the standard cosmology, \Lambda-CDM, where \Lambda is the cosmological constant. It captures in one variable the measured value of the accelerated expansion, even taking into account disputes over the exact value of the Hubble constant.
The bulk scatter of redshifting luminous matter in the universe is best approximated by the Friedmann-Lemaître-Robertson-Walker equations, with the Robertson-Walker term being a metric describing a universe that, if one considers 2d-plane-slicing of space, is like a stack of plates, where each plate is stacked upon an earlier plate, and in turn supports a stack of later plates. The Robertson-Walker metric has parameters which define how flat each plate is -- typical dinner plates, tea saucers, and so on tend to be curved so that liquids don't roll off the edges. (We observe distant galaxies "rolling out of sight", growing too dim and too red to see.) The R-W metric also has parameters which define the evolution in size from one plate to the next. Plates may get bigger towards the future; stay the same size; decrease; or even oscillate; depending on the parameter function. Bigger here means that we consider the radius from the centre of the plate to the edge. When we add the Copernican Principle, which says our model should not require that Earth be at or even very close to the exact centre of the cosmos (if there even is such a centre), we take this to mean the distance from us (or any comparable observer) to the observer-centred cosmological horizon.
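For readers who want the textbook object behind the plate-stacking picture, the Robertson-Walker line element is (standard form, with a(t) playing the role of the plate size and k the plate flatness):

    ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\,d\Omega^2\right]

Each constant-t slice is one "plate"; the function a(t) says how the plates scale from one to the next.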
In our universe, we have exceptionally flat "plates" stacked in such a way that earlier plates are much smaller than later plates. The "plates" at this age of the universe have a diameter of nearly a hundred billion light-years. The accelerated expansion means that the growth in the future direction is superlinear. And of course, we extend the 2-d planar plates to a 3-d volume.
However, we can go the route that Einstein did not prefer, namely that something non-geometrical is at work driving the accelerated expansion of the universe. In that case, we probably retain the ideas from Robertson-Walker and stack up "plates", but choose a function where either all the plates are the same size, or where the plates grow purely linearly -- no acceleration of the expansion. (The latter approach is how cosmology was modelled when the expansion was believed to be inertial, driven by some early-time impulse that went away around the time of the hot big bang).
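As a minimal sketch of the difference (illustrative round numbers, not a fitted cosmology), one can integrate the first Friedmann equation for a flat universe and compare a decelerating matter-only history with an accelerating \Lambda-dominated one:

    # Sketch: scale-factor histories a(t) for toy flat FLRW universes.
    # Parameter values are illustrative, not fitted to data.
    import numpy as np

    H0 = 70e3 / 3.086e22  # Hubble constant in 1/s (70 km/s/Mpc)

    def dadt(a, omega_m, omega_l):
        # First Friedmann equation, flat universe:
        # (da/dt)^2 = H0^2 * (omega_m / a + omega_l * a^2)
        return H0 * np.sqrt(omega_m / a + omega_l * a**2)

    def evolve(omega_m, omega_l, t_end_gyr=30.0, steps=30000):
        # Crude forward-Euler integration from a small initial a.
        dt = t_end_gyr * 3.156e16 / steps  # Gyr -> s
        a, history = 0.01, []
        for _ in range(steps):
            a += dadt(a, omega_m, omega_l) * dt
            history.append(a)
        return history

    matter_only = evolve(1.0, 0.0)  # decelerating, a ~ t^(2/3)
    lcdm        = evolve(0.3, 0.7)  # decelerates early, accelerates late
    # A "coasting" a ~ t history (the old inertial-expansion picture
    # mentioned above) would sit between these two at late times.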
To this non-accelerating universe, we have to add something to make galaxies continue to accelerate apart, rather than drifting apart inertially, or recollapsing in the distant future. We add nonzero values to the tension (aka pressure, but negative) components of the stress-energy tensor. That tensor is really a tensor field where each point in spacetime has a tensorial value which describes the flow of energy-momentum in and out of that point, including in the timelike direction. In general the tension term must be very small, or the accelerated expansion would look very different, and galaxies would look even more different. The tension must vanish inside stable gravitationally bound structures. Moreover, the tension term must also appear in vacuum, in deep inter-galaxy-cluster space.
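Concretely, in the standard FLRW relations the acceleration equation for a perfect fluid of density \rho and pressure p is

    \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}

so driving acceleration without \Lambda requires p < -\rho c^2/3, i.e. an equation-of-state parameter w = p/(\rho c^2) < -1/3. The cosmological constant behaves like w = -1; the "tension" described above is stress-energy in that part of the range.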
When we study mechanisms that can plausibly generate such stress-energy, we tend to be drawn towards our best fundamental theories for known forms of matter, namely the Standard Model of particle physics. We'd want to extend the Standard Model by introducing some particles which can be interpreted as carrying and feeling a new fundamental force, to go with electromagnetism and the nuclear forces. This is usually called a fifth force, though it might be better just to call it yet another long-range (as in practically infinite range) force comparable to electromagnetism or Newtonian gravitation.
Now, an aside. Manhattan is not undergoing cosmic expansion. The Earth-Moon system is not expanding. Neither is our solar system to the best of our ability to measure. Our galaxy and the cluster that it's in do not appear to be expanding. Distant galaxy clusters appear to be bound together against the cosmological accelerated expansion.
In the standard cosmology we are forced to say that the cosmos in bulk is best described by the Friedmann-Lemaître-Robertson-Walker model, but that local systems, like Earth, are best described with exterior Kerr metrics, or a Lemaître-Tolman-Bondi collapsing metric, which we can stitch together through an annoying process using the Israel-Darmois junction conditions; alternatively, we can use something inhomogeneous that captures a set of these Kerr/Schwarzschild-like metrics all at once. This is no big deal; general-relativists do this all the time at much smaller scales with good results. Cosmologically, for instance, "swiss-cheese" models are reasonably powerful, where the "cheese" is the increasingly sparse expanding space and the "holes" are regions where the matter is collapsing into ever denser arrangements.
A fifth force model also has to avoid an expanding solar system. It can do this in at least two ways. Firstly, one can distribute the sources of the fifth force carefully in such a way that they are almost exclusively found outside galaxy clusters. (We take a similar approach with Dark Matter, which is found mainly inside galaxy clusters; the exclusiveness of Dark Matter is less strict than what would have to apply to a fifth force particle family). Alternatively, one could create an interaction with matter which turns the fifth-force charge into something else, like a low-mass particle such as the axion. This process is called "screening" when a large object like the sun (or its magnetic field) encourages this conversion. One could then compare the environment around the sun and Jupiter to see a mass-dependence or magnetic-field-strength dependence on such screening, or even look to more extreme objects like magnetars.
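One concrete version of this mechanism, and the one the paper discussed here considers, is the chameleon: schematically (in the usual notation), the scalar field's effective potential picks up a term from the local matter density \rho,

    V_{\rm eff}(\phi) = V(\phi) + \rho\, e^{\beta\phi/M_{\rm Pl}}

so in dense environments the field sits in a steep well and acquires a large effective mass, making the fifth force it mediates extremely short-ranged there -- screened -- while remaining long-ranged in good vacuum.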
This paper by Vagnozzi chases a small deviation from expected results towards this type of fifth-force screening by the sun, driven by theoretical particles which [a] seek to solve a problem in the Standard Model of particle physics, [b] are decent candidates for at least some of the dark matter energy-density, and [c] are still, as far as I know, theoretical rather than discovered. https://en.wikipedia.org/wiki/Axion
At this stage, this idea is a more complicated way of capturing the results we have from many astrophysical observations in a way that must practically completely reproduce what we get from the FLRW equations of \Lambda-CDM. More complicated because it adds parameters to the energy-density of the standard cosmology. It also almost certainly would require additional parameters in the Standard Model of Particle Physics. These are not impossible demands, but will be difficult enough to do in practice that few people will make realistic attempts without more compelling results than those reported by Vagnozzi et al.
This may be one of the best HN comments I've ever read. Thanks for posting. I do have a question though. Probably a stupid one, as IANAP(hysicist).
Regarding this:
> Now, an aside. Manhattan is not undergoing cosmic expansion. The Earth-Moon system is not expanding. Neither is our solar system to the best of our ability to measure. Our galaxy and the cluster that it's in do not appear to be expanding. Distant galaxy clusters appear to be bound together against the cosmological accelerated expansion.
Do you mean that Manhattan (and the Earth-Moon system, etc.) literally are not expanding even one smidgen, or do you mean that at that scale the expansion is just too small to (notice|measure|care about)? I ask, as I'd always interpreted "expansion" as being space itself expanding, and thought that it happened at all scales (including now that I think about it, inside of atoms, which could cause some weird stuff???). But are we saying that expansion isn't something that happens at all scales then? Eg, that only the space between galaxies (or between galaxy clusters) is expanding? If so, that makes the whole expansion issue feel even weirder than ever to me, and it felt weird enough before!
IANAP, but I've been through early courses in QM & GR. The negative value of the tensor (could be) dependent upon the local curvature, such that in the region of massive objects the tensor is 0 or positive (which means "regular" GR dominates); and, "very far away" (in the cheese), the tensor is negative, and causes expansion.
> Do you mean that Manhattan (and the Earth-Moon system, etc.) literally are not expanding even one smidgen, or do you mean that at that scale the expansion is just too small to (notice|measure|care about)?
More the former.
In General Relativity, we have several exact solutions of the Einstein Field Equations, which basically means we have a lot of standard spacetime metrics. The Schwarzschild metric for a central spherically symmetrical non-rotating mass is one. The Kerr metric, which is essentially an axisymmetric deformation of Schwarzschild, is another. There are related metrics which incorporate gravitationally collapsing matter into a spacetime like these.
Many such solutions are asymptotically flat. Very roughly, the inverse square law for gravity means that at a large distance you can ignore the gravitation of a central mass (which grows more and more pointlike in gravitational behaviour with increasing distance). Eventually you're in a region where the gravitational contribution can be ignored; in the language of General Relativity you are in effectively flat spacetime. The metric, as a function of distance, goes asymptotically to flat. There are obvious analogies with electromagnetism: distant stars are dim and pointlike, and really distant ones can be clumped together into larger structures, with their clumped-together light curves being an example of an aggregated observable. (Indeed, even at the level of a single star we are aggregating lots of tiny events into one spectrum equipped with emission and absorption lines, both for close-up stars and for distant ones.)
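For concreteness, the Schwarzschild line element (standard form) shows the asymptotic flatness directly:

    ds^2 = -\left(1-\frac{r_s}{r}\right)c^2\,dt^2 + \left(1-\frac{r_s}{r}\right)^{-1}dr^2 + r^2\,d\Omega^2, \qquad r_s = \frac{2GM}{c^2}

As r \to \infty the r_s/r terms die off and the metric tends to flat Minkowski spacetime, mirroring the way a distant star's light fades toward pointlike dimness.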
We have some procedures available that let us stitch together asymptotically flat spacetimes with a "thin shell" mathematical boundary used to translate values from one side of the stitching to another. We can thus build up our solar system as a hierarchical stitching-together of Kerr-like metrics (one for each rotating body) each of which can "meet" another at some relatively flat-space point.
We can even stitch Kerr-like metrics into a broader spacetime. The swiss cheese cosmology approach does this, and that technique traces back to the 1930s: https://en.wikipedia.org/wiki/Einstein%E2%80%93de_Sitter_uni...
(it has of course been refined over the decades).
Careful observation of our solar system supports this hierarchical stitching method reasonably well, but only if the far regions away from bodies are asymptotically flat. If we add almost any amount of metric expansion -- much less than \Lambda -- to the otherwise asymptotically flat areas around the Earth, the moon's orbit changes dramatically. Likewise, if we change it within our solar system, things look very different in fairly short order. The same so far holds remarkably well for larger structures that are gravitationally bound, up to galaxy clusters.
A couple of decades ago, there were good astrophysical-observation reasons to think the hierarchical "stitching" process was broken enough that an inhomogeneous metric would be needed from the start (throwing away lots of useful symmetries). Those reasons have faded with subsequent observation.
There is still some small wiggle room that allows for things like fifth-force screening to be taken seriously, however one has to do headstands to keep Manhattan (or Earth-Moon or Earth-Sun) from expanding measurably.
Measurability here is very tight. Lunar laser ranging, very long baseline interferometry, and even GPS and friends keep tightening the bounds on how much expansion the "true" metric that Earth sources can allow compared to its approximate Kerr metric.
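To put rough numbers on that tightness, here is a back-of-envelope sketch (deliberately naive: it applies the Hubble flow to a bound system, which is exactly what does not happen):

    # If the Earth-Moon distance naively followed the Hubble flow,
    # how fast would the Moon recede? Illustrative only.
    H0 = 70e3 / 3.086e22         # Hubble constant, 1/s (70 km/s/Mpc)
    d_moon = 3.844e8             # mean Earth-Moon distance, m
    seconds_per_year = 3.156e7

    drift = H0 * d_moon * seconds_per_year  # metres per year
    print(f"naive Hubble drift: {drift * 100:.1f} cm/yr")  # ~2.8 cm/yr

    # Lunar laser ranging tracks the distance at roughly millimetre
    # precision, and the observed ~3.8 cm/yr recession is accounted
    # for by tidal dynamics, leaving no room for a centimetre-scale
    # cosmological drift on top.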
This is why I think it is safer to say that it's not expanding at all, rather than that we will find expansion if we look closer and closer.
As a sibling comment has noted, there are also constraints from particle physics and chemistry. Those constraints also arise in astrophysical systems like megamasers, planetary nebulae, stellar deflagrations, supernovae, binary and millisecond pulsars, and so forth. The wiggle room for a suppression, rather than extinction, of cosmological expansion keeps tightening, and the constraints come from a diversity of lines of evidence.
However, it is reasonable to qualify the "it's not expanding" with "all our measurements to date are consistent with exactly no expansion in the solar system, and we have lots of rather different types of measurements all saying the same thing". I'm not sure that's as helpful for understanding the physical neighbourhood around here, or in galaxies and star systems generally, though.
> I'd always interpreted "expansion" as being space itself expanding, and thought that it happened at all scales
Observations are consistent with expansion happening only in really good extragalactic (extra-galaxy-cluster, even) vacuum.
This is really easy to explain with a non-accelerating expanding universe.
The mechanism for the (actually accelerated!) expansion is not known, but is usually what is meant by "dark energy".
This is a highly conventional take on the matter. I'm not offering up any sort of pet hypotheses, and I generally avoid doing so anywhere like HN as explaining the standard theory is more interesting (even to me) anyway.
The sky is full of weird stuff that can be seen. Check out the "variable universe" -- astronomers (see e.g. https://asas-sn.osu.edu/atlas/visualizations#star-map-panel ) keep finding bizarro things to think about, even far away (in a theory-space sense) from the dark matter / dark energy sectors, that may test theories about those sectors.
You'd expect that as visible matter gets weirder, the invisible stuff must get weirder still in proportion. Oddly, that is not really the case.
Thank you, you're a fantastic writer and clearly have some expertise. I am not on your level but do spend a decent amount of time trying to deepen my understanding of physics.
This got me thinking: would one way to explain expansion possibly be that gravity is slowly getting stronger over shorter distances, or that the fabric of spacetime itself is not perfectly rigid -- not only in the "depth" component like the classic trampoline analogy, but also in the "length/width" component? Galaxy filaments are thinning, so if you think of the center of a supervoid surrounded by filaments on all sides, that void is being stretched apart in every direction, at some level that is so fundamental that it "creates more space". Then again, everything everywhere is surrounded by filaments, and all space is being pulled apart by the same reasoning, but if there is anisotropic mass close enough, this overrides the creation of new space.
The tl;dr is that the implosion of early dense baryon clouds created shockwaves which threw most of the matter (including dark matter) out of the regions that later became cosmic voids.
Few voids are outright surrounded by denser parts of the cosmic web, and there are many "invasions" of dense filamentary structures into large voids. Additionally, at much larger scales the difference between relatively empty space and relatively full space blurs away (WiggleZ Dark Energy Survey, SDSS-BOSS), preserving our ability to work with the spatially homogeneous and isotropic distribution of matter in the concordance cosmology. Thus the matter can be modelled as a uniform "dust" whose individual motes move only with the expansion, and even as a set of perfect fluids which carry attributes such as pressure, density, velocity of sound, and effective equation of state.
It is pretty reasonable to expect that ongoing detailed sky surveys will find small numbers of galaxies and/or quasars in most voids, as any reasonably dense concentrations of matter left behind from the BAO would tend to collapse gravitationally. The orbits in such in-void structures will be important observational targets. One should expect that these galaxies (and the subsystems within them) do not expand with the cosmos any more than our galaxy or our solar system does, and that their peculiar velocity against the cosmological comoving coordinates is low, like galaxies clearly outside voids.
It seems a bit nuts that people are chasing Standard Model extensions for dark energy when they haven't even reconciled particle physics with gravity yet -- or have there been recent advances?
I never heard of string theory being even somewhat formally accepted. Hm, and now they have things like loop quantum gravity. I guess they are still trying to crack that nut.
At least we can measure gravity in a local/controlled setting...
> But are we saying that expansion isn't something that happens at all scales then? Eg, that only the space between galaxies (or between galaxy clusters) is expanding?
Yes, that's what is being said here. The space inside you and me and our atoms doesn't expand, but the empty spaces between galaxies do.
> Yes, that's what is being said here. The space inside you and me and our atoms doesn't expand, but the empty spaces between galaxies do.
It is very weird, and non-intuitive.
Why is it weird that localized entities don’t expand in the same way as galaxies do? Why would my body expand in the same way that the universe would when exposed to a Big Bang?
>(including now that I think about it, inside of atoms, which could cause some weird stuff???).
If the distance between electrons and protons were expanding over time, that would be hard to square with quantized energy levels for electron orbitals. I'm not sure you could avoid having material and chemical properties change over time.
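For reference, the Bohr-model energy levels of hydrogen depend only on fundamental constants:

    E_n = -\frac{m_e e^4}{8\varepsilon_0^2 h^2 n^2} \approx -\frac{13.6\ \text{eV}}{n^2}

There is no slot for an ambient scale factor in that expression, and spectral lines from very distant (hence very old) sources match laboratory values once the overall redshift is removed, which argues against any atomic-scale expansion.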
The amount of time and care that went into your response is a beautiful thing. Thank you for your contribution. In fact, your comment history is full of these kinds of careful, well-considered responses.
It makes me happy that people like you exist in the world, ones that combine expertise with clarity of thought and expression. Would you be willing to share emails?
Sounds like you know a thing or two about physics. In your opinion, are particles like higgs bosons or gravitons or even electrons real?
Coming from a coding perspective,
making a physics simulation is about tradeoffs in effort/performance/complexity/fidelity, and there are often multiple ways to implement the same feature. From that perspective, there's no reason to expect a 1-to-1 mapping between theory and observed reality.
How solid are the grounding posts that anchor the idea of particles to reality? Are electrons more real than bosons? Are they all just an implementation detail that makes the math look good?
That's very helpful. Based on the paper, which seems to state that they have a 2σ significance level, this seems very preliminary. (I am absolutely not a physicist!)
> As an example, we examine whether the electron recoil excess recently reported by the XENON1T collaboration can be explained by chameleon-screened dark energy, and find that such a model is preferred over the background-only hypothesis at the 2.0σ level, in a large range of parameter space not excluded by stellar (or other) probes.
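For calibration, here's what a 2.0σ preference means numerically (a quick sketch using the common one-sided Gaussian-tail convention):

    # Convert a significance in sigma to a tail probability.
    from scipy.stats import norm

    for sigma in (2.0, 3.0, 5.0):
        p = norm.sf(sigma)  # one-sided upper-tail probability
        print(f"{sigma:.1f} sigma -> p ~ {p:.1e}")

    # 2.0 sigma -> p ~ 2.3e-02  (about 1 in 44; suggestive at best)
    # 3.0 sigma -> p ~ 1.3e-03  (often called "evidence")
    # 5.0 sigma -> p ~ 2.9e-07  (the usual particle-physics discovery bar)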
Yes - the Phys.org article also states that they need to replicate these results, and describes the kinds of projects underway that can assist with this sort of science.
I would bet at 10-to-1 odds that this will turn out not to be dark energy. We haven't observed any dark energy effects at scales of less than billions of light-years. I would've been much less skeptical if they had announced that they found dark matter.
> About 27% is dark matter—the invisible force holding galaxies and the cosmic web together—while 68% is dark energy, which causes the universe to expand at an accelerated rate.
I understand that matter slows and energy accelerates the expansion of the universe but how is this ratio calculated? It seems like we would need to know the strength of dark energy to arrive at this reasoning.
It’s based on actual observations, with some assumptions about the nature of the universe. As you said, dark matter is detected as extra "holds stuff together" force beyond what the observed masses would produce, across a host of phenomena (see the Wikipedia article on dark matter for more detail). Dark energy, the energy required to make the universe expand at an accelerated rate, is inferred from the distance-redshift relation, measurements of the cosmic microwave background radiation, and the theoretical additional non-matter, non-dark-matter energy required to form an "observationally flat universe".
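The bookkeeping behind those percentages is the Friedmann budget constraint (standard form; the numbers are the rough concordance values):

    \Omega_{\rm baryon} + \Omega_{\rm DM} + \Omega_\Lambda + \Omega_k = 1, \qquad \Omega_k \approx 0\ \text{(flatness)}

With flatness pinned by the CMB, jointly fitting the supernova distance-redshift relation and the CMB/large-scale-structure data yields roughly 5% baryons, 27% dark matter, and 68% dark energy.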
The basics for both of these are discussed on the Wikipedia for both subjects.
Disclaimer - I am not a physicist but I did stay at the holiday inn express last night and I do watch a lot of PBS Spacetime on youtube.
I actually found both Nature and Science to be excellent at covering science news. They are famous for their journals, but they also have science news departments.
Both of which have excellent weekly podcasts covering their articles and other science news of the week.
Oh, and speaking of Science News, that is (was?) an excellent weekly with short articles on recent results. I just haven't subscribed in a number of years.
This is why I don't believe there is scientific stagnation. New discoveries and theories are constantly being made and proposed. It's only because the problems have gotten harder that progress seems slower.
This is not how I understood the argument for scientific stagnation, though; wasn't it more to do with 'breakthrough' discoveries that somehow radically disrupted / changed our way of life?
In that regard I'm inclined to believe there is some sort of stagnation, though not necessarily the fault of the community or researchers. It could very well be a mix of what you said about the problems being harder, and that between 1900 and 2000 we picked all the 'low-hanging fruit' in physics, so that we now need lots of patient, 'baby step' type improvements to get to a new era where the technology is powerful enough to make big leaps again.
Kind of like how Deep Learning research stagnated due to lack of data volume and processing power.
This is my perspective, but I think it's totally up for debate and I'm keen to hear different opinions.
Between ~1880 and ~1920, the physical sciences made huge, crazypants leaps both in theory and experimentally. The rest of the 20th century was spent catching up with all the implications of those leaps. Now, the sciences have a set of very powerful ideas that have a bad habit of producing the right answers as far as anyone can see, but which have very visible holes and don't fit together. Thus, stagnation.
On the other hand, it may just be a return to the normal status quo.
Whether it's the fault of the community or just a natural consequence of the low-hanging fruit being picked away will only be possible to know in retrospect, really. Is the issue that we don't have the technology to figure things out, or that the community was too stuck in a local maximum of explainability to find the summit where everything is explained? Until the next major breakthrough actually happens, it's hard to know which is the case.
Although in general, I'd say it's possible a big leap never happens again. Given that physics will ultimately be a finite set of rules, if we can explain the vast majority of phenomena accurately, slotting the last few pieces into place might not grant us much. It will feel great for humanity to know, of course, but it's entirely plausible that the reason we have so much trouble figuring these things out is that they're almost completely separate from the human experience. We might figure out quantum gravity, go "that's nice", but if it's only relevant when there are stellar masses involved, we may not be able to use it to change our way of life. Big changes to how humanity lives going forward could be entirely reliant on human invention/ingenuity, not on us learning new facts about the universe we live in.
There are still poorly understood physical phenomena that could prove to be tremendously useful, like superconductors. If quantum gravity leads us to room-temperature superconductors (for example), that would be absolutely earth-shattering for humanity.
My model for determining the number of scientific discoveries indicates that visible, understandable scientific discoveries only account for about 10% of the scientific discoveries out there. I theorize that there must be a remaining 90% of scientific discoveries that are undetectable, unknowable "dark discoveries."
Also, what people consider science is a bit of a continuum. There is the universally objective scientific breakthrough (like the discovery of quarks) on one end, and the universally objective non-breakthrough (like the construction of a simple bridge) on the other, but where does the progress from iPhone 1 to iPhone 13 fit?
So much advancement is going on in or around computing right now that I think in the future we'll sort of look back and consider it closer to science than mere engineering.
No, I disagree. Science is something that's well established and has been around for centuries. Engineering is also well established and has been around for centuries. The iPhone is an engineering advancement and will always be viewed that way; it'll always be in the same category as building the aqueducts. There was nothing scientifically groundbreaking about it, unlike discovering gravity or radiation.
I don't see anything wrong with what Phys.org has done here. It's the world's most sensitive dark matter detector, so I'm happy they're reporting on its findings. This isn't some tabloid-level story; it's making the rounds at universities.
To be clear, they're reporting on an interpretation of a not-understood part of Xenon1T results (a low-energy excess in electronic recoil event rate). Here is the original paper by Xenon1T reporting on the excess: https://journals.aps.org/prd/abstract/10.1103/PhysRevD.102.0...
And no, there's nothing wrong with it, but there are many many such interpretation papers any time an experiment reports something unusual and at most one of them (but probably zero!) might be true :).
I haven't been following cosmology very closely in a while; could someone update my knowledge a bit?
* The only evidence for dark matter is that galaxies and galactic clusters are moving faster than they should be; i.e. they are acting like there is more gravitational mass around.
* The only evidence for dark energy is that the redshift of distant objects is higher than it should be; i.e. they seem to be accelerating away rather than decelerating.
* All of the other properties of dark matter and dark energy are negative: we have not been able to observe anything, so we know what it's not.
> The only evidence for dark matter is that galaxies and galactic clusters are moving faster than they should be; i.e. they are acting like there is more gravitational mass around.
I'm pretty far from this stuff, so I am sure the people studying it know far more than I do.
However, when I read about dark matter/energy and quantum physics, it sometimes reminds me of stories of the geocentric model, where the scientists of the day derived an incredibly complex clockwork geometry of the solar system to account for the strange curves the other planets seemed to trace across the night sky.
I wonder if we're similarly missing a key insight which will make these "strange results" fall into place.
On the other hand, it's entirely possible that we've just reached the limits of what can be found intuitive to our primitive ape brains.
In the present it seems like we've discovered so much, but modern science is still such a young concept in the timeline of humanity. I'd wager we still have many wrong answers, and much of how the universe works is still undiscovered.
I just thought: if fusion makes heavier elements, it makes sense that the universe expands faster... And I don't even know how I got there...
fusion makes elements heavier.
Heavier collections of elements make more gravity.
(is the creation of matter limited in our universe?)
In a bucket with a limited supply of new matter, fusion makes the existing matter clump into pieces which get heavier and heavier. But the substrate will thin everywhere else.
Supermassive black holes make more gravity.
The more compression there is, the more heat is generated,
And matter switches state at some point. (solid -> gaseous -> plasma -> another step -> another step?)
If the matter "becomes" dark energy as a new sort of state, it could get "flung" out like two electrons with the same poles.
Since energy doesn't get lost, there's no other way than to "collect" at the fringes of the universe.
If the universe were like a balloon in a bucket, it would push the boundaries of the bucket indefinitely.
Gravity is kinda like our substrate; not sure if fish are aware of their water being "heavier or less heavy". Not sure if this analogy will ever hold up...
can someone let me know if I got some of it right?
First of all, while fusion makes heavier elements, gravity does not increase this way, since the total energy of the system remains constant, and E=mc^2. It could be that the radius the mass/energy is contained in does decrease, but not all stars end up as black holes, and black holes have no more or less gravity than any other object with the same mass/energy (a quick mass-bookkeeping check follows at the end of this comment).
Then, most of your explanation is carried by the solid -> liquid -> gas -> plasma -> ... -> dark energy idea, which makes no sense. Dark energy is not matter - none of the observations are consistent with any kind of matter in intergalactic space. And matter of any kind will always have gravity, so it can't be responsible for the expansion of spacetime.
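On the first point, a quick mass-bookkeeping check (standard atomic masses, from any isotope table):

    # Fusing hydrogen into helium: the product is lighter than the
    # inputs, and the deficit leaves the star as radiation, so the
    # total mass/energy sourcing gravity does not grow.
    m_H  = 1.007825   # hydrogen-1 atomic mass, u
    m_He = 4.002602   # helium-4 atomic mass, u

    deficit = 4 * m_H - m_He
    print(f"mass deficit: {deficit:.6f} u "
          f"({100 * deficit / (4 * m_H):.2f}% of input mass)")
    # -> about 0.029 u, i.e. ~0.7% radiated away (E = mc^2)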
Unless, when being cooked up inside the black hole, it goes from matter to something else -- like two particles with opposite poles that try to get away from each other?
Matter inside a black hole, by definition, can never leave the area of the black hole. The shape of space beyond the event horizon is simply such that there is no path outside.
And there is no 'something else' beyond matter, except energy, which is not a thing per se, but the potential for other things to move or happen.
They emit Hawking radiation from the event horizon itself, not from the area inside the event horizon. As far as it is theorized, they shrink by emitting this radiation. This phenomenon is poorly understood and likely impossible to study, at least until we have a quantum account of gravity. But what is clear is that a particle that has crossed the event horizon will never again cross it back, the very geometry of space prevents this.