The weight difference in the hand between the 14 Pro in stainless steel and the 15 Pro in titanium is considerable; I use both every day. Sometimes weight is a way to signal solid and premium, like in watches, but given the size of phones and how we use them, being lightweight is really where it’s at.
(OP makes a good point, just going on a slight tangent here)
We really need a term that sits between correlation and causation for situations where data is difficult to come by. There's such a huge rift of meaning between these terms, and too often 'correlation is not causation' gets wheeled out in a room of people who already know that and are trying to figure out the nuances.
How about 'plausal'? As in: it's rather plausible that there is a causal relationship between two things, but causality is hard to prove.
"Air quality and dementia have a plausal relationship".
The bar for plausation is much lower, yet many correlates still won't meet it. "Bad air quality causes dementia" is a categorically different statement than "ice cream sales cause shark attacks", if we establish the category of plausal relationships.
Those two examples relied on heavy investment over many years by the world’s sole superpower to bounce back. See the Marshall Plan and MacArthur’s reconstruction of Japan. These things do not happen automatically, and unlike those examples, there doesn’t seem to be a rich, well-run superpower waiting in the wings to lift anyone back up. I’m sure some relations will normalize post-47, but economic might and attractiveness to the world may not.
Not quite apples to apples though, because you have to take into account what was known at the time each theory was developed (the input), not just the output.
Theory A: fits 7 known predictions but also makes a not-yet-verified prediction
Theory B: fits 8 known predictions and offers no new ones
In this example wouldn't Theory A be better, because all else equal it is less likely the product of overfitting and required more insight and effort to discover? In other words, Theory A used a different process that we know has a higher likelihood of novel discovery.
(Maybe this is a restatement of the simplicity argument, in that Theory A requires fewer predictions to discover it, ergo it is simpler)
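A rough sketch of the overfitting intuition (my own toy example, not from the thread): a flexible model can match every known data point and still do worse on a not-yet-seen one than a simpler model does. Everything here (the linear "true law", the noise, the polynomial degrees) is made up purely for illustration.

```python
# Toy illustration: "fits all known data" is not the same as "generalizes".
import numpy as np

rng = np.random.default_rng(0)
x_known = np.linspace(0, 1, 8)                    # the known observations
y_known = 2 * x_known + rng.normal(0, 0.05, 8)    # underlying law is linear, plus noise

simple = np.polyfit(x_known, y_known, deg=1)      # simple model: imperfect fit to known data
flexible = np.polyfit(x_known, y_known, deg=7)    # flexible model: matches every known point

x_new = 1.5                                       # a "not-yet-verified prediction"
truth = 2 * x_new
print("simple model error:  ", abs(np.polyval(simple, x_new) - truth))
print("flexible model error:", abs(np.polyval(flexible, x_new) - truth))
```

The flexible fit wins on the known points but typically misses badly on the new one, which is the sense in which a theory that merely matches existing data can be the product of overfitting.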
> In this example wouldn't Theory A be better, because all else equal it is less likely the product of overfitting and required more insight and effort to discover?
No, Theory A might simply be a dead end with no new insights to offer. And alas: the universe does not care about insights, efforts, or simplicity.
All else equal, if Theory B is easier to teach (easier for more people to understand) it might have value for that reason. It might also be valuable to teach multiple ways to understand the same underlying phenomenon.
> In other words, Theory A used a different process that we know has a higher likelihood of novel discovery.
How would we measure "likelihood of novel discovery"?
Now to call myself out here: the best way to answer any of these questions is to probe both theories at their limits to find differences in predictions that we can test. It may be that we don't have the right equipment or haven't designed experiments sufficient to do that currently.
Remember that Einstein's GR was validated by its prediction of light deflection and the Eddington experiment, though his initial 1911 prediction was wrong and he refined it in 1915. The 1919 Eddington measurements confirmed the theory.
We should remember though: That only worked out because the 1912 attempt to make the observations (which would have invalidated Einstein) got rained out. Who knows how Einstein's career would have turned out if the 1912 observations had succeeded. Perhaps people would have said he simply over-fit his theory to fit observation.
I don’t think that is implied. It was discovered first, but that doesn’t mean it is necessarily simpler or required less data to discover. Take Newton/Leibniz calculus as a clear example of near-simultaneous discovery reaching the same result through different approaches. Leibniz technically started after Newton, and yet his notation is the preferred one today.
Especially if Theory B is equivalent to Theory A, using it as a replacement seems perfectly fine (as long as there are other benefits).
From a scientific standpoint it might be pointless in some cases, because the goal is "not-yet-known" predictions, but viewed through a mathematical lens it still seems like a valid area of study.
Maybe the process behind creating Theory A is more generalisable towards future scientific discovery, but that would make the process worthwhile, not the theory.
Love it. I'm immediately reminded of the text filters back in the day, like the pirate one that would drop letters, replace them with apostrophes, and turn certain passages into "arr" or "yarr matey".
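For anyone who never saw one of these, a minimal sketch of roughly how such a filter could work (my own approximation, not the original filter's code): swap a few common words for pirate-speak, drop the "g" from "-ing" endings in favor of an apostrophe, and tack on an interjection.

```python
import random
import re

# Hypothetical word substitutions, just to show the idea.
PIRATE_WORDS = {"my": "me", "you": "ye", "is": "be", "hello": "ahoy",
                "friend": "matey", "yes": "aye"}

def piratify(text: str) -> str:
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = PIRATE_WORDS.get(word.lower(), word)
        return repl.capitalize() if word[0].isupper() else repl

    out = re.sub(r"[A-Za-z']+", swap, text)   # swap whole words
    out = re.sub(r"ing\b", "in'", out)        # droppin' letters, addin' apostrophes
    return out + " " + random.choice(["Arr!", "Yarr, matey!"])

print(piratify("Hello my friend, you is sailing today"))
# -> "Ahoy me matey, ye be sailin' today Arr!" (or "Yarr, matey!")
```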
At first glance it's like Hollywood movies announcing they're the best-selling of all time while ignoring inflation. In other words, a ratchet just to get clicks.
However, this is relevant because this is an investor report meant to help people forecast, and this stat helps calibrate readers' expectations of just how fast a product can scale in this day & age, using a relevant comparison of products in the same category that, when launched, offered the same step change in value.
My gripe is not with the relevancy of the data, it's with the chosen comparison. Comparing Google at the beginning of the internet revolution to now, with billions of internet-enabled devices across the world, is not a fair comparison and does not give any meaningful insight.
but that’s precisely the point and it does give insight. Google scaled off of existing infrastructure like computers. Computers scale off of existing infrastructure like electricity.
The point is to compare current era of scaling to the previous era and see how much faster it is.
It’s not comparing Google to OpenAI. It’s comparing the environment that produced Google to the environment that produced OpenAI.
It’s kind of obvious that new eras will produce faster scaling. But what if you ran the numbers and it wasn’t true?
There are plenty of times when this happens and the obvious turns out to be something different. That isn’t the case this time, but that’s the point of research: to back up common sense with evidence.
Also, it is very different to know that it is faster vs knowing it is 5.5x faster. The 5.5x might not be completely accurate, but it’s more precise than just your intuition.
There is wisdom in simple, profound statements that open up new lines of thought. But there is also wisdom in doing research to quantify and make more concrete the things you already know.
One example of research being wisdom is demographics. It’s one thing to know that there are more whites than blacks in the US; it’s another thing to know that there are 200m whites and 40m blacks. The numbers add precision and also validate or clarify your thinking. For instance, maybe you thought blacks should be the second largest demographic since they have been here longest. Not so: Hispanics are at around 60m. Or maybe you knew that already. But if you want to argue with others about demographic growth and what is actually happening with immigration, knowing the numbers is wisdom, and going off of intuition leads to “they took my job” hot takes.
If you continue reading, they're comparing ChatGPT with more companies than just Google. TikTok and Fortnite are also included, for example; both came much later, so I'm guessing you'll feel that's a bit fairer of a comparison.
I think of it as being reactively empathetic instead of proactively empathetic. It comes from a place of incuriosity, and probably from fear of mortality and of bursting the just-world fallacy, among other things. It's a bummer so many are so stingy with their hearts, as though love is some finite resource.