TL;DR: "The scientific method is not about testability or falsifiability per se. It is about trying hard to falsify. Nothing more, nothing less. Truth is to be found in those ideas that survive despite the multitude of attempts to falsify them[...]quantum gravity theories needs to reproduce all known low-energy physics, as well as all known classical gravitational phenomena. In other words: if you ensure your favorite theory of quantum gravity in the appropriate limits reduces to the standard model and Einstein's theory of gravity, your theory has survived all falsification attempts ever undertaken in the history of physics. This is a key point that fails to receive attention in 'string theory and falsifiability' debates."
Let me put on my reductionist hat: what use would such a theory be, if it perfectly matches observed reality but offers absolutely nothing new?
String theory has often been called mental mstrbtn (spelled that way because evidently using the real word gets such comments hellbanned); should anyone get paid for pleasuring their minds in this way?
In all the cases that have been historically tested, it agrees with observations, as the article claims. It doesn't mean that's all there is.
At least in the particular example of string theory, it is way more than that.
1. Firstly, it is more a framework than a specific model. E.g., you cannot test statistical mechanics (a framework) as such; you test the kinetic theory of gases (a model). String theory is still quite nascent: there has been tremendous progress, but the goals are also very ambitious. It might take a few more decades for the research to culminate in a concrete model that provides ripe pickings for practical applications.
2. There are definitely some limits where string theory has added new insights into things we simply had no clue about. (One poster child is the concept of "dualities" between field theories, and the new ways it has given us to understand very practical aspects of quantum field theory.) I do not wish to go into too many details, for fear of digressing.
Sometimes it takes a lot of imagination and a significant period of time before progress in basic science can be applied to human existence. But in the long run, it will invariably alter the situation drastically, in a way we can barely imagine.
1. http://www.newyorker.com/magazine/2014/12/22/material-questi...
2. Amara's law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
Given the possible payoff in the long run, imho, the investment being made today is a trivial amount. E.g., "It's been only half jokingly said that today a third of the GDP is attributable to quantum mechanics." http://www.fnal.gov/pub/today/Augustine2.html
Conceptual simplicity. Newtonian celestial mechanics does not make any new predictions that Copernican mechanics does not (at least before space travel is viable), but it sure is much nicer to work with.
Yes it does. Even assuming you meant "Keplerian mechanics" (circular orbits give horrible predictions for planetary movement), Newton's theory of Universal Gravitation provided much more accurate predictions of the Moon's movements.
Not the movements of Mercury though. We had to wait for Einstein for that.
Sorry, I'm completely forgetting the name of the theory I'm thinking of. It involves planets moving on circles on harmonic spheres (as opposed to simply following great circles) and accounts fairly well for celestial motion.
It's now mostly forgotten due to being a complete tangent on the way to Newtonian mechanics, but it was pretty clever.
>>> But then the "appropriate limits" are part of your theory.
That's certainly true. I'm not active in that field, but the classic example of an appropriate limit is classical mechanics, which is the appropriate limit of both quantum mechanics and special relativity. The requirement only says that if your theory profoundly disagrees with existing observation, try again. At an even simpler level, the approximately parabolic arc of a stone thrown in the air is an appropriate limit of Newton's mechanics.
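To make "appropriate limit" concrete, here is the textbook special-relativity case (a standard result, not something from the article): expand the relativistic energy for speeds well below c and the Newtonian kinetic energy drops out.

```latex
E = \gamma m c^2 = \frac{m c^2}{\sqrt{1 - v^2/c^2}}
  \approx m c^2 + \tfrac{1}{2} m v^2 + \tfrac{3}{8}\,\frac{m v^4}{c^2} + \cdots
  \qquad (v \ll c)
```

Subtract the rest energy and you are left with the familiar (1/2)mv^2, plus corrections that only matter as v approaches c; that is what it means for classical mechanics to be a limit of special relativity.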
But the hope is to create a theory where some other limit can also be found -- such a limit might be an unexpected new particle, astronomical observation, etc.
"I take the hard view that science involves the creation of testable hypotheses" - Michael Crichton
Consensus is not science. Computer Models are not science. Elegant equations are not science. Science is science. Science is testable, repeatable, falsifiable.
Science is not as simple as that. We have to use the experiments we can actually perform, and those are usually less rigorous than we'd like. Disregarding anything that doesn't fulfill some arbitrary standard of rigor as non-science is foolish. We have to keep the limits of our experiments in mind, but that doesn't mean that anything short of a perfect experiment is useless.
Computer Models are also testable, repeatable and falsifiable. They are certainly science, you just have to always keep the limitations and goals of a specific model in mind and avoid over-interpretation.
Follow the link and you'll see Crichton is not talking about those sorts of models; from near the end:
To an outsider, the most significant innovation in the global warming controversy is the overt reliance that is being placed on models. Back in the days of nuclear winter, computer models were invoked to add weight to a conclusion: "These results are derived with the help of a computer model." But now large-scale computer models are seen as generating data in themselves. No longer are models judged by how well they reproduce data from the real world -- increasingly, models provide the data. As if they were themselves a reality. And indeed they are, when we are projecting forward. There can be no observational data about the year 2100. There are only model runs.
This fascination with computer models is something I understand very well. Richard Feynman called it a disease....
I'd add that I read the TTAPS paper when it came out (back before I gave up on the AAAS) and followed the debate closely: one of the many reasons that paper was fatally flawed was that it used a one-dimensional model of the atmosphere and planet, i.e. particles could go up and down, but there were no winds, no oceans, etc. People in the field said such models were notorious for being easily overbalanced by inappropriate inputs, which were of course another problem with the paper, one I could personally confirm (having been a nuclear war survivalist since 2nd grade in 1969).
When LLNL did 3D modelling, their results said, yeah, maybe a nuclear fall, but that's it. Which comports with the observed effects of volcanic eruptions, which we have a lot of data on. I.e. the bottom line in simplest terms is "earth big, humans small".
ADDED: I just remembered that Feynman was there at the dawn of computerized scientific models, back when "computers" were the people who operated calculators. Some of the necessary scientific inputs into creating the bomb could not be solved analytically, only approximated, so they used a bunch of computers, as they were called then, along with the state-of-the-art electro-mechanical calculators of the day. He helped organize the operation so it would run well. This no doubt helps explain his much, much later involvement in the development of the Connection Machine, and one can imagine his horror as cheaper and cheaper computing was abused. [Insert a Cargo Cult Science observation here.]
So virology is not science? The belief that HIV causes AIDS is based on consensus. (Not everybody agrees even today, but the scientific community has moved on.)
Protein folding is not science? Its modern study is based on computational simulations.
Not all sciences are like physics. It's strange that the author of Jurassic Park would speak up against consensus and models when they're so fundamental to paleontology, for instance. Should we not believe that dinosaurs existed because evidence of their existence is not a falsifiable theory?
You can't smear protein folding because of a modern method of study. Are you arguing that you think it's possible that proteins don't fold or that some specific example of a protein might be folded differently than certain simulation experiments postulate?
You can't smear the fossil record because people put feathers on different species all the time. Fossils exist and organisms existed which created them.
There is science in each area: testable, repeatable, falsifiable. In theoretical physics, the argument is far more salient. There is a wack wing doing something that's not science (per the 3 tests definition), and that wing is trying to argue that the 3 tests shouldn't apply so they can continue to get funding.
Most physics teachers, in high school and beyond, will gladly tell you that the Bohr model of the atom is a klunky model that helps get the uninitiated to a modestly functional picture.
Stringnuts seem to be raising their klunky model up and trying to say it gets to be a theory, in a way which makes science and scientists (3 tests style) seem no better than charlatans and preachers. Part of me thinks this is because most of what seemed like it should be evident from ST is not panning out, and they know they are too far down the path. If ST gets funding cuts, their CVs atomize and blow away with the wind. They get to teach basic physics to kids meeting core requirements and polish mirrors in someone else's lab.
When they get too close to the corner they backed themselves into, they try to argue that Bayesian analysis says it is highly unlikely that the corner exists.
The HIV consensus is because we're too wimpy to try Koch's postulates on condemned prisoners. And that's not an issue with science. Also because money and politics.
Protein folding research still produces testable results, last time I checked (granted, in 1990).
Dinosaurs are too cool to require applying this level of rigor to research into them ^_^.
It says something truly ugly about science as it's practiced today that the authors' call for requiring falsifiability in this grand unified theory domain is motivated in part by pointing out that the reverse will call into question the grand unfalsifiable climate change consensus (or, as I uncharitably add, gravy train: $4 billion a year for climate "science" as of the Climategate release). BTW, it seems like they're lying about ocean acidification as well, by ignoring 70+ years of inconvenient facts: http://wattsupwiththat.com/2014/12/23/touchy-feely-science-o... (the plus comes from the mention that recordings, evidently not the systematized NOAA ones, started in 1850).
ADDED: when I went back to the article I noticed they explicitly used the phrase "climate change", meaning they also have bought into (at least for the purposes of publishing an item in the world's #1 journal of science) the unfalsifiability of climate "science" as it's now practiced.
Rather than e.g. noting that the lack of required falsifiability in climate "science" could well be infecting physics, and for that matter, why stop there, as long as it's important (where, as I note, "important" starts with getting grants from the usual suspects).
What is the utility of advancing a theory that is unfalsifiable in the Popperian sense? This category of theory, even if on strong philosophical footing, seems to be quite apart from theories that can be used to build devices that won't work if the theory turns out to be wrong.
In any case, I doubt very seriously that there are more than a few thousand people alive who can have an informed opinion on this.
To quote from the article: "As we see it, theoretical physics risks becoming a no-man's-land between mathematics, physics and philosophy that does not truly meet the requirements of any."
What I think you're suggesting is a philosophical benefit. Which, echoing another comment of mine, probably shouldn't get funded by the NSF (the US National Science Foundation, one of the parts of the US Federal government whose remit includes funding theoretical physics).
I wouldn't go that far - there's still utility to be had in generating the theories.
What is a practical problem, though, is that other, less sexy fields of physics are losing funding to the string theorists. Give them their own discipline, their own pot of cash, and let them figure out their theories, while simultaneously maintaining a separate pot of cash for those who work on the falsifiable parts of physics.
In other words, let it be a no-man's land, so long as its boundaries are delineated and separate from the math, physics, and philosophy it came from. There's nothing wrong with designing a new discipline.
Yeah, one of the other big issues I've noted people in the field complaining about is how string theorists are indeed sucking up a disproportionate share of theoretical physics funding, but it's worse than that. Since theirs is, from what I read, the US consensus on how to do that area of theoretical physics, in the usual groupthink fashion of funding organizations, other approaches find it difficult to get any respect, let alone funding.
Anyway, the problem here of course is that they would never accept being relegated to a field that's not "physics", since capital P Physics has so much respectability, pretty much the most of anything. But since they're now openly trying to redefine scientific truth (as the FDA did to come up with its second hand smoke "science"), it's hard to see any option less severe than the one you propose.
A theory that is unfalsifiable is more like a reformulation of the existing theories, and should be judged as such. Heaviside reformulated Maxwell's equations, and we use the new ones because they make things simpler to understand and use. It's not about finding "truth"; it's more about finding useful information. However, if a physics theory fails this criterion as well, then it is certainly pointless.
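For concreteness, the Heaviside reformulation referred to here is the familiar four vector-calculus equations (shown in SI form), which condensed Maxwell's original, much larger set of scalar equations:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

Same physics and no new predictions, but a vastly more usable formulation, which is the sense of "useful" being argued for here.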
I was obliquely suggesting that there is clearly "utility" in speculation (but let me note here that this layman would not accept metaphysics dressed up as physics).
I agree that the institutional angle here is what is the critical issue. So to answer the GP once again, "Go ask the Pope!"
Well, if you keep thinking about a theory for long enough, you might finally come up with a way to falsify it. No theory is falsified overnight. It takes a lot of work to do proper falsifications.
"The issue of testability has been lurking for a decade."
I guess he means it's become too big to ignore, for one of the things I remember about string theory when I first learned of it in the mid-late '80s was that at that time the string theorists couldn't conceive of a way to test it.
Relevant quote from the last line of the article: "The imprimatur of science should be awarded only to a theory that is testable. Only then can we defend science from attack."
Pity that's entirely negated by the authors' defense of "climate change": changing the last word from cooling (when I was growing up) to warming, and finally to "change", following of course the short-term temperature patterns, unambiguously signaled that the field of climate "science" had become untestable.
(Evolution, on the other hand, is sort of testable, for example in molecular genetics. Weasel words because the biggest argument is metaphysical, or as I pose it, "Who are we to say how God created the earth?" I'm agnostic, but as someone put it in a discussion I just read, not very good at it. :-)
The theory of anthropogenic climate change aka global warming can be tested and falsified.
The general hypothesis was put forth in the late 19th century: that an increase in the burning of fossil fuels would lead to a warming of the atmosphere. This is indeed what we have observed since then.
More granularly, you could test and falsify components of the theory: that CO2 absorbs and re-emits IR light; that burning fossil fuels release CO2; that CO2 persists in the atmosphere for a certain duration; that the ocean absorbs CO2 at a certain rate; that CO2-driven warming is not fully offset by another system such as clouds or snowfall; etc.
All of these (and more) have already been repeatedly tested. It was the cumulative results of all these tests that drove scientists to begin sounding the alarm in the 1980s--almost a century after the first hypothesis.
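To pick one quantitative example of the kind of component that can be checked (this is the commonly cited simplified expression from the climate literature, not something stated above): the radiative forcing from changing the CO2 concentration from C_0 to C is approximately

```latex
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
```

so a doubling of CO2 corresponds to roughly 3.7 W/m^2 of forcing, a number that can be compared against detailed radiative-transfer calculations and measurements.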
"Cooling", "Warming" and "Change" are one about as vague as the other. You're oversimplifying it, if you want a rigorous overview by what is meant by climate change go read IPCC's reports introduction: the major impacts of rapid man-made disturbances on the global climate. Some word had to be used instead of "Let's discuss Major Impacts of Rapid Man-made Disturbances on The Global Climate". (Ill stop derailing the discussion here)
Review my comment, particularly the "when I was growing up" bit. If my account oversimplifies, that's only because I'm repeating what the received wisdom was during each period as the "scientific consensus" changed to match the short term temperature record, and that filtered into the culture.
I was born in 1960, when Eisenhower was still President. It was the Official Truth, e.g. in the science fiction of the period (e.g. https://en.wikipedia.org/wiki/Time_of_the_Great_Freeze), that man was bringing on a new ice age.
I remember when the Party Line changed to "global warming". And I of course remember when that became untenable and changed to "climate change". And I read some of the scientific literature at each stage; I knew by the end of 1st grade that my calling was to be a scientist.
You may prefer to get your received wisdom from entities that make full use of modern style versions of Nineteen Eighty Four memory holes, I'll stick with what I personally witnessed and now remember.
Why should I be called upon to remember exactly which year ... for that matter, this is not the sort of thing that can generally be pinned down to one year, although it's possible it was that fast. Fashions can change rather rapidly.
But let's focus on the meat of my argument here: do you deny that the big thing used to be "global warming" in the '80s and '90s? If not, this article is evidence that it's now "climate change".
But to partly answer your question: even the massaged global temperature data stopped showing warming ~15 years ago. For a while, of course, nobody thought much of it; a year or three's pause means nothing. Hmmm, maybe it was around 10 years into that? It did take a while for the short-term trend to become evident, then after taking enough flack on that....
> Why should I be called upon to remember exactly which year
Your own words were: "I of course remember when that became untenable". I didn't think it would be any effort.
> Hmmm, maybe it was around 10 years into that?
So about 5 years ago then? 2009 or so. Let's say after 2000 just to be safe.
Because the IPCC - you know, the Intergovernmental Panel on Climate Change - was formed in 1988. That's "Climate Change". In 1988. I think perhaps your memory is faulty.
Also, "global temperature data stopped showing warming ~15 years ago" is just wrong. I thought you said you were a scientist?
As waterlesscloud notes, I'm talking about the popular usage. I'll turn your IPCC argument around: anyone with a clue would have realized there was a distinct chance that another change in the party line, like the cooling-to-warming flip, would be required by the short-term temperature record, so the IPCC merely future-proofed their gravy train.
My calling was science. Finances prevented me from fulfilling that potential, which is why you'll find me here on Hacker News.
The first graph shows that the temperature for 1998 (I think) is very high (~0.6C), and indeed temperatures in this century center around this value. Has global warming stopped since 1998?
But wait, one quickly discovers that the temperature for 1998 was the highest among all years up to 1998. Not only that, it is ~0.2C higher than 1997, which was also higher than all previous years. In fact, 1998 is >0.4C higher than the coolest year in the 1990s.
It doesn't take a lot of statistical expertise to conclude that 1998 was in fact an anomalously hot year, if one were to look at data only up to that year.
In this century, we hover around this 1998 temperature, and even routinely exceed it. What was an outlier became the norm.
Soon the same temperature will be considered a cool year. (Of course, the graph doesn't say that, but thousands of very bright people who have spent their lifetimes studying these graphs say that. I have no reason to doubt them, when the best argument thrown against them is "if you compare the hottest year of the last century with today's average, they are the same!")
Regarding your search, most of those links are explanations of why you're wrong. The problem is that scientists are saying something like, "Given the measurements we've made of the extra energy the earth is absorbing, we'd expect surface air temperatures to be rising even more rapidly than they are. The energy can't just disappear. Where is that extra energy going to?"
What deniers hear is, "Computer models tell us the earth should be heating up and it isn't! The energy must be missing! The earth is cooling!"
But, fair enough, I should have said more than "just wrong". So allow me to explain why it's wrong and, since you don't like temperature data, we can do it by talking about waves on a beach.
Imagine you're measuring the height that waves come up a beach. You notice that if you look at, say, the last 30 measurements, you can see a clear upward trend: the tide is coming in.
Now you'd like to determine whether or not the tide is turning. So you look at the last 16 measurements (it helps to start from that big wave 16 measurements back), and there you go - no significant increase in the height up the beach.
Does that tell you whether the tide has turned or not? Well of course it doesn't. Otherwise you could continually find moments where the "tide was turning" simply by taking a short enough period of measurement. And you'd continually be wrong.
What's the correct way of doing this? It's pretty simple. You check the longer term trend (tide is coming in) and then take the last 16 measurements and see if that trend is significantly different to your previous trend.
This way adding data strengthens your findings, whereas with the previous way removing data strengthens your findings.
So, basically, you're doing it wrong. Do it the correct way and then tell me what you find.
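Here's a minimal sketch of that point in code (synthetic, made-up data, not real temperature or tide measurements): fit an ordinary least-squares trend to the full series, then to a short window that starts at a single anomalously large value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "height up the beach": a steady upward trend plus noise,
# with one anomalously big wave 16 measurements from the end.
n = 60
t = np.arange(n, dtype=float)
height = 0.02 * t + rng.normal(0.0, 0.15, n)
height[n - 16] += 0.5

def slope_with_stderr(x, y):
    """Ordinary least-squares slope and its standard error."""
    xc = x - x.mean()
    slope = (xc * (y - y.mean())).sum() / (xc ** 2).sum()
    resid = y - (y.mean() + slope * xc)
    stderr = np.sqrt((resid ** 2).sum() / (len(x) - 2) / (xc ** 2).sum())
    return slope, stderr

# Full record: the underlying trend stands out clearly against its error bar.
print(slope_with_stderr(t, height))

# Last 16 points only, starting at the big wave: the estimate is noisy and
# biased low, so the short-window "trend" typically looks like it has vanished.
print(slope_with_stderr(t[-16:], height[-16:]))
```

The short window doesn't tell you the tide has turned; it only tells you the window is too short, which is the point about the last ~15 years of the temperature record.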
As for:
> the IPCC merely future proofed their gravy train.
Well, there's no falsifying the Golden Rule: "it's all part of the conspiracy"
It's not a clear cut thing, but there was definitely a period when "Global Warming" was the popular phrase, as a result of being promoted by scientists talking about public policy.
"But global warming became the dominant popular term in June 1988, when NASA scientist James E. Hansen had testified to Congress about climate, specifically referring to global warming."
So it looks like his memory was just fine on that topic.
His memory was of when the phrase "global warming" became "untenable and changed to 'climate change'". That's what's wrong. The phrase "climate change" was not introduced late on because there was some problem with "global warming". It's been in use from the start.
If he'd said "unfashionable" rather than "untenable" then he'd have been closer to the mark.
It looks from that like the two phrases were equally popular till 1994, when "Climate Change" suddenly took off. There was never a point where "Global Warming" was significantly more popular.
If a theory is mathematically elegant, I would consider that evidence for the theory. For example, the ability to explain the exact gauge groups and representations that comprise the Standard Model, or the masses of the elementary particles. In fact, I would say that a theory that could do this would be "tested" because it is a historical accident that we discovered the gauge groups/masses before the theory.
As far as I understand it, string theory once promised to do precisely this. String theory was considered promising because it had no vertex factors in its Feynman diagrams. Because there were only strings, you only needed to know the free-space propagator for a patch of string. However, as I understand it, string theory then turned out to have more possible parameters, and this original promise was lost.
The kind of elegance that the author seems to be talking about is conceptual elegance, and I also reject this as evidence. For example, invoking the anthropic principle plus multiple universes to explain the parameters of the Standard Model. To me this is no more compelling than the Christian "grand unified theory" that unifies ethics, numinous awe, and the existence of the universe.
To me, science is about prediction. How well do your predictions match reality? Predictions are made on the basis of some kind of model (a set of equations is one example of such a model). The quality of science, then, is predicated upon how closely predictions made via a model match reality.
In this sense, physics is the "best" science currently (12 decimal places of accuracy in predicting the gyromagnetic ratio of an isolated electron), followed probably by chemistry, then biology, then medicine, then psychology, etc. The only thing I mean by this ranking is that it is harder to come up with models that give accurate predictions in some fields relative to others.
A few people on this page have stated "Computer models are not science."
I disagree with this. My work is in molecular dynamics simulations and quantum chemistry.
Consider an experiment: you test nature and try to derive some sort of model that allows you to make future predictions. However, your model is marred by experimental error and system complexity (for instance, we can't currently observe all the neural activity in a human brain, because of how difficult it is to image a brain without being too invasive).
Now consider a simulation: you take pre-existing models and use them to formulate predictions.
Both experiment and simulation allow you to make predictions that can be tested experimentally. But the quality of your predictions depends only on how accurate your model is, not the mere fact that you are running a simulation!
Let me elaborate: quantum mechanics is almost perfectly accurate in describing all phenomena that occur in everyday life (Peter Gill has said this, and I believe Dirac said it as well). It only falls apart with things like quantum gravity.
So we have a very accurate model. If you plug the wavefunction of a particular human brain into the Schrodinger equation, you'll get an almost perfect evolution of the brain through time (as compared experimentally to the real brain). The problem is that despite having a very accurate model of reality, it is not easy at all to turn this model into a testable prediction. We can really only use this model to its fullest capability for a couple of particles.
And this is where computational modeling comes into play. You can take this highly accurate model of reality called quantum mechanics and intentionally introduce error into it in order to speed up the rate at which you can formulate a prediction from it. E.g., you make an approximation to the Schrodinger equation that you can solve quickly, and while you know that this approximation is not an exact solution, you can still develop predictions from it. And when you take these predictions and test them experimentally, you find that the experimental results match up pretty well. In fact, in some cases the error in your approximation to the Schrodinger equation is small enough that you can do better than experiment, because your numerical error is smaller than the systematic error that an experimentalist must deal with in the lab.
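Here is a toy sketch of that idea (a deliberately minimal, made-up example, nothing like real quantum-chemistry codes): discretize the one-dimensional harmonic-oscillator Schrodinger equation on a grid. The finite-difference Laplacian is a controlled approximation, yet the lowest eigenvalues it predicts can be checked against the exact answer E_n = n + 1/2 (in units where hbar = m = omega = 1).

```python
import numpy as np

# Grid for the 1D harmonic oscillator, H = -1/2 d^2/dx^2 + 1/2 x^2.
n_pts, x_max = 1000, 10.0
x = np.linspace(-x_max, x_max, n_pts)
dx = x[1] - x[0]

# Second-order central-difference approximation to the kinetic-energy operator,
# plus the potential on the diagonal.
diag = 1.0 / dx**2 + 0.5 * x**2
off = np.full(n_pts - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# Diagonalize the approximate Hamiltonian and compare with the exact spectrum.
energies = np.linalg.eigvalsh(H)
print(energies[:4])        # approximately [0.5, 1.5, 2.5, 3.5]
print(np.arange(4) + 0.5)  # exact eigenvalues for comparison
```

The discretization error is known and tunable (shrink dx and it falls off), which is exactly the trade being described: accept controlled error in exchange for a calculation you can actually run, then test the resulting predictions.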
So to me, it doesn't matter where the prediction comes from, or how it was formed. If that prediction can be tested -- and if once tested it is found to match reality to some degree -- then you have some form of science (of varying levels of quality).
String theory does in fact make physical predictions. But those predictions can't be tested yet. So it's still a form of science, although maybe not a very useful form, since it could be a long time (or never) before we are able to test it somehow. The many-worlds theory, however, is NOT a form of science, because it makes NO predictions whatsoever. It's just an interpretation. It says nothing different than any other interpretation of quantum mechanics. It is not science. (In a similar vein, I've always thought people who try to "use" science to argue for or against the existence of God are totally wasting their time, since neither side can even come up with a testable prediction.)
The Earth goes around the Sun, not the other way around. Copernicus realized this for the first time in the early 16th century. By the end of the 17th century, almost every scientist in Europe was convinced that Copernicus was right. Why? It was elegant (no epicycles) [1], it was politically exciting (it went against the doctrine of the Roman Catholic Church and was therefore especially attractive to reformers), and some of its followers (ever heard of Isaac Newton?) had come up with really cool ideas.
In other words, heliocentrism in the late 17th century had all the features that a popular programming language today might have [2]. But as a scientific theory, it remained unconfirmed, and its rival (geocentrism) remained unfalsified, even as the world embraced heliocentrism as the One True Theory.
Conclusive falsification of geocentrism, and thus compelling confirmation of heliocentrism, only arrived in 1838 with the first observation of parallax. That was almost 300 years after Copernicus first advocated heliocentrism, and over 100 years after almost everyone accepted it. For all those decades and centuries, people had been believing in heliocentrism without having tested it.
The history of science is rife with examples like this. Some theories are inherently difficult to test, so it can take a few decades, or even centuries, to collect conclusive experimental evidence. Sometimes you have to wait for others to develop the technology you need, just as heliocentrism had to wait for highly accurate telescopes to measure parallax. Because it costs a lot to develop such technologies, a theory without a critical mass of highly motivated followers is at risk of fizzling out before it can ever be tested.
This is an inevitable consequence of the fact that science depends on a bunch of hairless bipedal monkeys for its existence. When falsification takes a long time, humans tend to be influenced by political, philosophical, aesthetic, and even religious factors. And this isn't a Totally Evil Thing™, because if we weren't influenced by such factors, much fewer theories would ever make it to falsification. They would just fizzle out for lack of motivation.
How long has string theory been around? 50 years? And we're already being impatient with it? Remember how long we had to wait for Darwin's theory of evolution to become mainstream and well-supported by evidence? Remember how long it took for "driftists" (those who supported plate tectonics) and "fixists" (those who opposed it) to reach a consensus? 50 years is about as long as it takes to test a theory about the distant history of this planet using recent technology. How much longer do you think it will take to test a theory about the fundamental structure of the universe, when we can't even imagine the kind of technology we'd need in order to start testin'?
Copernicus waited 300 years. String theorists should expect to wait 500-1000 years, if not more. And we, the rest of the society, should strive to support such long-term endeavors to the best of our abilities [3]. It's not as if string theorists are asking us to build expensive underground facilities for them, right?
[1] The lack of epicycles was a particularly attractive feature because of the newly discovered moons of Jupiter. Too many levels of epicycles made the Ptolemaic model look rather inelegant.
[2] Elegant syntax, sexy community, and a cool standard library.
[3] That is, unless there's something so obviously fishy about a theory that the consensus is that it's not even worth trying to falsify. Intelligent design probably falls into this category.
Copernicus realized this for the first time in the early 16th century.
Copernicus came up with the predictive mathematical model, but the idea of heliocentrism goes back almost two millennia before him.
The history of science is rife with examples like this.
The collection of ideas now known as 'the scientific method' didn't exist in Copernicus's day, and has undergone major modifications even over the past 100 years. At this point, we've really nailed down something that was barely a wisp of an idea in Copernicus's time. It took a long time simply because falsifiability wasn't an ingrained scientific process in the age of Copernicus.
And we, the rest of the society, should strive to support such long-term endeavors to the best of our abilities
500-year experiments should be very low on the list, particularly ones to be supported 'to the best of our abilities'. We have more pressing issues and ideas that should get more of our attention. Saying 'trust us, this will be important long, long into the future' isn't too far from founding a new religion.
Except within a certain faction of purists who want to purge science of anything that cannot be falsified in the short term, and outside of textbooks influenced by their opinions, I don't see any consensus on what constitutes the scientific method and what falls outside of it. How do we know that there won't be further "major modifications" over the next 100 years?
Modern people often suffer the illusion that they exist at the apex of history, and their proud claims of perfection just as often fail to stand the test of time.
> It took a long time simply because falsifiability wasn't an ingrained scientific process in the age of Copernicus.
No, it took a long time because the technology needed to falsify geocentrism simply did not exist until the mid-19th century. Likewise, we currently do not have the technology to test large parts of contemporary theoretical physics. In fact, it seems to be in the very nature of theoretical physics that its predictions are hard to test. I can't think of a single major advance in modern physics that took less than a decade to test conclusively.
> 500-year experiments should be very low on the list
I agree. But when it comes to highly abstract theories of cosmology, do we really need to compile a list of priorities at all? It's not as if string theorists and multiverse proponents are asking for funding to send probes to outer space. It's all armchair speculation at this point, and nobody's forcing anyone to believe anything.
If we as a society can afford to throw money at SF films and number-puzzle games with a high "geek factor", surely we can afford to let some professors engage in geeky speculation for a few more centuries? Because that's all I mean by "support to the best of our abilities". Just keep respecting them as fellow scientists whose conjectures unfortunately cannot be tested within their lifetime. Acknowledge that a small fraction of our scientific workforce needs to be engaged in such long-term projects, and take comfort in the fact that this fraction will always remain small.
Or have we become so insecure, narrow-minded, and obsessed with short-term ROIs that we cannot stand the sight of as-yet-unfalsified theories occupying the precious pages of our journals?
TL;DR: "The scientific method is not about testability or falsifiability per se. It is about trying hard to falsify. Nothing more, nothing less. Truth is to be found in those ideas that survive despite the multitude of attempts to falsify them[...]quantum gravity theories needs to reproduce all known low-energy physics, as well as all known classical gravitational phenomena. In other words: if you ensure your favorite theory of quantum gravity in the appropriate limits reduces to the standard model and Einstein's theory of gravity, your theory has survived all falsification attempts ever undertaken in the history of physics. This is a key point that fails to receive attention in 'string theory and falsifiability' debates."