I disagree. Negative results are important. Null results are of very limited interest. The two are worlds apart.
A null result simply means you tried something and saw no effect. But you don't know why. You haven't proven that it doesn't work. There are literally millions of reasons why something might not work. For instance, you could try to use compound X to cure disease Y, observe no effect, and conclude that X doesn't cure Y. But what if, somewhere in the process of making X, you made an uncaught mistake and actually used X'?
A negative result means that you tried something and you came to the proven conclusion it doesn't work. This is, crucially, as hard to obtain as a positive result. In my example, it would imply a much longer process than simply "apply X, see no effect in Y, make a few robustness checks, done".
You could say "Well, publish the null anyway, somebody will catch the mistake". Unlikely. There are already so many papers out there that keeping up is impossible. If we also published null results, that number would grow at least tenfold. Nobody could possibly check everything. People would see a paper titled "X doesn't cure Y", call it knowledge, and a possible cure would be stifled virtually forever.
Am I splitting hairs? Perhaps. But I think HN prides itself on being a scientifically minded community, and thus it has a mandate to use terms correctly. Confusing "null" with "negative" is a sin.
I hope one day I'll find a way to strongly and passionately argue against the "null results are as important as positive results" position. It is a bad meme. Charitably, I consider it most of the time an honest mistake. But sometimes it gives me the impression of being a cheap trick used to erode the reputation of academia.
True, but I'm not really arguing against what you're saying. It is true that, when you have a positive (or a negative!) result, you should also report the nulls you obtained along the way (most likely in the supplementary materials) as a complement to the result, to put it into context.
What I'm arguing against is publishing a null result as a stand-alone publication. This creates the illusion of it somehow being a "result", which it is not (in fact, we should stop calling them "results" altogether). With a null you haven't proven anything, and thus it is not a sufficient basis for a publication.
I see. Thanks for adding to the clarification. I think that the presentation of nulls as "results" can definitely be disingenuous. Ideally, science would have a better database to keep track of what people find, where we could add nulls in a way that doesn't highlight their "importance". As the person above says, reporting nulls is still useful to prevent p-hacking and publication bias.
(Of course, ideally I think we'd be better off reporting the data in a Bayesian framework, but that hasn't really gotten traction in the broader community.)
> Negative results are important. Null results are of very limited interest.
Correct. There is a highly cited paper in CS where the author showed that a mathematical model widely used in research no longer actually worked in reality. That paper was the starting point of a lot of new research in that field.
> I disagree. Negative results are important. Null results are of very limited interest. The two are worlds apart.
I agree they're different, but disagree that they're worlds apart. There's a spectrum between them, caused by uncertainty and statistics. If I say the average treatment effect of my new drug is probably somewhere between -x and +y, that could be a negative result or a null result. It's the fuzzy line between statistically insignificant and materially insignificant.
Maybe I only had two patients per experimental cell, so I barely learned anything. The drug's treatment effect on lifespan is between -30 years and +10 years. It's "null" in that we didn't learn much of anything.
Maybe I had a billion patients per cell and I learned that the average treatment effect on lifespan is between -0.001 days and +0.1 days. It's "negative" in that we learned the drug doesn't materially affect lifespan.
The position we seem to be in is that most conventional experiments are powered at 80% for a moderate effect size, meaning that many of our null-or-negative (-x, +y) intervals will land right in the region where it's unclear whether the result is null or negative.
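To make that concrete, here's a minimal sketch (all numbers and the normal approximation are my own invention, purely illustrative): with two patients per arm the interval on the treatment effect is so wide it tells you nothing (a null), while with a huge sample the same "no significant effect" interval pins the effect near zero and becomes a genuine negative.

```python
# Minimal sketch (all numbers invented) of "null vs negative" as interval width.
import numpy as np

def ci_for_difference(treated, control, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / len(treated)
                 + control.var(ddof=1) / len(control))
    return diff - z * se, diff + z * se

rng = np.random.default_rng(0)
true_effect, sd = 0.0, 10.0   # suppose the drug truly does nothing

# Two patients per arm: the interval is enormous -> a "null", we learned almost nothing.
small = ci_for_difference(rng.normal(true_effect, sd, 2), rng.normal(0.0, sd, 2))
print("n = 2 per arm:   CI ~ (%.1f, %.1f)" % small)

# A million patients per arm: the interval hugs zero -> effectively a "negative":
# the drug has no material effect.
big = ci_for_difference(rng.normal(true_effect, sd, 1_000_000),
                        rng.normal(0.0, sd, 1_000_000))
print("n = 1e6 per arm: CI ~ (%.3f, %.3f)" % big)
```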
I generally agree in the sense that "null results" should not be published as "results." But, especially in the experimental sciences, I think it would be an incredibly useful feat of work to have well-documented experiments that ultimately turned out null or failed, to prevent others from repeating them. (Or, on the other hand, to have people improve on the given methods in order to get a positive/negative result in some specific sense. For example, photonics returning to lithium niobate platforms, which were essentially abandoned in the '80s but have had incredible successes lately. I'm sure there's been a lot of duplicated work here.)
Of course, the problem with all of this is that there really aren't good incentives to accurately and carefully report null experimental results (except as a kind of "folk knowledge" within a given lab), which limits their general usefulness. But the "platonic ideal," so to speak, of a null-result journal would, I think, be relatively useful.
I think you need to rework your definitions. Avoid using the word proven. Most of the time science proves things false. You can't prove anything to be true.
The difference between a null and a negative is just that a negative is an interesting null. In your null example, to create a proper negative you'd probably report several compound synthesis methods instead of one. You'd probably also want to use more mice/data in your analysis.
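On the "more mice/data" point, here's a back-of-the-envelope sketch of how much data that can mean, using the standard normal-approximation sample-size formula (the effect sizes below are arbitrary choices of mine, not from the thread):

```python
# Rough sample-size sketch: animals (or patients) per group needed to detect
# a standardized effect d with a two-sided alpha and a given power.
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / d) ** 2

for d in (0.8, 0.5, 0.2):  # large / medium / small smallest-effect-of-interest
    print(f"effect size d = {d}: ~{n_per_group(d):.0f} per group")
```

The point being: turning "we saw nothing" into "there is no effect of size d or larger" means committing to a smallest effect of interest and collecting enough data to rule it out.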
Those are some good reads, and they absolutely match my experience. It's depressing how many publications I've come across that don't provide the whole story and are probably false.
I've found that looking at what a paper doesn't report can be far more important than what it claims.
I definitely think there's more room for this sort of guided/ML analysis, but I'm not quite sure how to get traction on extracting the structure of scientific papers... hopefully someone with more experience can chime in.
See also the "file-drawer problem" (https://en.wikipedia.org/wiki/Publication_bias). Also, with regard to the incentives in the field and the lack of null results, there's always Ioannidis's classic work (https://journals.plos.org/plosmedicine/article?id=10.1371/jo...).