> so that you won't be turned down for lack of novelty
I think this is also a driver of a lot of fraud. It can be outright fraud; it can be subtle exaggeration, where you know or have a very strong hunch that something is true but lack the proof or the resources to prove it (though you would if this work got through); or, far more commonly, it can be obfuscation. The latter happens a lot because if something is easy to understand, it is far more likely to be seen as not novel, and if communicated too well it may even be viewed as obvious or trivial. It does not matter whether anyone else has done it, or how many people and papers you can cite claiming the opposite.
On top of this, novelty scales extremely poorly. As a field progresses, what counts as novel becomes more subtle, and the more ideas we have seen, the easier it is to relate any one idea to another.
But I think the most important part is that the entire foundation of science is replication. So why do we have a system that not only fails to reward the most important thing, but actively discourages it? You cannot confirm results by reading a paper (though you can invalidate them by reading). You can only confirm results by repeating them. And the secret is that you will almost always learn something new in the process, though the information gained decreases with each replication.
We have a very poor incentive system, one that relies largely on people acting in good faith. It is a very hard system to fix, but the biggest error is refusing to admit that it is a noisy process. Structures can be held together by high morals only while the community is small and accountability is clear. That doesn't hold at scale, because there are always incentives to cut corners, and if you have to compete with someone who cuts corners, it is much harder to win without cutting more corners yourself. It's a slow death, but still death.
> the entire foundation of science is replication. So why do we have a system that...
Because science is just like a software company that has outgrown "DIY QA": even as the problem becomes increasingly clear, nobody on the ground wants to be the one to split off an "adversarial" QA team because it will make their immediate circumstances significantly worse, even though it's what the company needs.
I wouldn't extrapolate all the way to death, though. If there are enough high-profile fraud busts that funding agencies start to feel political heat, they will suddenly become willing to fund QA. Until that point, I agree that nothing will happen and the problem will get steadily worse until it does.
I would say short-term rewards heavily outweigh long-term rewards. This holds even when the long-term reward is much larger, and even when the time to reach it is not much longer than the short-term path. Time matters, but I think it is greatly overvalued.