I guess I'll have to sleep on this, because at a quick glance I can't really make sense of how it answers my question.
I do find it ironic though that this is so difficult to explain that I apparently have to read a paper to understand it... I would've thought the blog post was trying to explain things in simple terms...
Well, if this stuff was easy to understand, we wouldn't be having a crisis because of it, would we?
Probability is highly non-intuitive (kind of like quantum mechanics), so most people (including scientists) don't understand it and just memorize "protocols" and "formulas" and "p-value stuff".
The crisis isn't caused by ignorance. It isn't as if everything would be resolved if people took a few stats courses or just switched to another system. Replace p-values with any other system for distinguishing publishable work from non-publishable work and you end up in the same place.
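To make that concrete: the selection effect doesn't depend on p-values in particular. Here's a quick sketch (stdlib only; the z-test and the parameters are my own illustrative choices) simulating many experiments where there is no real effect at all. Roughly 5% of them still clear the p < 0.05 bar, and those are exactly the ones a "publishable vs. not" filter would surface, whatever statistic you filter on.

```python
import random, math

random.seed(0)

def z_test_p(sample_a, sample_b):
    """Two-sided p-value for a two-sample z-test, assuming unit variance.
    A simplification chosen to keep the sketch dependency-free."""
    n = len(sample_a)
    diff = (sum(sample_a) - sum(sample_b)) / n
    se = math.sqrt(2.0 / n)          # standard error of the mean difference
    z = abs(diff) / se
    # standard normal tail probability via erf
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

trials, n = 2000, 50
significant = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]  # same distribution: no true effect
    if z_test_p(a, b) < 0.05:
        significant += 1

rate = significant / trials
print(f"false-positive rate at p<0.05: {rate:.3f}")
```

By construction the rate hovers around 0.05. If journals only print the "significant" runs, the published record is built from that 5% regardless of how honest each individual experiment was, and swapping the threshold statistic just relabels which 5% gets through.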
Science is messy but eventually consistent. Everybody should be cautious of single results and instead rely on the community to synthesize those results into predictive models.
How about evaluating scientists on more than how many "publishable" discoveries they make? There are so many other roles a scientist can play in the community beyond that.
We've learned not to value developers by lines of code written, and to value refactoring and elimination of code; scientists can do the equivalent. Scrutinize, weed out, reinterpret old results, mentor others, etc. There's plenty of work beyond finding p < 0.05.
But such things are seen rather as side quests, secondary pursuits, almost like a hobby (e.g. writing a textbook).