
There is a whole group of people in the social sciences pushing to abandon null hypothesis testing altogether. For the reason you mentioned, I can't believe that changing the test (or the threshold of the test) would solve this issue. Ask people to find something significant or to fit a model to data, and tell them that's what matters, and they'll do it, intentionally or unintentionally.
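A minimal sketch of why "find something significant" always succeeds, even when there is nothing to find (the group sizes, number of tests, and 0.05 threshold here are just illustrative choices, not anyone's actual study design):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # 20 "studies" in which the null is true by construction:
    # both groups come from the same distribution.
    n_tests, n_per_group = 20, 30
    p_values = []
    for _ in range(n_tests):
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        p_values.append(stats.ttest_ind(a, b).pvalue)

    # Keep testing until something crosses 0.05 and you will "find"
    # an effect that isn't there about 1 - 0.95**20 ≈ 64% of the time.
    print(min(p_values), sum(p < 0.05 for p in p_values))

Swapping the t-test for another test, or 0.05 for a stricter cutoff, only changes the numbers, not the dynamic.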

Machine learning will have the same issue if the only thing that matters is hitting a certain level of accuracy given your model and data. This has been observed in Kaggle competitions over and over: ask a group of people to find the best fit, and they will, by effectively learning your train, validation, and test datasets.
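A minimal sketch of that leaderboard effect, assuming nothing beyond random guessing: repeatedly scoring against the same held-out set and keeping whatever scores best drives the reported accuracy above chance even though no model has any signal (the dataset size and submission count below are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)

    # Labels on a fixed held-out "leaderboard" set.
    n_test, n_submissions = 1000, 500
    y_test = rng.integers(0, 2, size=n_test)

    best_acc = 0.0
    for _ in range(n_submissions):
        preds = rng.integers(0, 2, size=n_test)  # a model with zero signal
        acc = (preds == y_test).mean()
        best_acc = max(best_acc, acc)            # keep the "best" submission

    # Drifts well above the true 50% chance level, purely from
    # adapting to the test set.
    print(best_acc)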

As mentioned, the problem is not the p-value or null hypothesis testing. The problem is journals that promoted the wrong incentive, and educators who, unaware of the consequences, passed that wrong incentive (and interpretation) on to their students.



