What if our understanding of the laws of the natural sciences is subtly flawed, and AI just corrects perfectly for that flawed understanding without ever telling us what the error in our theory was?

Forget trying to understand dark matter. Just use this model to correct for how the universe works. What is actually wrong with our current model, whether dark matter exists, or whether something else entirely is causing the effect no longer matters. "Shut up and calculate" becomes "Shut up and do inference."




All models are wrong, but some models are useful.


Black-box AI models could calculate epicycles perfectly, so the medieval Catholic Church could tell you to just use those instead of turning into a geocentrism denier.


High accuracy can come from a pretty incorrect model. When and where that would then go completely off the rails is difficult to say.
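
A toy illustration of that point (made-up data and parameters, plain numpy, nothing to do with the article): fit a structurally wrong model to data generated by a different law. It predicts beautifully inside the observed range and falls apart the moment you extrapolate.

    import numpy as np

    # "True" law (unknown to the modeller): exponential growth.
    rng = np.random.default_rng(0)
    t_obs = np.linspace(0, 5, 200)
    y_obs = np.exp(0.8 * t_obs) + rng.normal(0, 0.05, t_obs.size)

    # Structurally wrong model: a degree-6 polynomial.
    fit = np.poly1d(np.polyfit(t_obs, y_obs, 6))

    # Inside the observed range the wrong model looks excellent...
    in_range_err = np.max(np.abs(fit(t_obs) - np.exp(0.8 * t_obs)))

    # ...but extrapolate a little and it goes off the rails.
    t_new = np.linspace(6, 8, 50)
    out_range_err = np.max(np.abs(fit(t_new) - np.exp(0.8 * t_new)))

    print(f"max error, interpolation:  {in_range_err:.3g}")
    print(f"max error, extrapolation:  {out_range_err:.3g}")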


ML is accustomed to the idea that all models are bad, and there are ways to test how good or bad they are. It's all approximations and imperfect representations, but they can be good enough for some applications.
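
A minimal sketch of what such a test looks like (synthetic data, plain numpy, hypothetical numbers): every candidate model is "wrong" for the data, but held-out error tells you which approximation is less bad and whether it's good enough.

    import numpy as np

    # Held-out evaluation: the standard way ML quantifies how wrong a model is.
    rng = np.random.default_rng(1)
    x = rng.uniform(-3, 3, 300)
    y = np.sin(x) + rng.normal(0, 0.1, x.size)   # neither candidate matches this form

    x_train, x_test = x[:200], x[200:]
    y_train, y_test = y[:200], y[200:]

    def heldout_mse(degree):
        # Fit an imperfect polynomial model on the training split,
        # then score it on data it never saw.
        model = np.poly1d(np.polyfit(x_train, y_train, degree))
        return np.mean((model(x_test) - y_test) ** 2)

    for d in (1, 3, 5):
        print(f"degree {d}: held-out MSE = {heldout_mse(d):.4f}")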

If you think about it carefully, humans operate in the same regime. Our concepts are all like that: imperfect, approximate, glossing over details. Our fundamental grounding and test is survival, an unforgiving filter, but lax enough to allow anti-vaxxer movements during the pandemic. The survival test does not test for truth directly; it only weeds out ideas that fail to support life.


Also lax enough for the hilarious mismanagement of the situation by "the experts". At least anti-vaxxers have an excuse.


Wouldn't new data and results give us more hints about the true nature of the thing? I fail to see how this is a bad thing in anyone's eyes.



