What if our understanding of the laws of the natural sciences is subtly flawed, and AI just corrects perfectly for our flawed understanding without ever telling us what the error in our theory was?
Forget trying to understand dark matter. Just use this model to correct for how the universe works. It no longer matters what is actually wrong with our current theory, or whether dark matter exists, or whether something else entirely is causing the observations. "Shut up and calculate" becomes "Shut up and do inference."
A black-box AI model could calculate epicycles perfectly, so the medieval Catholic Church could have said: just use those, instead of being a geocentrism denier.
ML is comfortable with the idea that all models are bad, and there are ways to test how good or bad they are. It's all approximations and imperfect representations, but they can be good enough for some applications.
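The point above can be made concrete with a toy sketch: two deliberately imperfect models of an unknown process, ranked by held-out error. All names here are illustrative, not from any real library.

```python
# Toy illustration of "all models are wrong, but measurably so":
# approximate an unknown process with two imperfect models and
# rank them by mean squared error on held-out points.

def true_process(x):
    # Stand-in for the unknown system we can only observe.
    return 0.5 * x * x + 1.0

def linear_model(x):
    # Imperfect approximation #1: a straight line.
    return 2.0 * x

def constant_model(x):
    # Imperfect approximation #2: ignores the input entirely.
    return 3.0

def held_out_mse(model, xs):
    # Average squared error on points the models were not tuned on.
    return sum((model(x) - true_process(x)) ** 2 for x in xs) / len(xs)

test_points = [0.0, 1.0, 2.0, 3.0, 4.0]
mse_linear = held_out_mse(linear_model, test_points)
mse_constant = held_out_mse(constant_model, test_points)

# Neither model is "true" -- but the numbers say how wrong each one is,
# which is enough to pick the better approximation for an application.
print(f"linear MSE:   {mse_linear:.2f}")
print(f"constant MSE: {mse_constant:.2f}")
```

Neither model recovers the quadratic truth, yet the error metric still orders them, which is exactly the regime the paragraph describes: useful without being correct.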
If you think about it carefully, humans operate in the same regime. Our concepts are all like that: imperfect, approximate, glossing over details. Our fundamental grounding and test is survival, an unforgiving filter, yet lax enough to allow anti-vaxxer movements during the pandemic. The survival test does not test for truth directly; it only weeds out ideas that fail to support life.