That interpretation makes the test basically useless, because we know a priori that any two variables describing things within each other's light cones affect each other at least a little. More practically, since the test tells you nothing about the size of the effect, it will pick up on the tiniest bias in your experimental procedure and always reject the null if you have enough data.
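A quick simulation makes the "enough data always rejects" point concrete (a minimal sketch with made-up numbers; the 0.01 "bias" and the sample sizes are purely illustrative):

    # Point-null t-test when the true effect is a tiny bias of 0.01.
    # As n grows, the test rejects even though the effect is trivial.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    bias = 0.01  # hypothetical tiny experimental bias

    for n in (100, 10_000, 1_000_000):
        x = rng.normal(loc=bias, scale=1.0, size=n)
        t, p = stats.ttest_1samp(x, popmean=0.0)
        print(f"n={n:>9,}: p = {p:.3g}")
    # p hovers around chance at small n, then collapses toward zero
    # once n is large enough to resolve the 0.01 bias.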
From the author of the article: "The general point reminds me of my dictum that statistical hypothesis testing works the opposite way that people think it does. The usual thinking is that if a hyp test rejects, you’ve learned something, but if the test does not reject, you can’t say anything. I’d say it’s the opposite: if the test rejects, you haven’t learned anything—after all, we know ahead of time that just about all null hypotheses of interest are false—but if the test doesn’t reject, you’ve learned the useful fact that you don’t have enough data in your analysis to distinguish from pure noise."
(https://statmodeling.stat.columbia.edu/2019/08/18/i-feel-lik...)
To add to this, the problem with t-tests is not the threshold. It is that the null hypothesis you are rejecting (the effect is exactly 0.0000000...) is infinitesimally narrow: a single point. You've rejected basically nothing of your hypothesis space.
Your null should have a width. What you reject should be "the effect is within some margin of zero", so that a rejection establishes the effect is greater than that margin, and you should have to argue that the margin exceeds any bias you might expect in your experiment. There are always at least tiny biases. A sketch of what that looks like follows.
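In practice that means testing a shifted null rather than an exact zero. A minimal sketch with scipy (the 0.1 margin is an assumed bias bound you would have to argue for, and the data are toy numbers):

    # One-sided test of the widened null "effect <= margin" instead of
    # the point null "effect == 0". Rejecting it supports the claim
    # that the effect exceeds the margin covering any plausible bias.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.normal(loc=0.3, scale=1.0, size=500)  # toy data

    margin = 0.1  # assumed upper bound on experimental bias
    t, p = stats.ttest_1samp(x, popmean=margin, alternative='greater')
    print(f"H0: effect <= {margin}  ->  p = {p:.3g}")
    # A small p says the mean exceeds the margin, not merely that it
    # differs from an exact zero.

(A two-sided version would test against both +margin and -margin, i.e. a minimum-effect test; this sketch keeps to one side for brevity.)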