The AI companies don’t want you “anthropomorphising” the models because it would put them at risk of increased liability.
You will be told that linear algebra is just a model, while the fact that epistemology has never produced a decent account of what knowledge is gets ignored.
We are meant to believe that we are somehow special magical creatures and that the behaviour of our minds cannot be modelled by linear algebra.
I don't see how anthropomorphism reduces liability.
If a company does a thing that's bad, it doesn't matter much if the work itself was performed by a blacksmith or by a robot arm in a lights-off factory.
> We are meant to believe that we are somehow special magical creatures and that the behaviour of our minds cannot be modelled by linear algebra
I only hear this from people who say AI will never reach human level; of the AI developers who get press time, only LeCun seems that dismissive (though I haven't actually noticed him making this specific claim, I can believe he might have).
No, you’re just meant not to assert that linear algebra is equivalent to any process in the human brain, when the human brain is not understood well enough to draw that conclusion.