I'd argue that if your ML model is sensitive to the anti-aliasing filter used in image resizing, you've got bigger problems than that. Unless it's actually making a visible change that spoils whatever it is the model is supposed to be looking for. To use the standard cat/dog example, the choice of filter or resampling method is not going to change what you've got a picture of, and if your model is classifying based on features that change with resampling, it's not trustworthy.
If one is concerned about this, one could intentionally vary the resampling, or deliberately add different blurring filters, during training to make the model robust to these variations.
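For instance (a minimal Pillow sketch; the filter list, blur probability, and radius range are arbitrary illustrative choices, not anything established in this thread):

    import random
    from PIL import Image, ImageFilter

    # Candidate resampling filters to randomize over during training.
    RESAMPLING_FILTERS = [
        Image.Resampling.NEAREST,
        Image.Resampling.BILINEAR,
        Image.Resampling.BICUBIC,
        Image.Resampling.LANCZOS,
    ]

    def augment_resize(img: Image.Image, size=(224, 224)) -> Image.Image:
        """Resize with a randomly chosen filter, sometimes adding a mild blur."""
        out = img.resize(size, resample=random.choice(RESAMPLING_FILTERS))
        if random.random() < 0.3:  # arbitrary probability, purely illustrative
            out = out.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.3, 1.0)))
        return out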
You say that “if your model is classifying based on features that change with resampling, it’s not trustworthy.”
I say that the choice of resampling algorithm is what determines whether a model can learn the rule “zebras can be recognized by their uniform-width stripes” or not, since a bad resample will result in non-uniform-width stripes (or, at sufficiently small scales, loss of the stripes entirely!)
Stripes that alternate between 5 black pixels and 4 black pixels + 1 dark-grey pixel aren’t actually a visible change to the human eye. But they are visible to the model.
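That effect is easy to reproduce; a quick sketch (the stripe width, target size, and filter pair are just for illustration):

    import numpy as np
    from PIL import Image

    # Synthetic "zebra": vertical stripes exactly 5 px wide.
    cols = (np.arange(100) // 5) % 2
    stripes = np.where(cols == 1, 255, 0).astype(np.uint8)
    img = Image.fromarray(np.tile(stripes, (100, 1)))

    # Downscale to a size that doesn't divide the stripe period evenly.
    for name, f in [("NEAREST", Image.Resampling.NEAREST),
                    ("LANCZOS", Image.Resampling.LANCZOS)]:
        small = np.asarray(img.resize((37, 37), resample=f))
        print(name, np.unique(small))
    # NEAREST keeps the image binary (just 0 and 255) but the stripe widths
    # become irregular; LANCZOS introduces a spread of grey levels, i.e. the
    # "4 black pixels + 1 dark-grey pixel" effect described above.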
I'm not saying your general argument is wrong, but... zebra stripes are not made out of pixels. A model that requires a photograph of a zebra to align with the camera's sensor grid also has bigger problems.