Haven't we seen real-life examples of this in AI for medical imaging? Models trained on tumor images were more likely to flag lesions as cancerous when the lesion was circled in purple ink or photographed next to a measurement scale, because those artifacts co-occurred with malignant cases in the training data, so the models learned them as cues for cancer.