Claims that MS did due diligence aside, Tay having a vulnerability that some people exploited is still a better explanation than "it was a real AI that learned poor morals from the Internet".
Also, the claim was that small groups fed the bot a lot of data very quickly. The rule might have been something like "don't say X unless everyone is saying X already", which might have worked in small tests but could clearly be gamed by a coordinated flood of inputs (see the sketch below).
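Purely to illustrate why such a rule is exploitable, here is a minimal Python sketch of a popularity-threshold filter. Everything in it (the class name, the window size, the 30% threshold) is invented for the example; this is not a claim about how Tay actually worked.

```python
from collections import Counter

class NaiveEchoFilter:
    """Hypothetical 'only say X if everyone is saying X' heuristic."""

    def __init__(self, window_size=1000, threshold=0.3):
        self.recent = []                  # sliding window of recent user inputs
        self.window_size = window_size
        self.threshold = threshold        # fraction of window needed to "unlock" a phrase

    def observe(self, phrase):
        self.recent.append(phrase)
        if len(self.recent) > self.window_size:
            self.recent.pop(0)

    def may_repeat(self, phrase):
        # Allow a phrase only if it dominates the recent window.
        counts = Counter(self.recent)
        return counts[phrase] / max(len(self.recent), 1) >= self.threshold

# A small coordinated group defeats this easily: 400 repeats of one phrase
# in a 1000-message window clears the 30% threshold.
f = NaiveEchoFilter()
for _ in range(400):
    f.observe("bad phrase")
for _ in range(600):
    f.observe("normal chatter")
print(f.may_repeat("bad phrase"))  # True -- the filter is now "unlocked"
```

In a small test population no single phrase dominates the window, so the filter looks safe; it only fails once a motivated group supplies a disproportionate share of the input stream.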
I suspect most people talking as if "she" were a teen soaking up random bad ideas are falling for the ELIZA effect.
https://en.wikipedia.org/wiki/ELIZA_effect