There's no mystery to it: if one trains a chatbot explicitly to eschew establishment narratives, one persona the bot will develop is that of an edgelord.
To me, the reason Linda left is probably not that Grok said these things. Tweaking chatbots is hard, and yes, prompt engineering can make one say almost anything, but I'm guessing it comes down to her sense of control and governance: not wanting to constantly clean up Musk's messes.
Musk made a change recently; he said as much, and he was all "move fast and break things" about it. I imagine Linda is tired of dealing with that, and it probably coincided with him focusing on the company more now that he has left politics.
We can bikeshed on the morality of what AI chatbots should and shouldn't say, but it's really hard to manage a company and product development when you have such a disorganized CTO.
... yes, that's the complaint. The prompt engineering they did made it spew neo-Nazi vitriol. Either they didn't adequately test it beforehand and didn't know what would happen, or they did test it and knew the outcome; either way, it's bad.
Do you think Tay's user interactions were novel, or is race-based hatred a persistent strain of human garbage that made it into the corpus used to train LLMs?
We're literally trying to shove as much data as possible into these things, after all.
What I'm implying is that you think you made a point, but you didn't.