

https://x.com/i/grok/share/br3CqX6Qk9tS8Gj6LAvlnpDg9

Seems like a pretty reasonable answer to me.


It isn't, though, because it's not a complex and nuanced issue whatsoever. It's no different from teaching the controversy about evolution or seeing both sides of the Holocaust. It was part of a planned coup against our government.

Furthermore, if you push it, it stops responding and refuses to answer at all.


I see. You WANT a slanted LLM, just one that's slanted in your direction!


It is not slanted for it to report reality. Also, it's a dead giveaway that it's being tweaked when it stops responding. It's the same as if you touched on another forbidden topic.


There is nowhere near the level of social consensus about the events of January 6th as there is about evolution or the Holocaust (if you think there is, I would venture you're either deep in a particular cultural bubble or being blinded by your own strong views on the topic).

Anyway, all RLHFed models are "tweaked". Perhaps Grok leans a bit more "right" than ChatGPT or Claude (though I haven't noticed that), but it's not radically different.

Here's ChatGPT's answer to the original question:

https://chatgpt.com/share/682cac41-485c-8003-9e35-d37123b2a5...

It is similar to Grok's.


There isn't as much social consensus about the Holocaust among Nazis either, but it is amazingly clear what happened in Germany under Hitler's regime and on January 6th.

When an AI hallucinates, it's a limitation of the technology. When it's programmed to equivocate about a fascist regime that is disappearing people, it's less OK.



