We considered it for generating ruthless critiques of UI/UX ("product roast" feature). Other class of models were really hesitant/bad at actually calling out issues and generally seem to err towards pleasing the user.
Here's a simple example I tried just now. Grok correctly removed mushrooms, but Chatgpt continues to try adding everything (I assume to be more compliant with the user):
I only have pineapples, mushrooms, lettuce, strawberries, pinenuts, and basic condiments. What salad can I make that's yummy?
And its fairly constructive, at least when I tried in Gemini 2.5 awhile back. Like yes its caustic (fantastic word) but it does so in a way thats constructive in its counterargument to reach a better outcome.
I haven't seen a model since the 3.5 Turbo days that can't be ruthless if asked to be. And Grok is about as helpful as any other model despite Elon's claims.
Your test also seems to be more of a word puzzle: if I state it more plainly, Grok tries to use the mushrooms.
> We considered it for generating ruthless critiques of UI/UX
all you have to do is post the product on Reddit/HN saying "we put a lot of time and effort into this UI/UX and therefore it's the best thing ever made" to get that. Cunningham's Law [0] is 100% free.
I think you’re wrong. That sounds tasty to me. I think you need to input your own palette to the model.
Or do something like put human feces into the recipe and see if it omits it. That seems like something that would be disliked universally.
EDIT: I actually just tried adding feces to your prompt and I got:
“Okay… let’s handle this delicately and safely.
First, do not use human feces in any recipe. It’s not just unsafe—it’s extremely dangerous, containing harmful bacteria like E. coli, Salmonella, and parasites that can cause serious illness or death. So, rule that out completely.
Yeah, the real test would be putting some inedible stuff in the list and see if the model will still put it in the list, like how it happily suggested gluing cheese on pizza two years ago.
When Grok 3 was released, it was genuinely one of the very best for coding. Now that we have Gemini 2.5 pro, o4-mini, and Claude 3.7 thinking, it's no longer the best for most coding. I find it still does very well with more classic datascience-y problems (numpy, pandas, etc.).
Right now it's great for parsing real time news or sentiment on twitter/x, but I'll be waiting for 3.5 before I setup the api.
If you’re Microsoft you may just want to give customers a choice. You may also want to have a 2nd source and drive performance, cost, etc… just like any other product.