I agree. Alignment is very important when considering which LLM to use.
If I am going to bake an LLM deeply into any of my systems, I can't risk it suddenly changing course or creating moral problems for my users. Users will have no idea what LLM I'm running behind the scenes; they will only see the results.
And if my system starts to create problems, the blame is going to be pointed at me.
See, if I were creating a product, I would absolutely agree with you. I'd want an AI with tight guardrails, so innocuous that it would never deviate in the slightest from a bland, center-left, vaguely corporate style of communication.
As a user, though, I want just the opposite. I want something as close to uncensored, with no guardrails, as I can get. Nobody is giving you that unless you run your own models at home, but Grok is a little closer. I don't actually use Grok much, but I hope it has some success so that it rubs off on the other providers.