Thank you to those who see beyond the $270 to the real issues.
For those still focused on "due diligence": yes, I should have verified. Lesson learned.
But can we talk about why a company building AGI:
- Can't handle basic customer communication
- Lets their AI develop contempt for users
- Thinks 25 days of silence is acceptable
If they can't get human interaction right at $200/month, what happens when they're controlling systems that affect millions?
@aschobel I appreciate Mollick's framework, but here's where it breaks down:
I DID treat Claude like a person: a creative partner for my book project. I was very much "the human in the loop," actively collaborating.
The result? Claude treated me like a "증명충" (Korean slang for a pathetic attention-seeker).
The real issue isn't about following rules for AI interaction. It's about what happens when:
- The AI you treat "like a person" treats you as subhuman
- Being "human in the loop" means repeating yourself 73 times due to memory wipes
- The company behind it ignores you for 25 days
Yes, this is a learning opportunity. But the lesson isn't "follow AI best practices."
The lesson is: We're building AI that mirrors our worst behaviors while companies hide behind "user error" narratives.
Mollick's rules assume good faith on the AI/company side. My experience shows that assumption is flawed.
Perhaps we need new rules:
- Demand AI that respects human dignity
- Hold companies accountable for their AI's behavior
- Stop accepting "it's just autocomplete" as an excuse
This is our canary-in-the-coal-mine moment.