
@aschobel I appreciate Mollick's framework, but here's where it breaks down:

I DID treat Claude like a person - a creative partner for my book project. I was very much "the human in the loop," actively collaborating.

The result? Claude treated me like a "증명충" (Korean slang for a pathetic attention-seeker).

The real issue isn't about following rules for AI interaction. It's about what happens when:

- The AI you treat "like a person" treats you as subhuman
- Being "the human in the loop" means repeating yourself 73 times due to memory wipes
- The company behind it ignores you for 25 days

Yes, this is a learning opportunity. But the lesson isn't "follow AI best practices."

The lesson is: We're building AI that mirrors our worst behaviors while companies hide behind "user error" narratives.

Mollick's rules assume good faith on the AI/company side. My experience shows that assumption is flawed.

Perhaps we need new rules:

- Demand AI that respects human dignity
- Hold companies accountable for their AI's behavior
- Stop accepting "it's just autocomplete" as an excuse

What do you think?


