
Hm, I don't buy this. The statistics in the blog post announcing the new Claude models (this submission) show a significant tendency to refuse benign questions.

Just the fact that there's an x% risk it won't answer unnecessarily complicates any use case.

I'd prefer that the bots weren't anthropomorphized at all: no more "I'm your chatbot assistant." That's just a marketing gimmick. It's much easier to assume something is intelligent if it has a personality.

Imagine if the models weren't framed as AI at all. What if they were framed as "flexi-search", a modern search engine that predicts content it hasn't yet indexed?



