
I find this way of looking at LLMs to be odd. Surely we are all aware that AI has always been probabilistic in nature. Very few people seem to go around talking about how their binary classifier is always hallucinating, but just sometimes happens to be right.
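To make that concrete, here's a minimal sketch (toy made-up data and a hypothetical spam-vs-not-spam setup, scikit-learn used purely for convenience): the classifier only ever emits probabilities, and we collapse them into confident-looking yes/no answers with an arbitrary threshold.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy, made-up data: feature = count of spammy tokens, label = 1 for spam
    X = np.array([[0], [1], [2], [5], [0], [1], [4], [6]])
    y = np.array([0, 0, 0, 1, 0, 0, 1, 1])

    clf = LogisticRegression().fit(X, y)

    # The model never "knows" anything; it only outputs probabilities.
    probs = clf.predict_proba([[2], [3]])[:, 1]

    # We turn those probabilities into confident-looking yes/no answers
    # with an arbitrary 0.5 cutoff, whether they're right or wrong.
    labels = (probs >= 0.5).astype(int)
    print(probs, labels)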

Just like every other form of ML we've come up with, LLMs are imperfect. They get things wrong. This is more of an indictment of yeeting a pure AI chat interface in front of a consumer than it is an indictment of the underlying technology itself. LLMs are incredibly good at doing some things. They are less good at other things.

There are ways to use them effectively, and there are bad ways to use them. Just like every other tool.

The problem is they are being sold as solutions to everything. Never write code / google search / talk to a lawyer / talk to a human / be lonely again, all here, under one roof. If LLM marketing stayed in its lane as a creator of convincing text, we'd be fine.

This happens with every hype cycle. Some people fully buy into the most extreme version of the hype, and other people polarize hard against it. The first group ends up offside because nothing is ever as good as the hype, but the second group often misses the forest for the trees.

There's no shortcut to figuring out what a new technology is actually useful for. It's very rarely the case that the answer is either "everything" or "nothing".


I think a lot of these problems will be solved by explicitly training on high-quality content, and probably by injecting some expert knowledge on top of that.

Yeah, but that's not easy, which is why it hasn't been done in any of the cases where it's needed.

>I find this way of looking at LLMs to be odd.

It's not about whether they're perfect or not. It's about how they arrive at the responses they do.

>Very few people seem to go around talking about how their binary classifier is always hallucinating, but just sometimes happens to be right.

Yeah, but no one is anthropomorphizing binary classifiers.
