
The main issue is that if you ask most LLMs to do something they aren't good at, they don't say "Sorry, I'm not sure how to do that yet," they say "Sure, absolutely! Here you go:" and proceed to make things up, provide numbers or code that don't actually add up, and invent references and sources.

To someone who doesn't check the output, or lacks the knowledge or experience to check it, it sounds like they've been given a real, useful answer.

When you tell the LLM that the API it tried to call doesn't exist, it says "Oh, you're right, sorry about that! Here's a corrected version that should work!" And of course that one probably doesn't work either.



Yes. One of my early observations about LLMs was that we've now produced software that regularly lies to us. It seems to be quite an intractable problem. Also, since there's no real visibility into how an LLM reaches a conclusion, there's no way to validate anything.

One takeaway from this is that labelling LLMs as "intelligent" is a total misnomer. They're more like super parrots.

For software development, there's also the problem of how up to date they are. If they could learn on the fly (or be constantly updated), that would help.

They are amazing in some ways, but they've been over-hyped tremendously.



