
> If it's critical I get it right, I go to the wiki sources referenced

The problem with this is that humans will likely use it for low-key stuff, see that it works (or that the errors don't affect them too badly), and start using it for more serious stuff. It will all be fine until someone uses it for something genuinely serious and, some time later, it ends badly.

Basic human thinking is fairly primitive. If yesterday was sunny, the assumption is that today should be too. The more this happens, the higher your confidence. The problem is that this confidence emboldens people to gamble on it, and when it is not sunny anymore, terrible things happen. A lot of hype-driven behaviour is like that. Crypto was like that. The economic crisis of the late 00s was like that. And LLMs look set to be like that too.

It is going to take a major incident causing critical damage, or a high-profile series of deaths from misuse of an LLM, to give policymakers and business leaders around the world a reality check and get them looking at LLMs more critically. An AI autumn, if you wish. It is going to happen at some point. Maybe not in 2025 or 2026, but it will definitely happen.

You may argue that it is the fault of the human using the LLM/crypto/giving out the loans, but that really doesn't matter when those decisions affect others.



