I mean, is anyone really surprised by this? LLMs (as I understand them today) only predict the next token based on the previous tokens, so there's no mechanism that guarantees logical cohesion in what they produce.
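To make the "predict the next token from previous tokens" point concrete, here's a deliberately tiny sketch: a bigram counter that greedily emits whichever token most often followed the last one. It's a caricature of autoregressive generation (real LLMs use learned neural networks over long contexts, not raw counts), and the corpus and function names are just illustrative:

```python
from collections import Counter, defaultdict

# Toy autoregressive "model": the next token depends only on counts of
# what followed the previous token in the training text. This is a
# caricature of next-token prediction, not how a real LLM works inside.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: for each token, how often each token follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length):
    """Greedily pick the most frequent next token, one step at a time."""
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break  # no observed continuation; stop generating
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))
```

The point of the toy: each step conditions only on what came before, and nothing in the loop checks whether the growing sequence is globally coherent or true.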

