
> There's no real reasoning. It seems that reasoning is just a feedback loop on top of existing autocompletion.

I like to say that if regular LLM "chats" are actually movie scripts being incrementally built and selectively acted out, then "reasoning" models add a stereotypical film noir twist, where the protagonist-detective narrates hidden things to himself.
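
Concretely, the "feedback loop" is just two autocomplete calls over the same growing transcript. Here's a toy sketch of that idea, not any vendor's actual API; the `complete` callable and the <think> delimiters are assumptions for illustration:

    from typing import Callable

    def reasoned_answer(question: str,
                        complete: Callable[[str, str], str]) -> str:
        # `complete(prompt, stop)` stands in for any plain autocomplete
        # call: it continues `prompt` until the model emits `stop`.
        # Act 1: the hidden noir monologue, narrated only to itself.
        transcript = question + "\n<think>\n"
        monologue = complete(transcript, "</think>")
        # Act 2: the same autocompleter, now conditioned on its own
        # hidden narration, acts out the visible reply.
        transcript += monologue + "\n</think>\nAnswer:"
        return complete(transcript, "\n")

The "reasoning" is just extra script pages the audience never sees; the visible answer is still ordinary next-token completion, now conditioned on them.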


