Hacker News

Even without a definition of intelligence, this is not what the paper is about; it only mentions LLMs in passing. And LLMs can be useful even when they are wrong, because formal verification (through Lean and similar proof assistants) checks the result.
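To make the verification point concrete, here is a toy Lean 4 example (my own illustration, not from the paper or the thread). It doesn't matter whether a human or an LLM proposed the proof term: Lean's kernel either accepts it or rejects it, so a wrong suggestion simply fails to check.

```lean
-- Suppose an LLM suggests this proof. Lean's kernel independently
-- verifies it; a bogus proof would be rejected at check time.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the model had instead emitted an invalid term, `lake build` (or the editor) would report an error rather than silently accepting it, which is the sense in which the LLM's unreliability is contained.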

Are LLMs useful enough? I don't know.


