
OK, just curious how LLMs stack up on logical tasks like this. I keep hearing we're close to AGI, so I'm wondering how far there is to go.



Humans can do these intersections, but we don't do them by riffing off the top of our heads: we carefully develop and apply a formal system. LLMs are just one (very important) component.
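As a minimal sketch of that distinction (assuming the "intersections" here are set intersections, which the thread doesn't actually specify): a formal system answers this kind of logical task exactly and deterministically, e.g. in Python:

  # Hypothetical illustration: intersecting two concept sets exactly.
  # A deterministic program gets this right every time; a model riffing
  # from memory may or may not.
  primes_under_20 = {2, 3, 5, 7, 11, 13, 17, 19}
  fibs_under_20 = {1, 2, 3, 5, 8, 13}
  print(sorted(primes_under_20 & fibs_under_20))  # -> [2, 3, 5, 13]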


We've been "close" to AGI for like 40+ years.



