
I agree that a raw autoregressive LLM with just a single output is (almost necessarily) less capable than humans. Not only can we ponder (chain-of-thought style), we also have various means of checking our work. For a coding problem, we can write the code, see if it compiles, runs, and passes our tests, and if it doesn't, we can read the error messages, add debugging, try some changes, and iterate until we hopefully reach a solution (or else give up). The "single output" constraint denies all of that.
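To make that loop concrete, here is a minimal sketch of it in Python. `ask_model` is a stand-in for whatever LLM call you would actually use, and the details (a temp file, running `python` as a subprocess, a fixed attempt limit) are illustrative assumptions, not a particular implementation.

    import os
    import subprocess
    import tempfile

    def ask_model(prompt: str) -> str:
        # Stand-in: in practice this would be an LLM API call.
        raise NotImplementedError

    def try_to_solve(task: str, max_attempts: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            code = ask_model(task + feedback)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            # Run the candidate (assumed to contain its own tests).
            result = subprocess.run(["python", path], capture_output=True, text=True)
            os.unlink(path)
            if result.returncode == 0:
                return code  # it compiles, runs, and the tests pass
            # Feed the error output back and try again.
            feedback = "\n\nThe previous attempt failed with:\n" + result.stderr
        return None  # give up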

I don't think anyone is actually expecting "AGI" to be achieved by a model labouring under such extreme limitations as a single-output autoregressive LLM. If instead we are talking about an AI agent with not just chain of thought, but also function calling to invoke various tools (including writing and running code), the ability to store and retrieve information via retrieval-augmented generation (RAG), etc – well, current versions of that aren't "AGI" either, but it seems much more plausible that they might eventually evolve into it.
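Here is a minimal sketch of that kind of agent loop, under the assumption that the model can either request a tool call or return a final answer. `model_step` and the tool bodies are placeholders, not any particular vendor's API.

    def search_notes(query: str) -> str:
        # Stand-in for a RAG lookup against a document store.
        return "retrieved context for: " + query

    def run_python(code: str) -> str:
        # Stand-in for a sandboxed code runner.
        return "stdout/stderr from executing the code"

    TOOLS = {"search_notes": search_notes, "run_python": run_python}

    def model_step(history: list[dict]) -> dict:
        # Stand-in: ask the LLM for its next action, e.g.
        # {"tool": "run_python", "args": {"code": "..."}} or {"answer": "..."}
        raise NotImplementedError

    def agent(task: str, max_steps: int = 10) -> str:
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            action = model_step(history)
            if "answer" in action:
                return action["answer"]
            result = TOOLS[action["tool"]](**action["args"])
            history.append({"role": "tool", "content": result})
        return "gave up"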

I don't think we need to invoke Turing or Gödel in order to make the point I just made, and I think doing so distracts with irrelevancies more than it enlightens.


