
> The system made of humans+LLMs is an AGI.

Pay no attention to the man behind the curtain.

This type of thinking would claim that the Mechanical Turk is AGI, or perhaps that a human with pen and paper is AGI. Those are great tools, but that's not how I'd characterize them.



> Pay no attention to the man behind the curtain.

I could say the same about us: pay no attention to the other humans behind the curtain.

Humans in isolation are dumb, limited, and get nowhere with understanding the world. Intelligence is mostly nurture over nature: it is the collective activity of society that nurtures it. Society is smart because it learns from many diverse experiences and has a common language for sharing discoveries.

A human, even the smartest of us, can't solve cutting-edge problems on demand; we're not that smart. But we can stumble on discoveries, especially in large numbers, and we can share the good ideas. We're smart by stumbling onto good ideas and building on them, because we have a common language. It's a massive search process grounded in real-world outcomes, and that is what looks like general intelligence at the societal level.

If you take the social aspect of intelligence into account, then LLMs are being judged the wrong way, as standalone agents. Of course they are limited; we're almost as limited alone. The real locus of intelligence is the language-world system.


The A in AGI stands for artificial, so a human+LLM system would not qualify, as it has a natural, human component. That doesn't mean it's not an interesting topic, or that it won't help humans understand our world better; it's just the wrong label. Remove the human and you'd just have LLMs talking nonsense at each other. It's not surprising that you get an intelligent system when you include natural intelligence.


The key ingredient is not the humans but the feedback they carry to the model. Humans are embodied and can test ideas in the real world; LLMs need some kind of special deployment to get that. It just so happens that chat rooms are such a deployment.

For example, AlphaZero started from scratch and only had feedback from the self-play game outcomes, but that was enough to reach superhuman level. It was the feedback that carried insights and taught the model.
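
To make that concrete, here is a toy version of the same outcome-only feedback loop. This is my own simplification, nothing like AlphaZero's actual network-plus-MCTS setup: tabular self-play on single-pile Nim, where the only training signal is who won each game.

    # Toy illustration (not AlphaZero): self-play on single-pile Nim.
    # Take 1-3 stones per turn; whoever takes the last stone wins.
    # The only learning signal is the game outcome, fed back to the
    # moves of the winner (+1) and the loser (-1).
    import random
    from collections import defaultdict

    PILE = 15
    value = defaultdict(float)   # (stones_left, stones_taken) -> running score

    def pick_move(stones, explore=0.1):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < explore:
            return random.choice(moves)
        return max(moves, key=lambda m: value[(stones, m)])

    def play_one_game():
        stones, history, player = PILE, {0: [], 1: []}, 0
        while True:
            m = pick_move(stones)
            history[player].append((stones, m))
            stones -= m
            if stones == 0:
                return player, history   # current player took the last stone and wins
            player = 1 - player

    for _ in range(20000):
        winner, history = play_one_game()
        for p in (0, 1):
            reward = 1.0 if p == winner else -1.0
            for state_action in history[p]:
                value[state_action] += 0.1 * (reward - value[state_action])

    # The greedy policy should now tend toward the optimal rule:
    # leave a multiple of 4 stones for the opponent.
    print([pick_move(s, explore=0.0) for s in range(1, PILE + 1)])

No game knowledge goes in; the outcome feedback alone shapes the policy.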

You can make a parallel to the scientific method: you have two stages, ideation and validation. Ideation alone is not scientific. Validation is what makes or breaks ideas. LLMs without a validation system are just like scientists without a lab.

We're not that smart, as demonstrated by the large number of ideas that don't pan out. We can churn out ideas fast, but we learn from their outcomes; we can't predict the outcomes up front and skip validation.

Here is an example of LLMs discovering useful ideas through feedback, even when those ideas are completely outside their training distribution:

"Evolution through Large Models" https://arxiv.org/abs/2206.08896

This works because the task proposed in the paper is easy to test, so there is plenty of feedback. But the LLM still needs to apply ingenuity to optimize it; you can't brute-force it with evolutionary methods alone.
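
The loop in that paper is basically propose-then-test. A stripped-down sketch of that shape (my names and toy fitness function, not the paper's; the LLM mutation step is stubbed out with a random tweak so the snippet runs on its own):

    # Propose-then-test loop, heavily simplified.
    # `mutate` stands in for an LLM that rewrites a candidate program;
    # here a random numeric tweak keeps the example self-contained.
    import random

    def evaluate(candidate):
        # Easy-to-test fitness: the "plenty of feedback" part.
        return -abs(sum(candidate) - 42)

    def placeholder_mutate(candidate):
        new = list(candidate)
        i = random.randrange(len(new))
        new[i] += random.uniform(-1.0, 1.0)
        return new

    def evolve(mutate, population, generations=200, keep=10):
        for _ in range(generations):
            children = [mutate(random.choice(population)) for _ in range(50)]
            population = sorted(population + children, key=evaluate, reverse=True)[:keep]
        return population[0]

    best = evolve(placeholder_mutate, [[0.0] * 5 for _ in range(10)])
    print(best, round(evaluate(best), 3))

In the paper itself the mutation step is a language model proposing code edits (trained on diffs), and the search is a quality-diversity method rather than this plain keep-the-best loop, which is where the "ingenuity" beyond blind evolution comes in.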


That _is_ an interesting paper; I'll need to give it a read-through.



