That's a good point. In my head I was considering things like chess, where even though it took a long time for computers to reach superhuman performance, the issue was mainly compute: people basically knew how to do it algorithmically well before then (pruning tree search).
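For concreteness, here's a minimal sketch of what "pruning tree search" refers to - minimax with alpha-beta pruning. The `evaluate` and `children` functions are hypothetical stand-ins for a real engine's position evaluation and move generation; this is an illustration, not a chess engine.

```python
# Toy minimax with alpha-beta pruning. `evaluate(node)` scores a position and
# `children(node)` yields successor positions; both are placeholders here.
def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    moves = children(node)
    if depth == 0 or not moves:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in moves:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the opponent will never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in moves:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                         evaluate, children))
            beta = min(beta, value)
            if beta <= alpha:  # prune
                break
        return value
```

The core loop is simple; the hard part for chess was searching deep and fast enough, which is why more compute (plus a good evaluation function) eventually got there.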
I guess the underlying issue with my argument is that we really have no idea how large the search space is for finding AGI, so applying something like Bayes' theorem (which is basically my argument) tells you more about my priors than about reality.
That said, we know that human AGI was a result of an optimisation process (natural selection), and we have rudimentary generic optimisers these days (deep neural nets), so you could argue we've narrowed the search space a lot since the days of symbolic/tree search AI.
> we know that human AGI was a result of an optimisation process (natural selection)
I don't think this is obviously correct.
Three things:
1) Many actions we think of as "intelligence" are just short-cuts based on heuristics.
2) While there's probably an argument that problem solving is selected for, it's not at all clear to me how far this goes. There's little evidence that smarter people end up in more powerful positions, for example. It seems like there is perhaps a cut-off beyond which intelligence is just a side effect of the problem-solving ability that is actually useful.
3) Perhaps humans individually aren't (very?) intelligent, and it is only a society of humans that is.
(also perhaps human GI? Nothing artificial about it.)
> no idea how large the search space is for finding AGI, so applying something like Bayes' theorem (which is basically my argument) tells you more about my priors than about reality.
There are plenty of imaginable forms of intelligence that are often ignored during these conversations. One in common use is "an intelligent footballer", used in sport for someone who can read a game well. There are other, non-human examples too (dolphins, crows, parrots, etc.).
And then in the world of speculative fiction there's a range of different types of intelligence. Vernor Vinge wrote about intelligences whose motivations people couldn't comprehend (and Vinge is generally credited with the concept of the singularity). More recently, Peter Watts's Blindsight contemplates the separation of intelligence and sentience.
Basically I don't think your expression of Bayes' theorem had nearly enough possibilities in it.
> While there's probably an argument that problem solving is selected for, it's not at all clear to me how far this goes. There's little evidence that smarter people end up in more powerful positions, for example.
Evolution hasn't had enough time to adapt us to our newfangled lifestyle of the last few hundred years, or few thousand for that matter, and anyway in the modern world people are not generally competing on things affecting survival, but rather on cultural factors that affect the number of children we have.
Humans and most (all?) intelligent animals are generalists, which is why we need a big brain and intelligence - to rapidly adapt to a wide variety of ever-changing circumstances. Non-generalists such as herbivores and crocodiles don't need intelligence and therefore don't have it.
The main thing that we need to survive & thrive as generalists - and what evolution has evidently selected for - is the ability to predict, so that we can plan ahead and utilize past experience. Where will the food be, where will the water be in a drought, etc. I think active reasoning (not just LLM-like prediction/recall) would also play a large role in survival, and presumably parts of our brain have evolved specifically to support that, even if the CEO probably got his job based more on height/looks and golf handicap.
I strongly agree that predictive and planning ability is very important - things like agriculture rely on it and it must have been selected for at that point.
But the point has been made elsewhere that humans developed large brains long (1.5M years?) before agriculture, and for a long time the only apparent benefit was fire and flint tools.
The causal link here isn't well understood - there are other species that have large brains but haven't developed these skills. So it's not clear exactly which facets of intelligence are selected for.
> also perhaps human GI? Nothing artificial about it.
Lol, thanks, that's quite funny. I should spend less time on the internet.
> While there's probably an argument that problem solving is selected for, it's not at all clear to me how far this goes.
Yeah, I meant something much more low-brow, which is that _humans_, with all of our properties (including GI), are a result of natural selection. I'm not claiming GI was selected for specifically, but it certainly occurred as a side effect either way. So we know optimisation can work.
> There are plenty of imaginable forms of intelligence that are often ignored during these conversations.
I completely agree! I wish there was more discussion of intelligence in the broad sense in these threads. Even if you insist on sticking to humans, it's pretty clear that something like a company or a government operates very intelligently in its own environment (business, or politics), well beyond the influence of its individual constituents.
> Basically I don't think your expression of Bayes' theorem had nearly enough possibilities in it.
Another issue with Bayes in general is that you have a fixed probability space in mind when you use it, right? I can use Bayes to optimise my beliefs against a fixed ontology, but it says nothing about how or when to update the ontology itself.
And no doubt my ontology is lacking when it comes to (A)GI...
Jeremy Howard has said the same thing for example.
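To make the fixed-ontology point concrete, here's a toy sketch (hypothetical hypotheses and numbers, purely for illustration): Bayes' rule only redistributes belief over the hypotheses you wrote down in advance, and anything outside that set implicitly has probability zero forever.

```python
# Toy Bayes update over a fixed hypothesis set. The rule renormalises belief
# over the dict keys only; hypotheses missing from the ontology never appear.
def bayes_update(priors, likelihoods):
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical numbers, not a claim about actual probabilities.
priors = {"AGI is near": 0.3, "AGI is far": 0.7}       # no third option exists
likelihoods = {"AGI is near": 0.8, "AGI is far": 0.4}  # P(evidence | hypothesis)
print(bayes_update(priors, likelihoods))  # {'AGI is near': ~0.46, 'AGI is far': ~0.54}
```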
> What's your definition of AGI?
Things that we consider intelligent when humans do them.
Basically, we had all these definitions of AGI that we have since surpassed (the Turing test, etc.). Now we are finding more edge cases where we go "ahh... it can't do this, so therefore it isn't intelligent".
But the issue with that is that lots of humans can't do them either.
I think the ARC challenge is valid. But I'd also point out that there are substantial numbers of people who won't be able to solve its puzzles either (blind people, for example, as well as people who aren't good at puzzles). We make excuses there ("oh, we could explain it to a blind person", or for many physical problems things like "oh, Stephen Hawking couldn't solve this, but that's an exception"), but we don't allow the same excuses for machine intelligence.
I don't think the boundary of AGI is a hard line, but if you went back 10 years and showed people what we have now, I think they would say "Oh wow, you have AI!".
OK, so where we differ is in defining AGI. To me, and I think to most people, it refers to human-level (or beyond) general intelligence. Shane Legg from DeepMind has also explicitly defined it this way, but I'm not sure where others in the industry stand.
LLMs do have a broad range of abilities, so they're not narrow AI, but they're clearly not general intelligence (or at least not human-level), else they would not be failing or struggling on things that are easy for us - general means universal (not confined to specific types of problem), not just multi-capability.
The lack of reasoning ability, especially since it is architectural in origin, seems like more than a matter of patching up corner cases that aren't handled well. This shoring up of areas of weakness by increasing model size, adding targeted synthetic data, and post-training is mostly just addressing static inference, much like adding more and more rules to Cyc.
To make an LLM capable of reasoning, it needs to go beyond a fixed N layers of compute and support open-ended exploration, and probably replace gradient descent with a learning mechanism that can also be used at inference time. In a recent interview, John Schulman (one of the OpenAI co-founders) indicated that they hoped RL training on reasoning would improve it, but that is still going to be architecturally limited. You can learn a repertoire of reasoning templates that can be applied in gestalt fashion, but that's not the same as being able to synthesize a solution to a novel problem on the fly.
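As a rough illustration of the "fixed N layers of compute" point (a toy model in PyTorch, not any particular production LLM): a standard transformer runs exactly the same number of layers per token, no matter how hard the problem is.

```python
# Toy fixed-depth transformer. No causal mask or training loop; just enough to
# show the forward pass is always n_layers deep regardless of input difficulty.
import torch
import torch.nn as nn

class FixedDepthLM(nn.Module):
    def __init__(self, vocab=100, d_model=64, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.layers = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens):
        x = self.embed(tokens)
        x = self.layers(x)   # always exactly n_layers of compute,
        return self.head(x)  # whether the prompt is trivial or not

model = FixedDepthLM()
logits = model(torch.randint(0, 100, (1, 16)))  # same depth for any input
```

Chain-of-thought and RL on reasoning spend more tokens, which buys more sequential compute, but each step is still the same fixed-depth pass.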
LLMs are certainly amazing, and as you say, 10 years ago we would have regarded them as AI, but of course the same was true of expert systems and other techniques - we call things we don't know how to do "AI", then relabel them once we move past them to new challenges. Just as we no longer regard expert systems as AI, I doubt that in 20 years we'll regard LLMs (which in some regards are also very close to expert systems) as AI, certainly not AGI. AGI will be the technology that can replace humans in many jobs, and when we get there, LLMs will look very limited in hindsight.
To be clear, I think we have AGI (LLMs with tool use are generalized enough) and we are currently finding edge cases that they fail at.