Not trying to be sassy, but what definition of AGI are you using? I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks." Depending on which tasks you include and what percentage of humans you need to beat, we may already be there, or we may never get there. Several of these tests [1] have already been passed, and others look reasonably tractable. If Boston Dynamics cared about the Coffee Test, I bet they could pass it this year.
> I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks."
I think you're pointing out a bit of a chicken-and-egg situation here.
We have no idea how intelligence works, and I expect that will remain the case until we create it artificially. Because we don't know how it works, we put forward a variety of metrics that don't measure intelligence but instead approximate something that (we think) only an intelligent thing could do. Then engineers optimize their ML systems for that task, we blow past the metric, and everyone is left a bit disappointed that the result still doesn't feel intelligent.
Neuroscience has plenty of theories about how the brain works but lacks the means to validate them. It's incredibly difficult (not to mention deeply unethical) to probe a working brain at the necessary spatial and temporal resolution.
I suspect we'll resolve the chicken-and-egg situation when someone builds an architecture around a neuroscience theory and it feels right, or when neuroscientists find evidence for some specific ML architecture in the brain.
I get what you're saying, but I think "boiling frog" is more applicable than "chicken and egg."
You mention that people feel disappointed by ML systems because they don't feel intelligent. But I think that's just because these capabilities emerged one step at a time, and each marginal improvement doesn't blow your socks off. Personally, I'm amazed by a system that can answer PhD-level questions across all disciplines, pass the Turing Test, walk me through DIY plumbing, etc., etc., all at superhuman speed. Do we need neuroscience to progress before we call these things intelligent? People are polite to ChatGPT because it triggers the same social cues a human does. Some, for better or worse, end up in full-blown relationships with an AI. Doesn't that mean it "feels" right, at least to some?
We already know that among humans there are different kinds of intelligence. I'm reminded of the problem with standardized testing - kids can be monkeys or fish or iguanas, and we evaluate them all on tree-climbing ability. We're making the same mistake by evaluating computer intelligence with human benchmarks. Put another way: it's extremely vain to insist a system be human-like before we call it intelligent. If aliens visited us with incomprehensibly advanced technology, we'd be forced to conclude they were intelligent despite knowing absolutely nothing about how their intelligence works. To me that's proof by (hypothetical) example that we can call something intelligent based on capability alone, without any condition on internal mechanism.
Of course, that's just my two cents. Without a strict definition of AGI there's no way to achieve it, and right now everyone is free to define it however they want. I can see the argument that to define AGI you first have to understand the I (heh), but I think that puts an unfair boundary around the conversation.
[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...