> I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there's no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
I don't know what you mean by "as-yet circular assumption". (Though in the philosophy of knowledge, the Münchhausen trilemma says that every justification ultimately ends in either circular reasoning, infinite regress, or dogmatic assertion.)
> there's no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
Sounds like you're arguing against ASI, not AGI: G = General, like us; S = Super-, exceeding us.
That said, there's evidence that ASI is also possible: All the different ways in which we've made new minds that do in fact greatly exceed ours in capability.
When I was a kid, "intelligent" was the way we described people who were good at maths, skilled at chess, had good memories, had large vocabularies, knew many languages, etc. Even ignoring the arithmetical component of maths (where a Pi Zero exceeds all of humanity combined, even if every one of us were operating at the standard of the current world record holder), we have had programs solving symbolic maths for a long time; Chess (and Go, StarCraft, poker, …) all have superhuman AIs; and even before GPT, Google Translate already knew more languages than I can remember the names of (even if you filter the list to only those it handled to a higher standard than my second language), a few of them with augmented-reality image translation.
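To make that Pi Zero aside concrete, here's a back-of-envelope sketch. Every number in it (the board's multiply throughput, the record-level mental-calculation pace, the population figure) is my own rough assumption rather than anything measured, so read the result as an order-of-magnitude illustration only:

```python
# Back-of-envelope comparison: one Raspberry Pi Zero vs. all of humanity doing
# mental arithmetic at world-record pace. All figures are assumptions, not measurements.

PI_ZERO_MULTIPLIES_PER_SEC = 1e9  # assumption: ~1 GHz ARM core retiring roughly one multiply per cycle
POPULATION = 8e9                  # assumption: ~8 billion people
SECONDS_PER_PRODUCT_RECORD = 20   # assumption: a record-level mental calculator needs ~20 s per 8-digit x 8-digit product

# Humanity's combined rate if every person worked at record pace, one product at a time.
humanity_products_per_sec = POPULATION / SECONDS_PER_PRODUCT_RECORD  # ~4e8 products/s

ratio = PI_ZERO_MULTIPLIES_PER_SEC / humanity_products_per_sec
print(f"Pi Zero:                  {PI_ZERO_MULTIPLIES_PER_SEC:.1e} products/s")
print(f"Humanity at record pace:  {humanity_products_per_sec:.1e} products/s")
print(f"Ratio (Pi Zero / humanity): {ratio:.1f}x")
```

Even with deliberately conservative throughput assumed for the Pi Zero, the single board comes out ahead of eight billion record-level calculators under these numbers.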
And of course, for all the flaws current LLMs have in peak skill, most absolutely have superhuman breadth of knowledge: I can beat GPT-3.5 at software engineering, at maths and logic puzzles, and at writing stories, but that's basically it.
What we have not made is anything that is both human (or superhuman) in skill level and human-level in generality. But saying that having the two parts separately isn't evidence the combination can be done is analogous to looking at 1 gram of enriched uranium and a video of a 50 kg sphere of natural uranium being forced to implode spherically, and saying "there's no evidence that humans are capable of designing an atom bomb, or that it's possible to make an atom bomb that greatly exceeds chemical bombs in yield."