I did not claim the state of the art was better at all forms of reasoning than all humans. I claimed the architecture isn't going to stop it from being so in the future, but I guess constructing a straw man is always easier, right?
There are benchmarks that rightly show the SOTA behind average human performance in other aspects of reasoning, so why are you fumbling so much to demonstrate this with unaided iterative computation? It's your biggest argument, so I thought you'd have something more substantial than "It's limited, bro!"
You cannot even demonstrate this today, never mind with some hypothetical scaled-up model.
> so why are you fumbling so much to demonstrate this with unaided iterative computation
Well, you see, I've been a professional developer for the last 45 years, and often, gasp, think for long periods of time before coding, or even writing things down. "Look ma, no hands!"
I know this will come across as an excuse, but the thing is I assumed you were also vaguely familiar with things like software development, or other cases where humans think before acting, so I evidently did a poor job of convincing you of this.
I also assumed (my bad!) that you would at least know some people who were semi-intelligent and wouldn't be hopelessly confused about farmers and chickens, but now I realize that was a mistake.
Really, it's all on me.
I know that "just add more rules", "make it bigger" didn't work for CYC, but maybe as you suggest "increase N" is all that's needed in the case of LLMs, because they are special. Really - that's genius! I should have thought of it myself.
I'm sure Sam is OK, but he'd still appreciate you letting him know he can forget about Q* and Strawberries and all that nonsense, and just "increase N"! So much simpler and cheaper than hiring thousands of developers to try to figure this out!
Maybe drop Yann LeCun a note too - tell him that the Turing Award committee are asshats, and that he is too, and that LLMs will get us all the way to AGI.
>Well, you see, I've been a professional developer for the last 45 years, and often, gasp, think for long periods of time before coding, or even writing things down. "Look ma, no hands!".
>I know this will come across as an excuse, but the thing is I assumed you were also vaguely famililar with things like software development, or other cases where human's think before acting, so I evidentially did a poor job of convincing you of this.
Really, you have the same train of thought for hours on end?
When you finish even your supposed hours-long spiel, do you just proceed to write every line of code that solves your problem, just like that? Or do you write and think some more?
More importantly, are LLMs unable to produce the kind of code humans spend a train of thought on?
>Maybe drop Yan LeCun a note too - tell him that the Turing Award committee are asshats, and that he is too, and that LLMs will get us all the way to AGI.
You know, the appeal-to-authority fallacy is shifty at the best of times, but it's straight-up nonsensical when the authorities themselves have no consensus on what you're appealing to.
Like, great, you mentioned LeCun. And I can just as easily bring in Hinton, Norvig, or Ilya. Now what?
I think Sam will be just fine.