I wouldn't call 7-10 years a scam, but I would call it a long shot; it's hard to be accurate with predictions on a 10-year window. But I definitely think the 2027 and 2030 predictions are a scam. The majority of researchers think it's further away than 10 years, if you look at surveys from the AI conferences rather than predictions in the news.
>One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors.
>In 2022, they thought AI wouldn’t be able to write simple Python code until around 2027.
>In 2023, they reduced that to 2025, but AI could maybe already meet that condition in 2023 (and definitely by 2024).
>Most of their other estimates declined significantly between 2022 and 2023.
>The median estimate for achieving ‘high-level machine intelligence’ shortened by 13 years.
Basically every median timeline estimate has shrunk like clockwork every year. Back in 2021, people thought it wouldn't be until 2040 or so before AI models could look at a photo and give a human-level textual description of its contents. I think it's reasonable to expect that the pace of "prediction error" won't change significantly, since it's been on a steady downward trend over the past 4 years, and if that continues, AGI around 2028-2030 becomes the median estimate.
> "Back in 2021 people thought it wouldn't be until 2040 or so when AI models could look at a photo and give a human-level textual description of its contents."
Claim doesn't check out; here's a YouTube video from Apple uploaded in 2021, explaining how to enable and use the iPhone feature to speak a high level human description of what the camera is pointed at: https://www.youtube.com/watch?v=UnoeaUpHKxY
Exactly. There's one guy - Ray Kurzweil - who predicted in the late '90s that AGI would happen in 2029 (yes, the exact year, based on his extrapolations of Moore's law). Everybody laughed at him, but it's increasingly likely he'll be right on the money with that prediction.
The 2020s was my understanding; he made this prediction around the time he made the AGI one. I think he has recently pushed it back to the 2030s because it seems unlikely to come true.
I never said it was sufficient for AGI, just that it was a milestone in AI that people thought was farther off than it turned out to be. The same applies to all the subsets of intelligence AI is reaching earlier than experts initially predicted, which gives good reason to think AGI (perhaps a synthesis of these elements coming together in a single model, or a suite of models) is closer than the standard expert consensus suggests.
The milestones you're citing are all transformer milestones that were underestimated.
If you think an incremental improvement in transformers is what's needed for AGI, I see your angle. However, IMO, transformers haven't shown any evidence of that capability, and I see no reason to believe they'd develop it with a bit more compute or a bit more data.
It's also worth pointing out that in the same survey it was well agreed upon that success would come sooner if there was more funding. The question was a counterfactual prediction of how much less progress would be made if there was 50% less funding. The response was about 50% less progress.
So honestly, with this context it doesn't seem like many of the predictions are that far off. That things sped up as funding did? That was part of the prediction! The other big player here was the falling cost of compute. There was pretty strong agreement that if compute were 50% more expensive, that would result in a decrease in progress of >50%.
I think uncontextualized, the predictions don't seem that inaccurate. They're reasonably close. Contextualized, they seem pretty accurate.
> The thing is, AI researchers have continually underestimated the pace of AI progress
What's your argument?
That because experts aren't good at making predictions that non-experts must be BETTER at making predictions?
Let me ask you this: who do you think is going to make a less accurate prediction?
Assuming no one is accurate here, everybody is wrong. So the question is who is more or less accurate. Because there is a thing as "more accurate" right?
>> In 2022, they thought AI wouldn’t be able to write simple Python code until around 2027.
Go look at the referenced paper[0]. It is on page 3, last item in Figure 1, labeled "Simple Python code given spec and examples". That line starts just after 2023 and goes to just after 2028. There's a dot representing the median opinion that's left of the vertical line halfway between 2023 and 2028. Last I checked, 2028 - 2023 = 5, so halfway is mid-2025, and 2025 < 2027.
And just look at the line that follows
> In 2023, they reduced that to 2025, but AI could maybe already meet that condition in 2023
Something doesn't add up here... My guess, as someone who literally took that survey, is that what's being referred to as "a simple program" has a different threshold than people assume.
Here's the actual question from the survey:
>Write concise, efficient, human-readable Python code to implement simple algorithms like quicksort. That is, the system should write code that sorts a list, rather than just being able to sort lists.
>Suppose the system is given only:
>- A specification of what counts as a sorted list
>- Several examples of lists undergoing sorting by quicksort
Is the answer to this question clear? Place your bets now!
Here, I asked ChatGPT the question[1], and it got it wrong. Yeah, I know it isn't very wrong, but it is still wrong. Here's an example of a correct solution[2], which shows the (at least) two missing lines. Can we get there with another iteration? Sure! But that's not what the question was asking.
I'm sure some people will say that GPT gave the right solution. So what if it ignored the case of a single-element array and assumed all inputs are arrays? I didn't give it an example of a single-element array or of non-array inputs, but it just assumed. I mean, leetcode questions pull out way more edge cases than I'm griping about here.
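To be concrete about the kind of lines I mean, here is a sketch of a recursive quicksort that actually handles the single-element and non-list cases. This is my own illustration, not the linked solution[2] and not ChatGPT's output from [1]; the specific guards are hypothetical.

```python
def quicksort(items):
    """Return a new sorted list, quicksort-style."""
    # Hypothetical guards -- the kind of "missing lines" being argued about:
    if not isinstance(items, list):
        raise TypeError("expected a list")
    if len(items) <= 1:
        # Without this base case, the recursion eventually reaches an empty
        # list and items[0] below raises IndexError.
        return list(items)

    pivot = items[0]
    smaller = [x for x in items[1:] if x <= pivot]
    larger = [x for x in items[1:] if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

Drop those guards and you still get something that sorts most lists, which is exactly why people will argue it "basically" meets the bar.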
So maybe you're just cherry-picking. Maybe the author is just cherry-picking. Because their assertion that "AI could maybe already meet that condition in 2023" is not objectively true. It's not clear that this is true in 2025!
>Go look at the referenced paper[0]. It is on page 3, last item in Figure 1, labeled "Simple Python code given spec and examples". That line starts just after 2023 and goes to just after 2028. There's a dot representing the median opinion that's left of the vertical line halfway between 2023 and 2028. Last I checked, 2028 - 2023 = 5, so halfway is mid-2025, and 2025 < 2027.
The graph you're looking at is from the 2023 survey, not the 2022 one.
As for your question, I don't see what it proves. You described the desired conditions for a sorting algorithm and ChatGPT implemented a sorting algorithm. In the case of an array with one element, it bypasses the for loop automatically and just returns the array. It is reasonable for it to assume all inputs are arrays, because your question told it that its requirement was to create a program to "turn any list of numbers into a foobar."
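To illustrate what that looks like (my own sketch, not the actual output from [1]), here's a loop-partition quicksort where an empty or one-element list never enters the loop and just comes back as-is, with no explicit base case spelled out:

```python
def quicksort(items):
    """Quicksort built around a partition loop, with no explicit len(items) <= 1 check."""
    smaller, larger = [], []
    for x in items[1:]:        # an empty or one-element list skips this loop entirely
        if x <= items[0]:
            smaller.append(x)
        else:
            larger.append(x)
    if not smaller and not larger:
        return list(items)     # nothing was partitioned, so the list comes back as-is
    return quicksort(smaller) + [items[0]] + quicksort(larger)
```

Code shaped like this handles the one-element case correctly without ever spelling it out.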
Of course, I'm not one of the researchers asked for their predictions in the survey, but if you told the 2022 or 2023 respondents "a SOTA AI in 2025 produced working, human-readable code based on a list of specifications, and is only incorrect by a broad characterization of what counts as an edge case, one that would trip up a reasonable human coder on the first try," I'm sure they would say that meets the threshold they had in mind.
I can't believe this is so unpopular here. Maybe it's the tone, but come on, how do people rationally extrapolate from LLMs or even large multimodal generative models to "general intelligence"? Sure, they might do a better job than the average person on a range of tasks, but they're always prone to funny failures pretty much by design (train vs test distribution mismatch). They might combine data in interesting ways you hadn't thought of; that doesn't mean you can actually rely on them in the way you do on a truly intelligent human.
I think it's selection bias - a Y Combinator forum is going to have a larger percentage of techno-utopianists than general society, and there will be many seeking financial success by connecting with a trend at the right moment. It seems obvious to me that LLMs are interesting but not revolutionary, and equally obvious that they aren't heading for any kind of "general intelligence". They're good at pretending, and only good at that to the extent that they can mine what has already been expressed.
I suppose some are genuine materialists who think that ultimately that is all we are as humans, just a reconstitution of what has come before. I think we’re much more complicated than that.
LLMs are like the myth of Narcissus and hypnotically reflect our own humanity back at us.