AGI probably comes from neurosymbolic AI.
But LLMs could be the neuro-part of that.
On the other hand, LLM progress feels like bullshit: benchmark gaming and other problems have occurred. So either in two years we all hail our AGI/AMI (machine intelligence) overlords, or the bubble bursts.
Idk man, I use GPT to one-shot admin tasks all day long.
"Give me a PowerShell script to get all users with an email address, and active license, that have not authed through AD or Azure in the last 30 days. Now take those, compile all the security groups they are members of, and check out the file share to find any root level folders that these members have access to and check the audit logs to see if anyone else has accessed them. If not, dump the paths into a csv at C:\temp\output.csv."
Can I write that myself? Yes. In 20 seconds? Absolutely not. These things are saving me hours daily.
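For flavor, the stale-user half of that prompt expands to something like this (a rough sketch, assuming an on-prem domain with the ActiveDirectory RSAT module; the license check and the file-share/audit-log steps would need Microsoft Graph and ACL queries on top, so treat this as an outline, not the exact script GPT returned):

```powershell
# Rough sketch of the stale-user half of that prompt, assuming the on-prem
# ActiveDirectory RSAT module is installed. The license check, file-share
# crawl, and audit-log check would sit on top of this.
Import-Module ActiveDirectory

$cutoff = (Get-Date).AddDays(-30)

# Enabled users with a mail attribute that haven't logged on in 30 days
$stale = Get-ADUser -Filter 'Enabled -eq $true -and mail -like "*"' `
                    -Properties mail, LastLogonDate |
         Where-Object { $_.LastLogonDate -lt $cutoff }

# Union of the security groups those users belong to
$groups = $stale |
    ForEach-Object { Get-ADPrincipalGroupMembership $_ } |
    Where-Object { $_.GroupCategory -eq 'Security' } |
    Sort-Object SamAccountName -Unique

$stale | Select-Object SamAccountName, mail, LastLogonDate |
    Export-Csv -Path 'C:\temp\output.csv' -NoTypeInformation
```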
I used to save snippets like this and cobble the pieces together to get things done. I don't save any of them anymore because I can, for the most part, one-shot anything I need.
Just because it's not discovering new physics doesn't mean it's not insanely useful or valuable. LLMs have probably 5x'd me.
You can't possibly be using LLMs day to day if you think the benchmarks are solely gamed. Yes, there have been some cases, but real-life usage tracks the benchmarks overall. Gemini 2.5 Pro, for example, is absurdly more capable than models from a year ago.
The benchmarks aren't lying in the sense that LLMs have been improving, but the ones suggesting LLMs are still scaling exponentially don't reflect where the models truly are.
AI 2027 had a good hint at what LLMs cannot do: robotics. So perhaps the singularity is near after all, since this is pretty much my feeling too: LLMs are not Skynet. But in capitalism it is easier to pay people off than to engineer the torment nexus and threaten them into compliance. So it doesn't need killer robots plus factories if humans have better chances in life by cooperating with LLMs instead.
Amusingly enough, people writing stuff like the above come across, to my mind, as doing exactly what they are accusing LLMs of doing. :-)
And discussions of "is AI smarter than HI already, or isn't it" remind me of "remember how 'smart' an average HI is, then remember half are to the left of that center". :-O