But the only thing I've seen in my life that most resembles what is happening with AI (the hype, its usefulness beyond the hype, vapid projects, solid projects, etc.) is the rise of the internet.
Based on this, I would say we're in the 1999-2000 era. If that's true, what does it mean for the future?
Well, there’s a fundamental difference: the Internet blew up because it enabled people to connect with each other more easily: culturally, economically, politically.
AI is more or less replacing people, not connecting them. In many cases this is economically valuable, but in others I think it just pushes the human connection into another venue. I wouldn’t be surprised if in-person meetup groups really make a comeback, for example.
So if a prediction about AI involves it replacing human cultural activities (say, the idea that YouTube will just be replaced by AI videos and real people will be left out of a job), then I’m quite bearish. People will find other ways to connect with each other instead.
Businesses are overly optimistic about AI replacing people.
For very simple jobs, like working in a call center? Sure.
But the vast majority of all jobs aren't ones that AI can replace. Anything that requires any amount of context sensitive human decision making, for example.
There's no way that AI can deliver on the hype we have now, and it's going to crash. The only question is how hard - a whimper or a bang?
As a customer, nothing infuriates me like an AI call center. If I have to call, it's because I have an account problem that requires me to speak with someone to resolve it.
I moved states and Xfinity was billing me for the month after I cancelled. I called, pressed 5 (or whatever) for billing. "It looks like your cable modem is disconnected. Power-cycling your modem resolves most problems. Would you like to do that now?" No. "Most problems can be resolved by power-cycling your modem, would you like to try that now?" No, my problem is about billing, and my modem is off-line because I CANCELLED MY SERVICE! They asked three more times (for a total of five) before I could progress. For reasons I have now forgotten, I had to call back several times, going through the whole thing again.
There are names for someone who pays no attention to what you say, and none of them are complimentary. AI is, fundamentally, an inhuman jerk.
(It turned out that they can only get their database to update once a month, or something, and despite the fact that nobody could help me, they issued me a refund in a month when their database finally updated. The local people wanted to help, but could not because my new state is in a different region and the regions cannot access each other.)
In a classically disruptive way, the internet provided an existing service (information exchange) in a form that was in many ways far less pleasant than existing channels (newspapers, libraries, phone). Remember that the early Internet was mostly text, very low resolution, uncredentialed, flaky, expensive, and too technical for most people.
The only reason that we can have such nice things today like retina display screens and live video and secure payment processing is because the original Internet provided enough value without these things.
In my first and maybe only ever comment on this website defending AI, I do believe that in 30 or 40 years we might see this first wave of generative AI in a similar way to the early Internet.
Connecting to other people, I think, is going to see a surge of desire. I've been feeling it, and many I talk to are feeling it too. One strange piece of anecdata is that I've been working out (with my son) at a gym for the past 3 years, very consistently. Everybody has always just done their thing, almost entirely individuals working out, with a handful of pairs. I'm old, so I remember when working out at gyms would be a much more social situation, but times change.
There are a number of regulars that we've seen there for years now. We've barely interacted with them. Suddenly, in the past week, through a chance interaction, two of them have individually talked to us and ended up introducing themselves. Plus I see other people talking more and more.
I think we've hit an on-your-own saturation point and the pendulum might swing the other way.
But I'm really staying away from most "predict the AI future" exercises because we just don't know where it is going. I've read thousands of sci-fi books, nonfiction, and futurist writing, and any number of options are open; no one, and I mean no one, knows what will happen with any kind of certainty.
There's also an element of economic drag caused by AI that did not exist with the internet: for example, the proliferation of slop content and AI-driven scams. The internet did not almost immediately make previous modes of doing business more difficult but instead slowly replaced them with more efficient alternatives. AI has already enshittified things like Google search and product reviews. In many ways AI is undoing some of the gains of the internet without providing a replacement.
Classic repeat of the Gartner Hype Cycle. This bubble pop will dwarf the dot-bomb era. There's also no guarantee that the "slope of enlightenment" phase will amount to much beyond coding assistants. GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives.
This bubble also seems to combine the worst of the two huge previous bubbles: the hype of the dot-com bubble plus the housing bubble, in the form of massive data center buildout funded by massive debt and securities bundling.
These things, as they are right now, are essentially at the performance level of an intern or recent graduate in approximately all academic topics (but not necessarily practical ones), and they can run on high-end consumer hardware. The learning curves suggest to me limited opportunities for further quality improvements within the foreseeable future… though "foreseeable future" here means "18 months".
I definitely agree it's a bubble. Many of these companies are priced with the assumption that they get most of the market; they obviously can't all get most of the market, and since these models are accessible to the upper end of consumer hardware, there's a reasonable chance none of them captures much of the market at all: open models will be zero cost, and the inference hardware is something you had anyway, so it all runs locally.
Other than that, to the extent that I agree with you that:
> GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives
I do so only in that not everyone wants (or would even benefit from) a book-smart, no-practical-experience intern, and not all economic tasks are such that book-smarts count for much anyway. This set of AI advancements didn't suddenly cause all car manufacturers to agree that this was the one weird trick holding back level 5 self-driving, for example.
But for those of us who can make use of them, these models are already useful (and, like all power tools, dangerous when used incautiously) beyond merely being coding assistants.
> GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives
No, but GenAI in its current form is insanely useful and is already shifting productivity into a higher gear. Even without 100% reliable "agentic" task execution and AGI, this is already some next-level stuff, especially for non-technical people.
How do people trust the output of LLMs? In the fields I know about, sometimes the answers are impressive, sometimes totally wrong (hallucinations). When the answer is correct, I always feel like I could have simply googled the issue and some variation of the answer lies deep in some pages of some forum or stack exchange or reddit.
However, in the fields I'm not familiar with, I'm clueless how much I can trust the answer.
1. For coding (and the reason coders are so excited about GenAI), it can often be 90% right, and it's doing all of the writing and researching for me. If I can shift my work from actually typing/writing to reviewing/editing, that's a huge improvement day to day. The other 10% can be covered by tests or human-written checks that verify correctness (rough sketch after this list).
2. There are cases where 90% right is better than the current state. Go look at Amazon product descriptions, especially things sold from Asia in the United States. They're probably closer to 50% or 70% right. An LLM being "less wrong" is actually an improvement, and while you might argue a product description should simply be correct, the market already disagrees with you.
3. For something like a medical question, the magic is really just taking plain language questions and giving concise results. As you said, you can find this in Google / other search engines, but they dropped the ball so badly on summaries and aggregating content in favor of serving ads that people immediately saw the value of AI chat interfaces. Should you trust what it tells you? Absolutely not! But in terms of "give me a concise answer to the question as I asked it" it is a step above traditional searches. Is the information wrong? Maybe! But I'd argue that if you wanted to ask your doctor about something that quick LLM response might be better than what you'd find on Internet forums.
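To make point 1 concrete, here's a minimal sketch of what "cover the other 10% with tests" looks like in my workflow. The parse_duration function is a hypothetical stand-in for AI-drafted code; the test is the part I'd write and review myself:

```python
# Rough sketch of the review/verify split: the function stands in for
# AI-drafted code (hypothetical example); the test is the human-written
# part that covers the remaining 10%.
import re

def parse_duration(text: str) -> int:
    """Convert strings like '1h30m' or '45m' into seconds. (AI-drafted stand-in.)"""
    total = 0
    for value, unit in re.findall(r"(\d+)([hms])", text):
        total += int(value) * {"h": 3600, "m": 60, "s": 1}[unit]
    return total

def test_parse_duration():
    # Cheap to write, and it catches the cases a first draft tends to get wrong.
    assert parse_duration("1h30m") == 5400
    assert parse_duration("45m") == 2700
    assert parse_duration("10s") == 10
    assert parse_duration("") == 0  # edge case the model might have missed

if __name__ == "__main__":
    test_parse_duration()
    print("all checks passed")
```

The point isn't the example itself; it's that reviewing and testing a mostly-right draft is a different (and for me faster) job than writing everything from scratch.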
One of the key use cases for me other than coding is as a much better search engine.
You can ask a really detailed and specific question that would be really hard to Google, and o3 or whatever high end model will know a lot about exactly this question.
It's up to you as a thinking human to decide what to do with that. You can use that as a starting point for in depth literature research, think through the arguments it makes from first principles, follow it up with Google searches for key terms it surfaces...
There's a whole class of searches I would never have done on Google because they would have taken half a day to do properly, but that you can do in fifteen minutes like this.
I went through my ChatGPT history to pick a few examples that I'm both comfortable sharing and that illustrate the use-case well:
> There are some classic supply chain challenges such as the bullwhip effect. How come modern supply chains seem so resilient? Such effects don't really seem to occur anymore, at least not in big volume products.
> When the US used nuclear weapons against Japan, did Japan know what it was? That is, did they understood the possibility in principle of a weapon based on a nuclear chain reaction?
> As of July 2025, equities have shown a remarkable resilience since the great financial crisis. Even COVID was only a temporary issue in equity prices. What are the main macroeconomic reasons behind this strength of equities.
> If I have two consecutive legs of my air trip booked on separate tickets, but it's the same airline (also answer this for same alliance), will they allow me to check my baggage to the final destination across the two tickets?
> what would be the primary naics code for the business with website at [redacted]
I probably wouldn't have bothered to search any of these on Google because it would just have been too tedious.
With the airline one, for example, the goal is to get a number of relevant links directly to various airlines' official regulations, which o3 did successfully (along with some IATA regulations).
For something like the first or second, the goal is to surface the names of the relevant people / theories involved, so that you know where to dig if you wish.
But I've seen some harnesses (e.g., whatever Gemini Pro uses) do impressive things. The way I model it is like this: an LLM, like a person, has a chance of producing wrong output. A quorum of people plus some experiments/study usually arrives at a "less wrong" answer. The same can be done with an LLM, and to an extent is being done by things like Gemini Pro and o3 with their agentic "eyes" and "arms". As the price of hardware and compute goes down (if it does, which is a big "if"), harnesses will become better by being able to deploy more computation, even if the LLM models themselves remain at their current level.
Here's an example: there is a certain kind of work we haven't quite figured out how to have LLMs do: creating frameworks and sticking to them, e.g. creating and structuring a codebase in a consistent way. But, in theory, if one could have 10 instances of an LLM "discuss" whether a function in code conforms to an agreed convention, well, that would solve that problem.
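A rough sketch of that quorum idea, with a hypothetical ask_llm() stand-in (no particular provider's API implied) so only the voting logic is visible:

```python
# Minimal sketch of a quorum harness: ask several independent LLM instances
# the same yes/no question and take the majority vote. ask_llm() is a
# hypothetical stand-in that here just simulates a noisy judge.
import random
from collections import Counter

def ask_llm(question: str) -> str:
    """Stand-in for one LLM call; a real harness would query a model here."""
    return random.choices(["yes", "no"], weights=[0.7, 0.3])[0]

def quorum_judgment(question: str, n: int = 10) -> str:
    """Collect n independent judgments and return the majority answer."""
    votes = Counter(ask_llm(question) for _ in range(n))
    return votes.most_common(1)[0][0]

print(quorum_judgment("Does this function follow the project's naming convention?"))
```

The reliability here comes from the harness rather than the model, which is why more (cheaper) compute matters even if the models themselves plateau.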
There are also avenues of improvement that open up with more computation. Namely, today we use "one-shot" models... you train them, then you use them many times. But the structure, the weights of the model, aren't being retrained on the output of their actions. Doing that on a per-model-instance basis is also a matter of having sufficient computation at some affordable price. Doing that on a per-model basis is practical already today; the only limitations are legal terms, NDAs, and regulation.
I say all of this objectively. I don't like where this is going; I think this is going to take us to a wild world where most things are gonna be way tougher for us humans. But I don't want to (be forced to) enter that world wearing rosy lenses.
I think the primary benefit of LLMs for me is as an entrypoint into an area I know nothing about. For instance, if I’m building a new kind of system which I haven’t built before, then I’m missing lots of information about it — like what are the most common ways to approach this problem, is there academic research I should read, what are the common terms/paradigms/etc. For this kind of thing LLMs are good because they just need to be approximately correct to be useful, and they can also provide links to enough primary sources that you can verify what they say. It’s similar if I’m using a new library I haven’t used before, or something like that. I use LLMs much less for things that I am already an expert in.
We place plenty of trust in strangers to do their jobs to keep society going. What’s their error rate?
It all comes down to the track record, perception, and experience of the LLMs. Kinda like self-driving cars.
Strangers have an economic incentive to perform. AI does not. What AI program is currently able to modify its behavior autonomously to increase its own profitability? Most if not all current public models are simply chat bots trained on old data scraped off the web. Wow, we have created an economy based on cultivated Wikipedia and Reddit content from the 2010s, linked together by bots that can make grammatical sentences and cogent-sounding paragraphs. Isn't that great? I don't know; about 10 years ago, before Google broke itself, I could find information on any topic easily and judge its truth using my grounded human intelligence better than any AI today.
For one thing, AI cannot even count. Ask Google's AI to draw a woman wearing a straw hat. More often than not the woman is wearing a well-drawn hat while holding another in her hand. Why? Frequently she has three arms. Why? Tesla's self-driving vision couldn't differentiate between the sky and a light-colored tractor trailer turning across traffic, resulting in a fatality in Florida.
For something to be intelligent it needs to be able to think and to evaluate the correctness of its own thinking. Not just regurgitate old web scrapings.
It is pathetic, really.
Show me one application where black-box LLM AI is generating a profit that an effectively trained human or a rules-based system couldn't do better.
Even if AI is able to replace a human in some tasks this is not a good thing for a consumption based economy with an already low labor force participation rate.
During the first industrial revolution human labor was scarce, so machines could economically replace and augment labor and raise standards of living. In the present day labor is not scarce, so automation is a solution in search of a problem, and a problem itself if it increasingly leads to unemployment without universal basic income to support consumption. If your economy produces too much with nobody to buy it, then economic contraction follows.

Already young people today struggle to buy a house. Instead of investing in chat bots, maybe our economy should be employing more people in building trades and production occupations, where they can earn an income to support consumption, including of durable items like a house or a car. Instead, because of the FOMO and hype about AI, investors are looking for greater returns by directing money toward sci-fi fantasy, and when that doesn't materialize an economic contraction will result.
My point is humans make mistakes too, and we trust them, not because we inspect everything they say or do, but from how society is set up.
I'm not sure how up to date you are, but most AIs with tool calling can do math. Image generation hasn't been generating weird stuff since last year. Waymo sees >82% fewer injuries/crashes than human drivers[1].
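For what it's worth, here's a rough sketch of why tool calling sidesteps the "AI can't count" problem: the model emits a structured call and the harness does the arithmetic exactly. Everything here (model_emit(), the tool names) is hypothetical, not any particular vendor's API:

```python
# Sketch of a tool-calling loop: the model decides which tool to invoke,
# the harness executes it deterministically, and the result goes back to
# the model to phrase the final answer.
import json

TOOLS = {
    "add": lambda a, b: a + b,
    "count_chars": lambda s: len(s),
}

def model_emit(prompt: str) -> str:
    # Hypothetical stand-in: a real model would emit JSON like this when
    # it decides the question needs a tool rather than a guess.
    return json.dumps({"tool": "count_chars", "args": {"s": "strawberry"}})

def run_turn(prompt: str) -> int:
    call = json.loads(model_emit(prompt))
    result = TOOLS[call["tool"]](**call["args"])
    # In a real harness the result is fed back to the model for the final reply.
    return result

print(run_turn("How many letters are in 'strawberry'?"))  # 10
```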
RL _is_ modifying its behavior to increase its own profitability, and companies training these models will optimize for revenue when the wallet runs dry.
I do feel the bit about being economically replaced. As a frontend-focused dev, nowadays LLMs can run circles around me. I'm uncertain where we go, but I would hate for people to have to do menial jobs just to make a living.
The verifier model in your head is actually good enough, and not random. It knows how the world works and subconsciously applies a lot of sniff tests it has learned over the years.
Sure, a lot of answers from LLMs may be inaccurate, but you mostly identify them as such because your ability to verify (using various heuristics) is good too.
Do you learn from asking people advice? Do you learn from reading comments on Reddit? You still do without trusting them fully because you have sniff tests.
The problem isn't that content is AI generated, the problem is that the content is generated to maximize ad revenue (or some other kind of revenue) rather than maximize truth and usefulness. This has been the case pretty much since the Internet went commercial. Google was in a lot of ways created to solve this problem and it's been a constant struggle.
The problem isn't AI, the problem is the idea that advertising and PR markets are useful tools for organizing information rather than vaguely anarchist self-organizing collectives like Wikipedia or StackOverflow.
That's where I disagree. The noise is not that high at all and is vastly exaggerated. Of course, if you go too deep into niche topics you will experience this.
Yeah niche topics like the technical questions I have left over after doing embedded development for more than a decade. Mostly questions like “can you dig up a pdf for this obsolete wire format.” And google used to be able to do that but now all I get is hundreds of identical results telling me about the protocol’s existence but nothing else.
One of the most amusing things to me is the amount of AI testimonials that basically go "once I help the AI over the things I know that it struggles with, when it gets to the things I don't know, wow, it's amazing at how much it knows and can do!" It's not so much Gell-Mann amnesia as it is Gell-Mann whiplash.
The people who use LLMs to write reports for other people who use LLMs to read said reports? It may alleviate a few pain points, but it generates an insane amount of useless noise.
Considering they were already creating useless noise, they can create it faster now.
But once you get out of the tech circles and bullshit jobs, there is a lot of quality usage, as much as there is shit usage. I've met everyone from lawyers and doctors to architects and accountants who are using some form of GenAI actively in their work.
Yes, it makes mistakes, yes, it hallucinates, but it gets a lot of fluff work out of the way, letting people deal with actual problems.
I would love to figure out why I don't see this at my company. People still shipping at the same rate as before, customers bringing up more and more bugs, problems that require planning for scale are not being thought about (more bugs), zero tests still being written. All the code I am seeing generated is like one shot garbage, with no context around our system or the codebase as a whole.
I fully agree that there will be a pop; there must be. Current valuations and investments are based on monumentally society-destroying assumptions. But with every disappointing, incremental, non-revolutionary model generation, the chance increases that the world at large realizes that those assumptions are wrong.
What should I do with my ETF? Sell now, wait for the inevitable crash? Be all modern long term investment style: "just keep invested what you don't need in the next 10 years bro"?
If you're sure enough that there is going to be a big crash I would move the money into gold, bonds or other more secure assets. After a crash you can invest again.
I don't know why Buffett sold a lot of shares over the last few years to sit on a huge pile of cash, but I could guess.
The job market looks like shit, people have no money to buy stuff, and credit card debt is skyrocketing. When people can't buy stuff it is bad for the economy. Even if AI is revolutionary, we would still need people spending money to keep the economy going, and with more AI taking jobs that wouldn't happen.
If AI doesn't work out, the market is going to crash: the only companies keeping the market growing are the ones that will wipe out all that growth.
No matter how I look at it I don't see a thriving market.
Last week I tested out the agent mode of ChatGPT by asking it to plan a week's meals, then add all the ingredients to an online shopping basket for me. It worked pretty much flawlessly; the only problem was it ran out of time before it could add the last few ingredients, which doesn't exactly seem like an unsolvable problem.
Exactly, it took an evolution, but there was no discontinuity. At some point, things evolved enough for people like Tim O'Reilly to say that we now have "Web 2.0", but it was all just small steps by people like those of us here on this thread, gradually making things better and more reliable.
"It is difficult to make predictions, especially about the future" - Yogi Berra (?)
But let’s assume we can for a moment.
If we’re living in a 1999 moment, then we might be on a curve like the Gartner Hype Cycle. And I assume we’re on the first peak.
Which means that the "trough of disillusionment" will follow.
This is the phase in the Hype Cycle, following the initial peak of inflated expectations, where interest in a technology wanes as it fails to deliver on early promises.
It definitely feels identical. We had companies that never had any hope of being profitable (or even of doing anything related to the early internet to begin with), but put .com in your name and suddenly you were flooded with hype and cash.
Same thing now with AI. The capital is going to dry up eventually; no one is profitable right now, and it's questionable whether they can be at a price consumers would be willing or able to pay.
Models are going to become a commodity. Just being an "AI company" isn't a moat, and yet every one of the big names is being invested in as if it is going to capture the entire market, or as if there will even be a market in the first place.
Investors are going to get nervous, eventually, and start expecting a return, just like .com. Once everyone realizes AGI isn't going to happen, and realize you aren't going to meet the expected return running a $200/month chatbot, it'll be game over.
I lived through dot-com, and there are so many parallels. A large amount of money is chasing a dream that won't materialize in the near term.
Recent deja-vus are articles like this:
"The 20-Somethings Are Swarming San Francisco’s A.I. Boom" and
"Tech Billboards Are All Over San Francisco. Can You Decode Them?"
If I recall correctly, after the 2000 bust, folks fled Silicon Valley, abandoning their leased BMWs at SFO airport. 101 had no traffic jams. I wonder if that will repeat this time around.
Almost three years ago I gave a talk to a Staff+ group where I work and told them that AI felt like the internet in the 1995 timeframe. It still does, and I agree we seem to be in the late 90s now. However, we need to be careful: this feels similar in a world-changing way, but it is different on so many levels. Amongst other things, the world is far more tech-savvy than in the 90s. Plus, nearly everyone alive and making decisions now was alive and making decisions in the 90s. That's why we're seeing this insane "all in" mentality: there are almost certainly going to be new Googles/Facebooks/Apples/Amazons that come out of this, or at least that is what capital believes.
I hate, hate, hate the Gartner Hype Cycle thing, it's just a dumb statement that things can get overhyped. Instead, I see it as a Cambrian explosion of everybody everywhere trying to find use cases for this new tech. Nearly all of them will fail, but many won't and we'll have a different world in a few years. There will be crashes and crazes and (hopefully) generally positive changes in the world.
It could be Terminator or Handmaid's Tale or 1984, too. The main difference between the 90s and now is that the world is much more educated on the downsides of tech and so there isn't that giddy sense of coolness from the 90s, and that bums me out.