If it were any good I would assume there would be no need to hype it up.
My theory is that LLMs will get commoditized within the next year. The edge that OpenAI had over the competition is arguably lost. If the trend continues, we will be looking at commodity-like inference pricing, where only the most efficient providers like Cerebras and Groq will actually be making money in the end.
I don't think so, look at how Sora changed every... Well, Operator was a game changer for... Hmm, but what about GPT-4.5 or PhD-level o3... o3-pro...? I mean, the $10k/month agents are definitely coming... any day now...
With this comparison you are saying the original iPhone was like version 6 of a well-established product line, in a market that had seen major releases a few times a year for about three years.
That's certainly not how the first iPhone is usually described.
"My theory is that LLMs will get commoditized within the next year."
Incredibly bad theory. It's like saying every LLM is the same because they can all talk, even though the newer ones keep smashing through benchmarks the older ones couldn't touch. And now that happens quarterly instead of yearly, so you can't even claim it's slowing down.
At the moment most of the dollars are coming from consumer subscriptions (business subscriptions included). That's where the valuations are getting pegged, and most API dollars are probably seen as experimental. Model quality matters, but product experience is what is driving revenue. In that sense OpenAI is doing quite well.
If that is the case, the $300 billion question is whether someone can create a product experience that is as good as OpenAI’s.
In my mind there are really three dimensions they can differentiate on: cost, speed, and quality. Cost is hard because they’re already losing money. Speed is hard because differentiation would require better hardware (more capex).
For many tasks, perhaps even a majority right now, the quality of free models is approaching good enough.
OpenAI could create models which are unambiguously more reliable than the competition, or ones which are able to answer questions no other model can. Neither of those has happened yet afaik.
Competitors just need to wait for OpenAI to burn through its free money and dig itself into a debt hole it can't easily climb out of, then offer a similar experience at a price that barely breaks even or turns a tiny profit, and they win.
> three dimensions they can differentiate on: cost, speed, and quality
The fourth dimension is likely to be the most powerful of the differentiators: specificity.
Think Cursor or Lovable, but tailored for other industries.
There's a weird dynamic where engineers tend to be highly paid, but the people who employ engineers are hesitant to spend much on tools that make those engineers more productive. Hence all of Cursor's magic only gets its base price to roughly half of Intercom's entry-level fee, and Intercom is a tool for people who do customer support.
LLMs applied to high-value industries outside of tech are going to be a big differentiator. And the companies that build such solutions will not have the giant costs associated with building the next foundation model, or potentially operating any models at all.
The fact that xAI exists largely out of Elon Musk's personal spite and still produced a top-performing model certainly implies that model training isn't any kind of moat. It's very expensive, but not mysterious.