This is wrong. First mover advantage is strong. This is why OpenAI is much bigger than Mixtral despite what you said.
First mover advantage acquires and keeps subscribers.
No one really cares if you matched GPT-4o a year later. OpenAI has had a full year to optimize the model, build tools around it, and use it to generate better data for their next-generation foundation model.
I think it's worth double clicking here. Why did Google have significantly better search results for a long time?
1) There was a data flywheel effect, wherein Google was able to improve search results by analyzing the vast amount of user activity on its site.
2) There were real economies of scale in managing the cost of data centers and servers.
3) Their advertising business model benefited from network effects, wherein advertisers don't want to bother giving money to a search engine with a much smaller user base. This profitability funded R&D that competitors couldn't match.
There are probably more that I'm missing, but I think the primary takeaway is that Google's scale, in and of itself, led to a better product.
Can the same be said for OpenAI? I can't think of any strong economies of scale or network effects for them, but maybe I'm missing something. Put another way, how does OpenAI's product or business model get significantly better as more people use their service?
You are forgetting a bit. I worked in some of the large datacenters where both Google and Yahoo had cages.
1) Google copied the Hotmail model of strapping commodity PC components to cheap boards and building software to deal with the complexity.
2) Yahoo had a much larger cage, filled with very, very expensive and large DEC machines, with one poor guy sitting at a desk in there almost full time rebooting the systems etc... I hope he has some hearing left today.
3) Right before the .com crash, I was in a cage next to Google's, racking dozens of brand new Netra T1s, which were pretty slow and expensive... the company I was working for died in the crash.
Google grew to be profitable because they controlled costs, invested in software rather than service contracts and enterprise gear, had a simple, non-intrusive, text-based ad model, etc.
Most of what you mention above came well after that model of focusing on users and thrift allowed them to scale, and is survivorship bias. Internal incentives that directed capital expenditures toward meeting the mission, rather than protecting people's backs, were absolutely related to their survival.
Even though it was a metasearch, my personal preference was SavvySearch, until it was bought and killed or whatever that story was.
In theory, the more people use the product, the more OpenAI knows about what they are asking and what they do after the first result, and the better it can align its model to deliver better results.
A similar dynamic occurred in the early days of search engines.
I call it the experience flywheel. Humans come with problems, the AI assistant generates some ideas, the human tries them out and comes back to iterate. The model gets feedback on its prior ideas, so you could say the AI tested an idea in the real world, using a human. This happens many times over across OpenAI's 300M users. They put a trillion tokens into human brains, and as many into their logs. The influence is bidirectional: people adapt to the model, and the model adapts to us. But that is in theory.
In practice I have never heard OpenAI mention how they use chat logs to improve the model. They are either afraid to say, for privacy reasons, or want to keep it secret for technical advantage. But just think about the billions of sessions per month. A large number of them contain extensive problem solving, so the LLMs can collect that experience and use it to improve their problem solving. This makes them into a flywheel of human experience.
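To make the flywheel idea concrete, here is a minimal sketch, purely hypothetical since nothing like it has been confirmed publicly: filter chat logs for sessions where the human came back and reported that an idea worked, then turn those sessions into (prompt, completion) pairs for further training. The session format, the success-signal heuristic, and every name below are assumptions for illustration only.

    # Hypothetical "experience flywheel" pipeline; not OpenAI's actual practice.
    from dataclasses import dataclass

    @dataclass
    class Turn:
        role: str   # "user" or "assistant"
        text: str

    @dataclass
    class Session:
        turns: list

    # Crude, invented heuristic: the user later reports success.
    POSITIVE_SIGNALS = ("that worked", "it works now", "problem solved")

    def looks_successful(session: Session) -> bool:
        return any(
            turn.role == "user"
            and any(sig in turn.text.lower() for sig in POSITIVE_SIGNALS)
            for turn in session.turns
        )

    def to_training_examples(session: Session):
        """Turn a session into (prompt, completion) pairs."""
        return [
            (prev.text, nxt.text)
            for prev, nxt in zip(session.turns, session.turns[1:])
            if prev.role == "user" and nxt.role == "assistant"
        ]

    def build_dataset(sessions):
        """Keep only sessions where the human closed the feedback loop."""
        dataset = []
        for s in sessions:
            if looks_successful(s):
                dataset.extend(to_training_examples(s))
        return dataset

    if __name__ == "__main__":
        demo = Session(turns=[
            Turn("user", "My build fails with a linker error."),
            Turn("assistant", "Try adding -lm to the link flags."),
            Turn("user", "That worked, thanks!"),
        ])
        print(build_dataset([demo]))

The point of the sketch is just that the feedback signal comes for free from users returning to iterate; whether any lab actually mines logs this way is exactly the open question above.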
All these "OpenAI has no moat" arguments will only make sense when there's a material, observable (as in not imaginary) shift in their market share.
The same one that underpins the entire existence of a little company called Spotify: I'm just too lazy to cancel my subscription and move to a newer player.
Not exactly a good sign for OpenAI considering Spotify has no power to increase prices enough such that it can earn a decent profit. Spotify’s potential is capped at whatever Apple/Amazon/Alphabet let them earn.
OpenAI does not have a business model that is cash-flow positive at this point, and/or a product that gives them a significant leg up in the same moat sense that Office/Teams gives Microsoft.
High interest rates are supposed to force the remaining businesses out there to be profitable, so in theory, the startups of today should be far faster to profitability or they burn out.
Nobody expects it, but what we know for sure is that they have burnt billions of dollars. If other startups can get there spending millions, OpenAI won't ever be profitable.
And more importantly (for us), let the hiring frenzy start again :)
They have a ton of revenue and high gross margins. They burn billions because they need to keep training ever better models until the market slows and competition consolidates.
When the market matures, there will be fewer competitors so they won’t need to sustain the level of investment.
The market always consolidates when it matures. Every time. It consolidates into 2-3 big players, often a duopoly. OpenAI is trying to be one of the two or three companies left standing.
> First mover advantage acquires and keeps subscribers.
Does it? As a chat-based user (Claude Pro, ChatGPT Plus, etc.), LLMs have zero stickiness for me right now, and the APIs can hardly be called moats either.
If it's for the mass consumer market then it does matter. Ask any non-technical person around you. Chances are they know ChatGPT but can't name a single other AI model or service. Gemini, just a distant maybe. Claude, definitely not -- I'd be hard pressed to find anyone even among my technical friends who knows about Claude.