> I’m not sure we went through the same dot-com era, but in my experience, it was extremely expensive to spin up anything. You’d have to run your own servers, buy your own T1 lines, develop with rudimentary CGI… it was a very expensive mess - just like AI today
To build your own competing LLM today you need hundreds of millions of dollars; the "very expensive" here is on a whole different level. You could afford the things you described on a software engineering salary. It would be a lot of money for that engineer, but at least he could do it. No one but a billionaire could fund a new competing LLM today.
I think the foundation models are a commodity, anyway. The bulk of the economic value, as usual, will be realized at the application layer. Building apps that use LLMs, including fine-tuning them for particular purposes, is well within reach even of indie/solo devs.
That’s why Sam Altman makes so much noise about “safety” - OpenAI would really like a government-backed monopoly position so they can charge higher rents and capture more of that value for themselves. Fortunately, I think that llama has already left the barn.
I think OpenAI/Anthropic/etc. are banking on foundation models becoming the "datacenters" or AWS-equivalents of AI: there will be PaaSes (e.g. Replicate), and most businesses will just pay the "rent".
Only if you're creating a foundation model. The equivalent would be competing with a well-funded Amazon back in 1999. You can compete in building LLM-powered products with much, much less money - less than a regular web app cost in '99.