AI models that cost $1B to train are underway, $100B models coming (tomshardware.com)
34 points by pulse7 7 months ago | 58 comments



No company can afford to spend $100B on something that will be obsolete a year later; you just can't recover the investment from sales that quickly.

$100m is manageable: if you've got 100m paying subscribers or companies using your API for a year, you can recoup the costs, but there aren't many companies with 100m users to monetise. $1B feels like it's pushing it; only a few companies in the world can monetise that, and realistically it's about lasting through the next round to keep competing, not about making the money back.

$100B though, that's a whole different game again. That's like asking for the biggest private investment ever made, for capex that depreciates at $50B a year. You'd have to be stupid to do it. The public markets wouldn't take it.

Investing that much in hardware that depreciates over 5+ years and is theoretically still usable at the end, maybe, but even then the biggest companies in the world are still spending an order of magnitude less per year, so the numbers end up working out very differently. Plus that's companies with 1B users ready to monetise.


The business model here is the same as semiconductor fabrication/design. 2022 kick-started the foundation model race, and teams were readily able to raise $5-25M to chase foundation models. In early 2024, several of those teams began to run out of money as it became clear that competitive modeling efforts were in the $1-10B range.

If a $100 billion training run produces the highest-quality model in the land across all metrics and capabilities, that will be the model that's used; at most there would be 1-2 other firms willing to spend $100 billion to chase the market.


This. The seemingly never-ending run for foundation models only works as long as companies can afford it. If one of them spends $100B+, it will be a long time before compute catches up to the point where a competitor could reproduce it on a reasonable budget. This is essentially the race for who's going to own AGI, and it shouldn't be surprising that people are willing to spend these amounts.


Given how quickly AI is progressing from the software side, and how poorly AI scales from just throwing raw compute time at a model, I don't see a company holding onto the lead for very long with that strategy.

If I can come out with a model a year later, and it can provide 95% of the performance while costing 10% as much to run, I think I would end up stealing a lot of customers before they had a chance to break even.

Take Llama3-8B for example: this is an 8 billion parameter model from 2024 that performs about as well as the original ChatGPT, a 175 billion parameter model from 2022. It only took 2 years before a model that can run on a desktop could compete with a model that required a data center.


LLMs actually scale extremely well just by throwing compute at them. That's the whole reason they took off. Training a bigger model, training it longer, or increasing the dataset all work more or less equally well. Now that we've pretty much saturated the dataset component (at least for human-written text), everyone throws their compute at bigger models or more epochs.


It's totally reasonable to take both bets. It's unclear that the company betting $100B wouldn't also be the company making the $1M bet.

If you're MSFT, you don't care who wins as long as you have cost-competitive rights to embed the AI in all of your products earlier than others.


> Investing that much in hardware that depreciates over 5+ years and is theoretically still usable at the end, maybe

Isn’t that exactly what’s happening?

A $300k 8x H100 pod with 5kW power supply burns at most $6k per year in power at $0.15/kWh. The majority of the money is going to capital equipment for the first time in the software industry in decades.

These top-of-the-line chips last much longer in the depreciation game. The A100 was released in 2020, but cloud providers still have trouble meeting demand and charge a premium for them.


How are you getting 5kW?

NVIDIA claim 10.2kW for a DGX H100 pod. https://docs.nvidia.com/dgx/dgxh100-user-guide/introduction-....

Your point still stands, though: power is a fraction of the cost.

The bigger issue is power + cooling and how many units are needed to train the better models.


Serves me right for doing back-of-napkin math instead of just looking it up. The SXM4 A100s I use have a TDP of 400W, and I hadn't realized the higher-power H100s go up to 800W.
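
Redoing the napkin math with the corrected draw (a rough sketch only, assuming continuous operation at the quoted $0.15/kWh):

    # Annual electricity cost for a constant power draw, in USD.
    def annual_power_cost(kw, usd_per_kwh=0.15):
        hours_per_year = 24 * 365
        return kw * hours_per_year * usd_per_kwh

    print(annual_power_cost(5.0))   # ~$6,570  -- the original 5 kW estimate
    print(annual_power_cost(10.2))  # ~$13,400 -- NVIDIA's DGX H100 figure

Either way, power stays a small fraction of the ~$300k hardware cost.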


That’s true for AI, but it is not the right way to think about AGI.

For AGI, the bet is that someone will build an AI capable enough to automate AI development. Once we get there it will pay for itself. The question is what the cost-speed tradeoff to get there looks like.


Pay for itself? Who will pay for this? I don't think you realize how much $100B is. To put it in perspective, a cutting-edge fab costs almost $10B (TSMC), and only three companies can barely afford that.


"Pay for itself" is probably a bit misleading. The development of transformative AGI, under the control of a single company, has the potential to render the concept of "getting your investment back" quaint and obsolete for that company. They'd never say it, and maybe some of them don't believe it, but they're effectively competing to be in control of the future of civilization. (Meanwhile, the amount of CO2 they're burning in their bid is actively reducing the chances of such a future existing...)


Well, the total world economy is in the $100T region, so I'd expect the argument is that you only need to replace a small fraction of that with AI to generate an ROI.
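
Back-of-envelope with those round numbers (a rough illustration only, ignoring margins and amortization):

    # What share of one year's world output would cover a $100B training run?
    WORLD_GDP = 100e12      # ~$100T world economy, the figure above
    TRAINING_COST = 100e9   # the $100B run in question

    print(f"{TRAINING_COST / WORLD_GDP:.1%}")  # 0.1%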


I agree with you. Large tech corporations are making big bets to reach AGI first. As an example, if you are the CEO of Google, do you want Microsoft or Meta to achieve AGI first?

This seems less like doing business as usual and more like betting big to be part of something really transformative.


For $100B they would probably want a realistic description of how they get to AGI. That's a bit too much money for the hand-wavy answers we have right now about the path between LLMs and AGI (which doesn't even have a great definition).


it's not a "once", it's an "if"

it may never happen, especially with this current approach

at which point you've burnt hundreds of billions of dollars, emitted millions of tonnes of CO2 and all you've got out of it is a marginally better array of doubles


Yeah, I think there's too much faith-based investment going on right now. It all smells like the argument that bitcoin would be the future of money, so the price doesn't matter: just buy and hold, and once it takes over you'd be part of the owning class.


automating “development” does not necessarily lead to AGI. An LLM could make minor efficiency improvements all day long and still not change the fundamental approach.


I agree, and I do not think any company would make that investment directly. Nvidia selling to Microsoft, which rents to OpenAI: I'm sure you could make that add up to $100B on paper. In the long run the economics are likely much more complicated and consist of "agreements worth $x".


Even if they did, they would be the largest target for hackers or corporate espionage. I would find it hard to believe that they would get any sort of good return on this before it was all over the internet, or at least in the hands of several competitors.


TSMC spends ~$30B every 2 years


X to Doubt.

This is the Anthropic CEO talking up his company's capital needs to the Norwegian Sovereign Wealth Fund (Norges Bank Investment Management) and trying to justify some absurd $100bn valuation.


Yes. The release of GPT-5 will make or break the AI movement. If the capabilities are not another quantum leap, it will become clear the scaling laws are not everything, and these investments will be unsustainable by any economic metric you use.


> If the capabilities are not another quantum leap

While I don't disagree 100%, my question to you is:

Who or what says this is the case, and why? GPT-3.5 was released and made popular "to the masses" not too long ago. Where do you feel the pressure for a quick quantum leap is coming from?


To steal from another comment in the thread:

> That’s true for AI, but it is not the right way to think about AGI. For AGI, the bet is that someone will build an AI capable enough to automate AI development. Once we get there it will pay for itself. The question is what the cost-speed tradeoff to get there looks like.

I don't think people are treating AI as a typical investment. They are valuing it as a potential replacement for like 95% of human workers. Once the plateau becomes obvious to even the biggest fanatics, people are going to realize all this money has been used to create really good chatbots that just make shit up 25% of the time. The whole sales pitch for the last 1-2 years has been that AGI is "just around the corner" and we can get there via the magic of exponential growth.


"AGI is just around the corner" but we haven't be able to build "fully automatic car driving" for a decade... So first I would like to see car drivers replaced by non-general AI, then I will start believing in AGI...


Automated driving is more a political problem than a technical one. We could make it happen today (and I'm not saying that would be a good thing or a bad one), but it would require leadership we simply don't have.

See also standardization of EV battery form factors -- a problem that, had it been tackled by government several years ago, would have avoided the chicken-and-egg problem that is impeding adoption now.


"We could make it happen today" -> As of today, human drivers make less mistakes than automated driving... and Tesla is promising it for a decade now, but has come to level 2 only (where drivers must hold the steering wheel) and Mercedes has achieved level 3 for highways only... So we are nowhere close to "level 5 on all roads with substantially less mistakes than human drivers"... and that would be just a single part of AGI...


The mistakes made by automated driving usually involve interaction with either human drivers or impaired pedestrians, if you neglect stupid shit like Teslas losing fights with fire trucks because "Cameras Are All You Need." Nonstandard or missing signs and lane markings are close to the top of the list as well.

Doing automated driving right means restructuring our entire surface transportation system to support it. I doubt it can happen without a serious commitment from government... which is in no condition to commit to anything.


But human drivers don't need such restructuring. They can also handle impaired pedestrians and nonstandard or missing signs/lane markings. And if AGI is going to be "better than most humans at most things", no restructuring should be needed...


For values of "They can handle it" equal to about 30,000 deaths a year in the US alone, yeah, I guess they can handle it.


Waymo's cars are fully automatic self-driving cars, just limited in where they can go.

It's entirely possible we could see an AGI develop within a specific medium, e.g. text-only to start.

But also, don't you think those two problems are convergent? If we make an algorithm smart enough to drive a car anywhere more safely than a human would, it seems likely to me that that will either be AGI or right on the cusp of it.


We can't even define what AGI is, or what intelligence is, and yet we are throwing massive amounts of compute and data blindly at this problem... And why? Because chatbots impressed everyone in 2023? The whole thing smells.


The problem isn't that we can't define intelligence; the problem (IMHO) is that people aren't comfortable with the implications of any given definition.

I like to play a game whenever I meet a new group of programmers/lawyers/board gamers/pedants. I ask them to define a sandwich, then I point out all of the edge cases until we get to the point where I can either say their definition is bad because it e.g. doesn't include subs (short for submarine sandwiches), or I can say that poptarts are sandwiches.

The problem is that reality is blurry and clear definitions are fundamentally incapable of capturing the nuance there.


> The whole sales pitch for the last 1-2 years has been that AGI is "just around the corner" and we can get there via the magic of exponential growth.

I am pretty sure that's less than half the sales pitch. Current LLMs are economically valuable; they are useful for all kinds of knowledge work, despite their flaws.


Well, I guess the question I have is, what exactly does he mean by the "cost to train"? As in, just the cost of the electricity used to train that one model? That seems really excessive.

Or is it the total overall cost of buying TPUs / GPUs, developing infrastructure, constructing data centers, putting together quality data sets, doing R&D, paying salaries, etc. as well as training the model itself? I could see that overall investment into AI scaling into the tens of billions over the next few years.


If you had an extra $100 billion, some people could think of something better to spend it on; some couldn't.


Metaverse! Oh wait, that's too old and forgotten already.


web3 metaverse powered by genAI and NFTs?


And we put that in the cloud


I could see the US subsidizing most of that $100B, just because they can, and more importantly, it would be the kind of tactical advantage that's needed to make sure US tech companies stay relevant in a world where there's a growing desire to break away from them in favor of homegrown solutions.


elmo-arms-up-world-burning.gif



Somewhere, a Tamarian is posting shaka-when-the-walls-fell.jpg


It took an entire thread of nostradamus-tier bullshit before we finally got the first serious response. Bravo.


BigTech wants all your sovereign money


What will the benefit be of more expensive models? More facts, because they've consumed more information? More ability to, say, adjust writing style? Or is this all necessary just to filter out the garbage recycled AI content they're now consuming?


Right around the time GPT-4 was first announced, OpenAI published a paper that basically said that training can "just keep going" with no obvious end in sight. Recently, Meta tried to train a model 75x as long as is naively optimal, and it just kept getting better.

Better in this case means some combination of "fewer errors for the same size" and/or "bigger and smarter". Fundamentally, they're still the same thing, just more and better.

Unfortunately, the scaling is (roughly) logarithmic. So for every 10x increase in scale you get a +1 better model. Scaling up 1,000x gets you just a +3 improvement, and so on.
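
A toy illustration of that intuition (purely hypothetical numbers, assuming capability tracks log10 of compute):

    import math

    def relative_improvement(scale_factor):
        # Hypothetical "+N better" score for a given multiple of baseline compute.
        return math.log10(scale_factor)

    for factor in (10, 1_000, 1_000_000):
        print(f"{factor:>9,}x compute -> +{relative_improvement(factor):.0f}")
    # 10x -> +1, 1,000x -> +3, 1,000,000x -> +6: each extra "+1" costs 10x more compute.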


And what, exactly, is the ROI on "better"? Who cares if the model is better; is it $100B better? Who is going to buy these services, and what consumer will pay for it?


Meta has something like half a million modern GPUs that they’ve purchased outright. They can afford to keep training models “forever”.

This is useful to eke out every last drop of quality per gigabyte of model file size. It also keeps the models up to date with current events.

Obviously this scaling becomes too inefficient at infinite scale, not just because of training costs that'll never be recouped but also because of increasing inference costs with larger models.

Some fundamentally new architectures will need to be developed to take much better advantage of increased compute power.

I suspect the major players are investing in hardware now in the hope that some revolutionary new algorithm is invented soon and they’ll be ready for it.

It’s… a bit of a gamble!


> What will the benefit be of more expensive models?

Bleed investors dry before the next fad pops up


> What will the benefit be of more expensive models?

A G650 to fly to your 85m yacht in the Med doesn't come cheap.


All this burn, and recruiters and bots still match on keywords in CVs.


If only this had come before crypto. We could have had a system that underwrites international finance and pays for training on the cheap.

I wonder which timelines had this scenario…


That sounds like a great idea for our next bubble


Wow, this CEO entitlement and these wealth-pissing contests are laughable.


Altman and Amodei are talking their book, but in doing so they seem like shady snake-oil salesmen.

they would be better off not bullshitting their investors.


the people not bullshitting their investors have no investors

investors with huge piles of cash should buy themselves a brain and stop funding bullshitters



