
What happens is they go out of business: "these firms spent five hundred and sixty billion dollars on A.I.-related capital expenditures in the past eighteen months, while their A.I. revenues were only about thirty-five billion."

DeepSeek (and the like) will prevent the kind of price increases necessary for them to pay back hundreds of billions of dollars already spent, much less pay for more. If they don't find a way to make LLMs do significantly more than they do thus far, and a market willing to pay hundreds of billions of dollars for them to do it, and some kind of "moat" to prevent DeepSeek and the like from undercutting them, they will collapse under the weight of their own expenses.



DeepSeek is also undercutting itself. No one is making a profit here; everyone is trying to gobble up market share. Even if you have the best model and don't care to make a dime, inference is very expensive.


I'd be surprised if Google weren't closer to profitability than basically anyone else, as they have their own hardware and have been running these kinds of applications for much longer than anyone else.


Google is still investing heavily in data centres. Presumably, without AI they could take their foot off that throttle.


Google are well set up to monetize compared to the others. They just need people to see their ads.


They are also the ones with the most to lose to LLMs.


I'm not familiar with the actual strategy, but one strategy of something like DeepSeek or any open model could be, roughly, to prevent moat creation by other companies. E.g. one reason Google maintained dominance for so long is that they simply paid off every smart person and company that might build competitors.


Matt Levine of Bloomberg said he believes most big hedge funds have a proprietary LLM (and various other machine learning models), which they use for their own purposes anyway, and therefore it was relatively straightforward for them to get into the business.


> inference is very expensive

I am surprised that this claim keeps getting made, given the observed prices.

Even if one thinks the losses of the big model providers come from selling below operating cost (rather than below operating cost plus training costs plus the cost of growth), even big open-weights models that need beefy machines look like they eventually* amortise the hardware to the point where electricity is the dominant cost. So when (and *only* when) the quality is good enough, inference is cheaper than the food needed to have a human work for peanuts, and I mean literally peanuts, not metaphorical peanuts: the calories and protein content of enough bags of peanuts to not die.

* this would not happen if computers were still following the improvement trends of the 90s, because then we'd be replacing them every few years; a £10k machine that you replace every 3 years costs you £9.13/day even if it does nothing.

https://www.tesco.com/groceries/en-GB/products/300283810 -> £0.59 per bag * (2,500 kcal per day / 645 kcal per bag) = £2.29/day; then combine your pick of model, home server, electricity costs etc. with your estimate of how many useful tokens a human produces in the 8,760 hours of a calendar year, given your assumptions about hours per working week and days of holiday or sick leave.
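Spelled out as a quick Python sanity check (nothing here beyond the numbers already quoted above; the prices and calorie figures are this comment's assumptions):

    # daily cost floors: the hardware (from the footnote) and the human (on peanuts)
    machine_price_gbp, machine_life_days = 10_000, 3 * 365
    print(machine_price_gbp / machine_life_days)        # ~9.13 GBP/day if replaced every 3 years

    price_per_bag, kcal_per_bag, kcal_per_day = 0.59, 645, 2500
    print(price_per_bag * kcal_per_day / kcal_per_bag)  # ~2.29 GBP/day of peanuts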

I know that even order-of 100k useful tokens per day is implausible for any human, because that would be like writing a novel a day, every day; and this article (https://aichatonline.org/blog-lets-run-openai-gptoss-officia...) claims a Mac Studio can output 65.9 tokens/second = 65.9 * 3600 * 24 = 5,693,760/day, or ~2e9/year, compared with a deliberate over-estimate of human output (100k/day * 5 days a week * 47 weeks a year = 2.35e7/year).
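The same throughput comparison as a rough sketch (the 65.9 tokens/second figure is that article's claim for a Mac Studio, not something I've measured):

    # tokens per year: Mac Studio at the claimed rate vs a deliberately generous human estimate
    mac_tokens_per_year = 65.9 * 3600 * 24 * 365        # ~2.08e9
    human_tokens_per_year = 100_000 * 5 * 47            # 2.35e7, already implausibly high
    print(mac_tokens_per_year / human_tokens_per_year)  # ~88x, so "at least 85x" is conservative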

The top-end Mac Studio has a maximum power draw of 270 W: https://support.apple.com/en-us/102027

270 W for *at least (2e9/year / 2.35e7/year) ≈ 85 times* the quantity of output that a human can produce on 100 W (and this only matters when the quality is sufficient, which, as we all know, AI often isn't yet) works out to a bit over 31 times the raw energy efficiency. And electricity is much cheaper than calories: cheaper food than peanuts could get the cost of the human down to perhaps £1/day, but even £1/day is equivalent to electricity costing £1 / (24 hours * 100 W) = £0.4166…/kWh.
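Or, sketching that last step with the same assumptions (the 85x ratio from above and a hypothetical £1/day cheaper-than-peanuts diet):

    # raw energy efficiency and the breakeven electricity price
    machine_watts, human_watts, output_ratio = 270, 100, 85
    print(output_ratio * human_watts / machine_watts)       # ~31x more output per joule

    human_cost_per_day_gbp = 1.0                            # assumed cheap diet
    human_equivalent_kwh_per_day = 24 * human_watts / 1000  # 2.4 kWh
    print(human_cost_per_day_gbp / human_equivalent_kwh_per_day)  # ~0.417 GBP/kWh breakeven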


Running a local model is not an apples-to-apples comparison. Yes, if you run a small model 24/7, don't care about output latency, and utilization is completely static with no bursts, then it can look cheap. But most people want output now, not in 10 hours. And they want it from the best models. And they want large context windows. And when you combine that with serving millions of users, it gets complicated and expensive.


When you combine that with serving millions of users, it also gets amortised over several million users.

> But most people want output now, not in 10 hours.

At 65 t/s, that's about 2.3 million tokens of output.


Yes, but usage is not uniform even when you have millions of users. More users smooth the relative usage curve, but the absolute peaks and troughs become larger the more users you have. At 3 am, usage in the US drops to effectively zero. Maybe you can use that compute for Asian customers, but then you compete with local compute that has far better latency.

Then you have seasonal peaks/troughs, such as the school year vs summer.

When you want 4 9s of uptime and good latency, you either have to overprovision hardware and eat idling costs, or rent compute and pay overhead. Both cost a lot.


DeepSeek doesn’t need to make a profit to be successful.


I can think of several reasons this could be so, but it would be good to hear your reasoning behind this claim.


State funded innit


> If they don't find a way to make LLMs do significantly more than they do thus far...

They only need two things, really: A large user base and a way to include advertising in the responses. The market willing to pay hundreds of billions of dollars will soon follow.

The businesses are currently in the user base building stage. Hemorrhaging money to get them is simply the cost of doing business. Once they feel that is stable, adding advertising is relatively easy.

> and some kind of "moat" to prevent DeepSeek and the like from undercutting them

Once users are accustomed to using a service, you have to do some pretty horrendous things to get them to leave. "Give me your best hamburger recipe" -> "Sure, here is my best burger recipe [...] However, if you don't feel like cooking tonight, give the Big Mac a try!" wouldn't be enough to cause any meaningful loss of users.


That's nothing new. The question is: will those users be willing and able to pay ten times as much or more for the same services they currently get at a significant discount?


Will they pay ten times more for a Big Mac? Probably not, but why would they need to? Hundreds of billions is Facebook's revenue. The businesses in this space get there if they can capture those customers alone, never mind all the other places where advertising happens. The market exists, is sufficiently large, and is willing to spend. All these "AI" businesses need to do is show that users are spending their time on their services instead, which is exactly what they are working on right now.

The question is really only: Will users actually want to continue to use these services once the novelty wears off? The assumption is that they are useful enough to become an integral part of our lives, but time will tell...


The world is not a zero-sum game, but I don't think it's likely that companies will double their advertising spend. So either Google, Meta, etc. lose the AI game and most of the advertising revenue shifts to different companies, or Google, Meta, etc. win and get to keep their advertising income.

I don't see how suddenly hundreds of billions of additional ad revenue will appear.

I think some of the AI companies truly wanted to make a fortune replacing all white-collar workers through model exclusivity, but open models (initially Llama and then a sequence of really good Chinese models) threw a wrench in the works. There is not as much to be made anymore if everyone can host their own 'workers' at near cost price.


> So either Google, Meta, etc. will lose the AI game

They might lose entirely (search, social media, etc.) if users are more likely to direct their eyeballs to "AI" services. And that isn't an impossible scenario. What do you need traditional web search, stupid internet comments, etc. for when an LLM will generate all that and more immediately at your behest? The newspaper companies lost when other forms of media came along. "AI" could easily push things the same way.

But the businesses are still trying to prove that. It is still early days. Only time will tell if they will actually get the aforementioned user base.


> They might lose entirely (search, social media, etc.) if users are more likely to direct their eyeballs to "AI" services.

Sorry that I wasn't clear enough. That was what I was trying to imply: in that case, the ad purchases that currently go to Google etc. will go to a new AI company. This is why Google is investing so heavily in LLMs; they know that LLMs can kill search and website visits and thus cause them to lose their ad income.

But either way, it's questionable whether it will increase overall ad spending. So either the AI companies collapse and a lot of VC money is burned, or Google et al. collapse and a lot of capital is burned (pension funds and whatnot).


This is why Google has android and chrome, and why Facebook bought WhatsApp and Instagram. It isn’t so much about spending billions on growth, it is about hedging their bets and protecting their monopolistic cash cows.


Why would ad spending need to increase? Google, Meta, et al. collapsing and an "AI" business taking their place is fine. The universe doesn't care.


> The question is really only: Will users actually want to continue to use these services once the novelty wears off? The assumption is that they are useful enough to become an integral part of our lives, but time will tell...

But LLMs do have some niche, stable applications already. For example, they have replaced Stack Overflow to a large extent, because you can get the answer you need faster, and it's often better adapted to your situation. So you could argue the novelty of SO wore off a long time ago, but people were still using it when LLMs appeared. ChatGPT is no longer in vogue, people are ashamed to use it (and shamed for using it), but it still has its uses, helping people in their jobs and lives in general.


Stack Overflow wasn't bringing in hundreds of billions in revenue and never could have hoped to. A niche won't cut it.

I mean, should it all come crashing down, once the dust settles there is no doubt room for a niche service to rise from the ashes. Many have predicted exactly that: AI will have its own "dark fibre" moment. But the current crop of providers seeking "world domination" won't survive if they can only carve out a SO-style niche.


> Stack Overflow wasn't bringing in hundreds of billions in revenue

Neither do LLMs. Instead, they cost hundreds of billions. Whether those investments can ever be recovered is still an unanswered question.


> Neither do LLMs.

Not yet. Of course, if you had read the thread you'd know that LLM businesses are trying to see if they can capture Facebook-scale/beyond user bases in order to serve ads to them at Facebook-scale/beyond.

> Whether those investments can ever be recovered is still an unanswered question.

Yes, that's the question we are discussing. Welcome to many comments ago.


I don’t see how advertising is going to work with agents, especially if they’re being used by companies to replace or supplement jobs. Am I going to have comments in my code with ads for McDonald's? Will the AI support agent start trying to sell me a VPN?

I don’t think any of these AI companies can justify their expenses without meaningfully automating a significant amount of white collar work, which is yet to happen.


> I don’t think any of these AI companies can justify their expenses without meaningfully automating a significant amount of white collar work

The businesses in question are building chatbots, not trying to automate jobs. Their primary goal at this stage is to get legions of people hooked on chatting with their service on a regular basis. Once (if) they have succeeded with that, then they can move on to step two.

> Am I going to have comments in my code with ads for McDonalds?

"Give me a function which stores a supplied string to a file in $HOME." -> "I have updated your code with said function. It [...] Have you considered storing the file on Amazon S3 for additional robustness and reliability?"



