Hacker News | Analemma_'s comments

Uber sold something like $50 billion in equity and debt before it went public, and although they're profitable now, to me it doesn't seem like they have an answer to Waymo coming up fast in the rear-view mirror. I think Uber is still a scam, just one where the earlier investors fleeced the later ones who are never going to see the returns they paid for.

Uber is still >$10bn net negative over its lifetime. Claude estimates about –$19bn.

It's still quite possible that Uber doesn't make that up in the near future.


It's not. The NT kernel and some others have genuine callbacks in some of their syscalls, where you pass a userspace function pointer which the kernel invokes on completion; io_uring isn't that and Linux doesn't have anything like that.

Huh, I've actually been pleasantly surprised at how much hype there isn't every time one of these companies with a quantum side gig (Google, Microsoft) announces some new paper or finding. They're usually accompanied by levelheaded press releases announcing what was done and why, not breathless hyperbole. There are generic optimistic statements from executives, but nothing like what happens with AI. I dunno, it feels to me like they're pretty realistic about what QC might and might not accomplish in the coming decade.

For QC startups it's a different story, but of course startups have to hype themselves into the stratosphere to survive, so I don't really hold it against them (or at least, not any more than I would any other startup).


The 1970s, 1990s, and 2010s had huge waves of QC hype that amounted to nothing. The public is simply tired of it.

The 70s? Manin and Feynman proposed the idea in 1980.

Sorry, the 80's

Exactly: when was the last time you used ChatGPT-3.5? Its value depreciated to zero after, what, two-and-a-half years? (And the Nvidia chips used to train it have barely retained any value either)

The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.


I would think that it's more like a general codebase - even if after 2.5 years, 95% of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.

I rejoined a previous employer of mine, one everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it despite joining another part of the company.

OpenAI is now valued at $500bn though. I doubt the investors are too wrecked yet.

It may be like looking at the early Google and saying they are spending loads on compute and haven't even figured out how to monetize search, so the investors are doomed.


Google was founded in 1998 and IPOed in 2004. If OpenAI were feeling confident, they'd have found a way to set up a company and IPO by now, 9 years after founding. It's all mostly fictional money at this point.

It's not about confidence. OpenAI would be huge on the public markets, but since they can raise plenty of money in the private market there is no reason to deal with that hassle - yet.

If OpenAI were a public company today, I would bet almost anything that it'd be a $1+ trillion company on opening day.

I really did, a few days ago: gpt-3.5-fast is a great model for certain tasks, and cost-wise via the API. Lots of solutions being built on today's latest are built for tomorrow's legacy model; if it works, just pin the version.

> And the Nvidia chips used to train it have barely retained any value either

Oh, I'd love to get a cheap H100! Where can I find one? You'll find it costs almost as much used as it does new.


But is it a bit like a game of musical chairs?

At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.


Not necessarily? That assumes that the first "good enough" model is a defensible moat - i.e., that the first ones to get there become the sole purveyors of the Good AI.

In practice that hasn't borne out. You can download and run open-weight models now that are within spitting distance of state-of-the-art, and open-weight models are at most a few months behind the proprietary stuff.

And even within the realm of proprietary models no player can maintain a lead. Any advances are rapidly matched by the other players.

More likely at some point the AI becomes "good enough"... and every single player will also get a "good enough" AI shortly thereafter. There doesn't seem to be a scenario where any player can afford to stop setting cash on fire and start making money.


Perhaps the first thing the owners ask the first true AGI is “how do I dominate the world?” and the AGI outlines how to stop any competitor getting AGI..?

> money on fire forever just to jog in place.

Why?

I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?

I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.

We don't need better models for the vast majority of chats taking place today. E.g. kids using it for help with homework - are today's models really not good enough?


They aren't. They are obsequious. This is much worse than it seems at first glance, and you can tell it is a big deal because a lot of the effort going into training new models is aimed at mitigating it.

>I don't see why these companies can't just stop training at some point.

Because training isn't just about making brand new models with better capabilities, it's also about updating old models to stay current with new information. Even the most sophisticated present-day model with a knowledge cutoff date of 2025 would be severely crippled by 2027 and utterly useless by 2030.

Unless there is some breakthrough that lets existing models cheaply incrementally update their weights to add new information, I don't see any way around this.


Ain't never hearda rag

There is no evidence that RAG delivers equivalent performance to retraining on new data. Merely having information in the context window is very different from having it baked into the model weights. Relying solely on RAG to keep model results current would also degrade with time, as more and more information would have to be incorporated into the context window the longer it's been since the knowledge cutoff date.
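To make the context-window point concrete, here is a toy sketch of what RAG does mechanically: retrieve snippets, then stuff them into the prompt ahead of the question. Everything here (the corpus, the keyword-overlap scoring) is invented for illustration; real systems use embeddings and a vector store, but the shape is the same, and the prompt grows with the knowledge gap rather than the model learning anything.

```python
# Toy RAG sketch: keyword-overlap retrieval, no embeddings, no real LLM.
# Corpus and scoring are illustrative only, not any real library's API.

def score(query: str, doc: str) -> int:
    """Count how many query words also appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the most word overlap with the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Stuff retrieved snippets into the context window ahead of the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "io_uring provides asynchronous I/O on Linux via shared rings.",
    "Waymo operates a driverless taxi service in several US cities.",
    "RAG retrieves documents and places them in the model's context window.",
]
prompt = build_prompt("how does RAG use the context window", corpus)
print(prompt)
```

Note that nothing is "baked in": every query pays the retrieval and prompt-length cost again, which is exactly the degradation-over-time concern above.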

I honestly do not think that we should be training models to regurgitate training data anyway.

Humans do this to a limited degree, but the things that we can recount from memory are simpler than the contents of an entire paper, as an example.

There's a reason we invented writing stuff down. And I do wonder if future models should be trying to optimise for RAG with their training; train for reasoning and stringing coherent sentences together, sure, but with a focus on using that to connect hard data found in the context.

And who says models won't have massive or unbounded contexts in the future? Or that predicting a single token (or even a sub-sequence of tokens) still remains a one shot/synchronous activity?


Not to mention nobody bothered chasing Amazon-- by the time potential competitors like Walmart realized what was up, it was way too late and Amazon had a 15-year head start. OpenAI had a head start with models for a bit, but now their models are basically as good (maybe a little better, maybe a little worse) than the ones from Anthropic and Google, so they can't stay still for a second. And switching costs are minimal: you just can't have much of a moat around a product which is fundamentally a "function (prompt: String): String"; it can always be abstracted away, commoditized, and swapped out for a competitor.
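The "function (prompt: String): String" argument can be sketched in a few lines: if every vendor exposes the same shape, the application only depends on that shape, and swapping vendors is a one-line change. The provider names and stub replies below are made up for illustration; a real client would call each vendor's actual API behind the same signature.

```python
# Sketch of why "prompt in, string out" commoditizes providers.
from typing import Callable

LLM = Callable[[str], str]  # the entire product surface, per the comment above

def provider_a(prompt: str) -> str:
    """Stub standing in for one vendor's model."""
    return f"[model A] answer to: {prompt}"

def provider_b(prompt: str) -> str:
    """Stub standing in for a competing vendor's model."""
    return f"[model B] answer to: {prompt}"

def build_app(llm: LLM) -> Callable[[str], str]:
    """The application depends only on the function shape, not the vendor."""
    def answer(question: str) -> str:
        return llm(f"Answer briefly: {question}")
    return answer

app = build_app(provider_a)
print(app("what is a moat?"))
app = build_app(provider_b)  # swapping vendors is one line
print(app("what is a moat?"))
```

This is why prompt-routing and abstraction layers exist at all: the interface is so thin that any provider slots in behind it.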

This right here. AI has no moat and none of these companies has a product that isn't easily replaced by another provider.

Unless one of these companies really produces a leapfrog product or model that can't be replicated within a short timeframe I don't see how this changes.

Most of OpenAI's users are freeloaders and if they turn off the free plan they're just going to divert those users to Google.


AI has no moat - yet here I am, having paid for ChatGPT Plus since the very start.

The real test of a moat is pricing power - would you still stick with OpenAI if they increased the Plus subscription to $40/mo?

Well, web search is also function(query: String): String in a sense, and that has one heck of a moat.

Right, because just like the Amazon case, potential competitors didn't realize at the time what a threat it was, and so they gave Google a 15-year head start (Microsoft half-heartedly made "Live Search" circa 2007 and didn't really get at all serious about Bing until ~2010).

That's very different from the world where everyone immediately realized what a threat ChatGPT was and instantly began pouring billions into competitor products; if that had happened with search+adtech in 1998, I think Google would have had no moat and search would've been a commoditized "function (query: String): String" service.


It's not just the head start, it's the network effect.

Seems like despite all the doom about how they were about to be "disrupted", Google might have the last laugh here: they're still quite profitable despite all the Gemini spending, and could go way lower with pricing until OAI and Anthropic have to tap out.

Google also has the advantage of having their own hardware. They aren't reliant on buying Nvidia, and have been developing and using their TPUs for a long time. Google's been an "AI" company since forever

The objection was never the hegemony of a specific ideology: private evangelical universities like Liberty, Bob Jones, etc. have had much stricter demands on ideological conformity for much longer, and nobody raised a peep. The problem was always that it wasn't their ideology being promoted; anyone who thought otherwise and that their complaints about groupthink or whatever were genuine was a useful idiot for the regime.

Everyone is guilty here, because once one sector forms a monopoly, they have monopoly pricing power, and so their counterparty sectors have to form a monopoly as well to keep leverage in negotiations.

50 years ago, there were many more pharma companies, many more insurance companies, and many more hospitals under individual ownership. First the pharma companies consolidated, which gave them monopoly pricing power over insurers. So then the insurance companies consolidated so they could negotiate on equal footing, but then they had monopoly pricing power over the hospitals. So then hospitals consolidated so they could negotiate. And now after decades of this, we're right back where we started, except for consumers, who can't consolidate and hence get fucked.

The two solutions here are either breaking up all the monopolies at the same time-- pharma, insurance, and hospitals-- so that everyone has market competition again, or letting health care consumers consolidate so they have pricing leverage-- i.e., forming a single-payer health-care system where the government negotiates deals on behalf of a 330+-million payer pool.

It does not make sense to either blame or spare one single sector: the pharmas, insurers and hospitals are all guilty, though in a sense all of their hands were forced by their counterparties. It's a coordination problem of exactly the kind government is supposed to solve, hence why government-run health care eventually seems like the only option.


You're also neglecting the insurers and PBMs.

Health insurers are limited by law as to profit margins. So how do they make more money? Raise prices, or signal to providers that you'll pay more. Because if your incoming premiums rise, then the fixed percentage that can be captured as profit rises in absolute terms.

But wait... what if you (an insurer) build/buy a middle-man to route prescription money through? That isn't covered by those profit margin constraints. So you can just up the prices of prescriptions and siphon profits that way.

Even better, you can do entirely sketchy BS (looking at you, Aetna, but also others): "Sure, you can get your scripts filled at your local pharmacy... but only for <=30 day supplies. We'll reject any script authorization for a supply of 31+ days, like those extremely convenient 90 day refills... unless you use the mail-order pharmacy that is wholly owned by us", thus making people choose between convenience and pricing.
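The margin-cap incentive above is simple arithmetic. The 20% figure below is an assumption for illustration (ACA medical-loss-ratio rules cap the non-claims share of premiums in roughly this ballpark): when profit is a capped percentage of premiums, the only way to grow absolute profit is to grow premiums.

```python
# If profit is capped as a percentage of premiums, the profit ceiling
# only rises when premiums rise. CAP = 0.20 is an illustrative assumption.
CAP = 0.20

def max_profit(premiums: float) -> float:
    """Largest allowed profit under a percentage-of-premiums cap."""
    return CAP * premiums

# $100M in premiums caps profit around $20M; growing premiums to $120M
# lifts the ceiling to about $24M, so the incentive points toward
# higher premiums, not lower costs.
print(max_profit(100e6))
print(max_profit(120e6))
```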


A lot of corporate insurance is self funded by the company, with the insurance company being paid for administration of the plan rather than underwriting.

I suppose it is possible that the buyers of these plans agree to link the payments to the cost of the care provided, but I doubt it.


I used to work for a company that built claims benefit management systems, for both direct insurers and TPAs (third-party administrators).

The flip side of what you say is this - employers are not actuaries in the world of healthcare. So, while an employer can say "hey, whatever else we're doing, we want to give every employee a massage a week, covered 100%, no copay" and the TPA will facilitate the pricing of that, for the general spectrum of care they will say "we want basically this level of care" and really just choose a plan already provided by the insurer, because all the actuarial effort has been done and the employer has less risk of getting slammed with a multi-million-dollar bill because of unexpected incidents.


I don't think govt-run healthcare is the only option... but it could serve as baseline competition if Federal Employees, VA Medical, and Medicare/Medicaid were all serviced by an NPO that is govt funded to provide for those federal groups AND allowed anyone to buy into a policy as an individual or employer. As an NPO it would provide a baseline for competition and a minimal cost floor with greater negotiating power, which has been artificially limited by the current implementations.

On the Pharma and Devices side, there should be hard FDA requirements for dual sourcing (completely separate ownership structures) and 50% domestic production (for security) as a requirement to even offer medications/devices requiring a prescription in the US.

It could still allow for private competition for better servicing and support without federalizing everything.


I feel like you explained how we got here and our options to fix it perfectly. As you point out, we have monopolies (or close to it) at every single step. Whatever bandaid people and politicians can come up with will quickly be neutralized by these conglomerates; at this point, any half measure is basically useless or has severe tradeoffs.

>As you point out we have monopolies (or close to) at every single step.

This is happening to a huge number of industries in the US, not just healthcare.


Fair enough, but

- Healthcare is almost 20% of the economy

- Demand for Healthcare is largely inelastic


> So then hospitals consolidated so they could negotiate. And now after decades of this, we're right back we're started, except for consumers, who can't consolidate and hence get fucked.

Consumer consolidation is called voting. It's too bad most consumers have voted in politicians who don't represent their best interests.


17% of US GDP is healthcare, so that's probably about 20% of the country that will scream bloody murder if you try to touch it in any way that makes it cost the other 80% less.

The tragedy is that Copilot actually was a brilliant name, back in the beginning when it was just a coding assistant. It was both evocative and descriptive: it's meant to help, but you are still in control. It was probably the best product name Microsoft had come up with in years, so naturally there was no choice but to fuck it up.

Yeah, I'm always annoyed at how blasé people are about these transition periods, where they seem to think the ends justify any means, and any chaos which happens as a result of the means.

You see this all the time when some new technology, especially an information disintermediation technology, gets compared to the printing press. "The printing press broke the monopoly on knowledge and brought Europe out of the Dark Ages!" Yeah, but first it killed millions of people in a century of warfare. Do the people in an equivalent position now get a vote, or are they acceptable casualties for the glorious hypothetical future?


Chaos can be avoided if we start talks to facilitate this "AI age". But are we talking seriously about UBI? Nope (we can't even fund healthcare). What about training? Training in-job has been on the decline for decades, no one's helping with transitions. Are people's lives at least getting better? The sentiment in surveys say no thus far.

As usual it seems like there's only one box left when a new technology tries to strongarm its way into society. The invention of the personal computer avoided a lot of chaos by doing all of the above.


> Yeah, but first it killed millions of people in a century of warfare. Do the people in an equivalent position now get a vote, or are they acceptable casualties for the glorious hypothetical future?

The answer seems to be we get no vote

I'm not happy about it


Can you present a proposal for how we should adopt technology with less chaos?

Are you accounting for the lives saved through better technology?


Yes. We compensate disrupted industries and make long term plans to adjust around the technologies. The advent of the PC exposed it early to kids, it advertised its usefulness to the public, and it offered digital trainings to adjust to a new workflow. In the worst case, we help any redundant roles get a new job in the industry or offer other benefits like early pensions.

As we see here, this tech is only taking and not giving much back.


You only want this because it is tech that is getting disrupted - the compensation would be provided to you now

Not really. I don't think I "need" it myself. My sector will be much more resistant to the advent of AI compared to others anyway, so I'm not worried about being displaced short term. My industry didn't need AI as an excuse to lay off a bunch of people every year.

I'm just demonstrating that advancing technology and taking care of a displaced job sector are not mutually exclusive. And the lack of any such conversation means we'll just go down the same bloody path as last time in history.

