
This rings true to my ears:

> There is no technical moat in this field, and so OpenAI is the epicenter of an investment bubble. Thus, effectively, OpenAI is to this decade’s generative-AI revolution what Netscape was to the 1990s’ internet revolution. The revolution is real, but it’s ultimately going to be a commodity technology layer, not the foundation of a defensible proprietary moat. In 1995 investors mistakenly thought investing in Netscape was a way to bet on the future of the open internet and the World Wide Web in particular.

OpenAI has a short-ish window of opportunity to figure out how to build a moat.

"Trying to spend more" is not a moat, because the largest US and Chinese tech companies can always outspend OpenAI.

The clock is ticking.



There is no technical moat, but that doesn't mean there isn't a moat.

Amazon might be a good analogy here. I'm old enough to remember when Amazon absorbed billions of VC money, making losses year over year. Every day there was some new article about how insane it was.

There's no technical moat around online sales. And lots of companies sell online. But Amazon is still the biggest (by a long long way) (at least in the US). Their "moat" is in public minds here.

Google is a similar story. As is Facebook. Yes, the details change, but the basic path is well trodden. Uber? Well, the jury's still out there.

Will OpenAI be the next Amazon? Will it be the next IBM? We don't know, but people are pouring billions in to find out.


A couple of other comments touch on the point I want to make, but I feel they don't nail it hard enough: if today you told me you had a website that was 100% technologically identical to Amazon, but you were willing to make slightly less money, you'd still have no products. As a user I'd have no reason to go there, but without users there is no reason to sell there, so you have a circular bootstrapping problem. That is a moat.

This is very different from OpenAI: if you show me a product that works just as well as ChatGPT but costs less--or which costs the same but works a bit better--I would use it immediately. Hell: people on this website routinely talk about using multiple such services and debate which one is better for various purposes. They kind of want to try to make a moat out of their tools feature, but that is off the path of how most users use the product, and so isn't a useful defense yet.


Amazon, as in AWS, is possibly a better analogy. What AWS sells is mostly commodity. Their moat is the cost of integration: once a company has developed all its systems around AWS, they are locked in, just because of the integration cost of switching. OpenAI only sells a single component, so the lock-in is weaker. But once a company depends on a tested business system that relies on OpenAI, I can see them thinking twice about switching. Right now, I doubt we are near that stage, so my vote is on no moat yet.


Many cloud-native companies have realized this and are actually moving off of public clouds.

It’s slow and painful, but the expense is driving some customers away.


I think any decent company would be using LLMs through an interface, so they can swap out providers, same as any API.
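For what it's worth, a minimal sketch of what such an interface could look like (all names here are illustrative, not any real vendor SDK):

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Completion:
    text: str
    model: str


class LLMProvider(Protocol):
    """Anything that can turn a prompt into a completion."""
    def complete(self, prompt: str) -> Completion: ...


class EchoProvider:
    """Stand-in provider for local testing; a real one would wrap a vendor API."""
    def __init__(self, model: str = "echo-1"):
        self.model = model

    def complete(self, prompt: str) -> Completion:
        # Toy behavior: just uppercase the prompt.
        return Completion(text=prompt.upper(), model=self.model)


def summarize(provider: LLMProvider, text: str) -> str:
    # Application code depends only on the interface, so swapping
    # vendors is a change at the call site, not throughout the codebase.
    return provider.complete(f"Summarize: {text}").text
```

Swapping providers then means constructing a different `LLMProvider` and passing it in; nothing downstream changes.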


That's fair, but different LLMs behave differently, so you would have to redo your testing from scratch if you were to swap the model. I think that would be the primary problem.


Testing for LLMs is an evolving practice but you need to have tests even if you stick with one provider, otherwise you won't be able to swap out models safely within that provider either.
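As a rough illustration of that point, model-agnostic behavioral tests might look something like this (the cases and the `ask` callable are hypothetical):

```python
# Each case pairs a prompt with a predicate on the answer. The same
# suite runs against any provider, so swapping models (within or
# across vendors) is validated by re-running it.
EVAL_CASES = [
    ("What is 2 + 2? Answer with just the number.", lambda a: "4" in a),
    ("Reply with the single word YES.", lambda a: "yes" in a.lower()),
]


def run_evals(ask):
    """`ask` is any callable prompt -> answer, regardless of vendor."""
    return [prompt for prompt, ok in EVAL_CASES if not ok(ask(prompt))]


# A fake model that answers well enough to pass this toy suite:
def fake_ask(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "YES"


failures = run_evals(fake_ask)
assert failures == []  # empty list: the swapped-in "model" passed
```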


Amazon (the retail business not AWS) does seem to have a pretty big moat.

For starters they have delivery down. In major cities you can get stuff delivered in hours. That is crazy and hard to replicate.

They have a huge inventory/marketplace. Basically any product is available. That is very difficult to replicate.


They have their own fleet of planes for shipping.

Amazon’s vertical integration is their moat.


> if today you told me you had a website that was 100% technologically identical to Amazon, but you were willing to make slightly less money, you'd still have no products

It is widely understood that you really can't compete with "as good as". People won't leave Google, Facebook, etc. if you can only provide a service as good as, because the effort required to move would not be worth it.

> if you show me a product that works just as well as ChatGPT but costs less--or which costs the same but works a bit better--I would use it

This is why I believe LLMs will become a commodity product. The friction to leave one service is nowhere near as great as leaving a social network. LLMs in their current state are very much interchangeable. OpenAI will need a technological breakthrough in reliability and/or cost to get to the point where leaving OpenAI makes no sense.


> It is widely understood that you really can't compete with "as good as".

Sure you can, that's why there are hundreds if not thousands of brands of gas station. The companies you list are unusual exceptions, not the way things usually work.


I'm not sure a gas station analogy really works. I use a gas station out of convenience (i.e., it's on my route) and will only go out of my way for a significant difference in price. This means I go to the same gas stations even when there are others that are “as good as” around, just because it's convenient for me. Similarly, since I already set up an account with Amazon and currently use Amazon, I won't move to an “as good as” competitor just because it's an inconvenience to set up a new account, add billing info, add my address, etc., for no real improvement.


I feel like you (and others) are saying what I'm saying.

I said: > There is no technical moat, but that doesn't mean there isn't a moat.

Meaning that just because the moat is not technical doesn't mean it doesn't exist.

Clearly Amazon, Google, Facebook etc have moats, but they are not "better software". They found other things to act as the moat (distribution, branding, network effects).

OpenAI will need to find a different moat than just software. And I agree with all the people in this part of the thread driving that point home.


Moats don't have to be software. Amazon's physical distribution chain is absolutely a moat - trying to replicate their efficiency at marshalling physical items from A to B is a daunting problem for new entrants in the online retail game.


They have been monitoring their GPT Store for emergent killer applications with a user base worth targeting. Zuckerberg's playbook. Nothing yet, because they've been too short-sighted to implement custom tokens and unbounded UIs.


Amazon functions as a marketplace dynamic, which is defensible if done right, as they have shown.

OpenAI right now is some novel combination of a worker bee and a queryable encyclopedia. If they are trying to make a marketplace argument for this, it would be based on their data sources, which may have a first-mover advantage similar to a marketplace's as those sources get closed off and become more expensive (see e.g. the Reddit and Twitter API changes), except that much of that data ages out, in a way that sellers and buyers in a marketplace do not.

The other big difference with a marketplace is constrained attention on the sell/fulfillment side. Data brokers do not have this constraint — data is easier to self-distribute and infinitely replicable.


>you had a website that was 100% technologically identical to Amazon, but you were willing to make slightly less money

You've basically described Temu


Look at Android vs Apple phones in the US. HN tends to really underestimate the impact of branding and product experience. It’s hard to argue anyone’s close to OpenAI on that front (currently).

Not to mention they’ve created a pretty formidable enterprise sales function. That’s the real money maker long term and outside of Google, it’s hard to imagine any of the current players outcompeting OpenAI.


Dropbox and Slack are examples of another possible outcome: capture the early adopters and stay a player in the space, but still have your lunch eaten by big tech suites that ship similar products.


this is probably where OpenAI is going to end up. there is no clear tech barrier that can't be crossed by competitors. openai came up with all sorts of cool stuff, but almost all of it has seen a peer competitor in just months.

the recent release of deepseek v3 is a good example: an o1-level model trained for under 6 million USD. on cost, it pretty much beat openai by a large margin.


Where are you getting that Deepseek is at the level of o1? In my experience, it's not even as good as Claude.


Even v3?

edit: Fair enough. I'm fishing for opinions too.


My testing has been very limited, so I don't want to opine too much. If anyone has a differing opinion based on more testing, please share, I'm interested!


So I've been using Deepseek for 3 months with the Aider coding assistant. Look up the "Aider LLM leaderboard" for proper test results if you like; in my experience Deepseek V3 is just as good as Claude at less than 1/10th the price. I can't speak about o1; it is just too expensive to be worth it.

OpenAI is going to be beaten on price, wait and see.


Did Aider integrate v3 in its benchmarks? I checked yesterday and I didn't see it...

EDIT: Oh it did, wow, and it's better than Claude! Fantastic, this is great news, thank you!


I agree it's a bit weaker, but you're still paying $20 + tax for ChatGPT on a monthly subscription (or more). You could switch next month, you might regret it and switch back the month after. You might anticipate that faff and not switch to begin with.

(Sure you might say I'll subscribe to both, $20, $40, it's no big deal - but the masses won't, people already agonise over and share (I do too!) video streaming services which are typically cheaper.)


Amazon retail is a marketplace with marketplace dynamics, which is what you are describing. They are connecting buyers and sellers. OpenAI is a SaaS company; you can't compare them.

More interesting for your thread is how Craigslist supplanted print classifieds, and was then challenged if not supplanted by Facebook Marketplace. Both incumbents had significantly better marketplace dynamics prior to being overtaken.


Did we forget it's called the network effect?


If Walmart made a slightly better website you wouldn’t shop there? Cause that’s really all that’s holding me back


Why would they have no products? While Amazon Marketplace is important, they'd still have the greatest selection of products if they only had first-party sales. It's a bad analogy. eBay is a better example, as purely a marketplace.


That’s how it is today, but that was also the case at the birth of e-commerce. Amazon was a large eCom store but many others were successful and switching for price was common. Not so much anymore.

Defaults are powerful over time.


Perplexity costs less and lets you use more models.


Missing the point though. Amazon isn't Amazon because of its tech; speed, reliability, etc. don't matter as much as: inventory (tons of things you can only find there), delivery speed (you can reliably get 99% of things in a week or less, and some items get delivered in HOURS), and customer service (you are right by default; you will be refunded and get free delivery if you encounter issues). That's the ultimate killer. If a competitor managed to do this, a little marketing to get installed on the customer base's phones would definitely eat Amazon's lunch. Temu has shown big strides, but its ethical problems will prevent it from becoming a true threat. A local Temu-like competitor would be a formidable adversary.


> Temu has shown big strides but its ethical problems will prevent them from becoming a true threat.

Does the average Temu user care about the company's ethical problems? Does the average Amazon user?


You spelled monopoly weirdly.


> Amazon might be a good analogy here. I'm old enough to remember when Amazon absorbed billions of VC money, making losses year over year.

Apparently old enough to forget the details. I highly recommend refreshing your memory on the topic so you don’t sound so foolish.

1. Amazon had a very minimal amount of VC funding (less than $100M, pretty sure less than $10M)

2. They IPO’d in 1997 and that still brought in less than $100M to them.

3. They intentionally kept the company at near profitability, instead deciding to invest it into growth for the first 4-5yrs as a public company. It’s not that they were literally burning money like Uber.

4. As further proof they could’ve been profitable sooner if they wanted, they intentionally showed a profit in ~2001 following the dotcom crash.

Edit: seems the only VC investment pre-IPO was KP’s $8M. Combine that with the seed round they raised from individual investors and that comes in under $10M like I remembered.


Amazon has scale economics, branding power, and process power. It would take years to fully rebuild Amazon from scratch even given unlimited money.

Right now, OpenAI's brand is actually probably its strongest "moat" and that is probably only there because Google fumbled Bard so badly.


This shows that it's not that easy to get the AI right, even with the sizable funding available. OpenAI's moat is its actual capacity to provide and develop better solutions.


This is an odd thing to say.

Facebook has an enormous network effect. Google is the linchpin of brokering digital ads, to the point of being a monopoly. Someone else mentioned Amazon's massive distribution network.


> There's no technical moat around online sales. And lots of companies sell online. But Amazon is still the biggest (by a long long way) (at least in the US). Their "moat" is in public minds here.

I don't think that's true. I think it's actually the opposite. Global physical logistics is way harder to scale than software. That's Amazon's moat.


> “Amazon might be a good analogy here. I'm old enough to remember when Amazon absorbed billions of VC money, making losses year over year. Every day there was some new article about how insane it was.”

Not to take away from the rest of your points, but I thought Amazon only raised $8m in 1995 before their IPO in 1997. Very little venture capital by today’s standard.


That caught my eye too, because VCs never poured "billions" into any one company pre-dot-com bust. Unicorns were notable for being valued at $1B or more, but never raised that much in funding, and this only happened much later, a decade and a half after Amazon's IPO. Amazon itself was never a unicorn; its post-IPO capitalization was $300M.

Amazon is famous for making losses after being publicly listed. Also, it was remiss of grandparent to not note that Amazon's losses were intentional for the sake of growth. OpenAI has no such excuse: their losses are just so that they stay in the game; if they attempted to turn profitable today, they'd be insolvent within months.


> Their "moat" is in public minds here.

no, the amazon moat is scale and efficiency, which lead to network effects. The chinese competitors are reaching similar scales, so the moat isn't insurmountable - just not for the average mom-and-pop business.


> There is no technical moat, but that doesn't mean there isn't a moat.

The moat is actually huge (billions of $$). What is happening is that there are people/corps/governments that are willing to burn this kind of money on compute and then give you the open weight model free of charge (or maybe with very permissive and lax licensing terms).

If it wasn't for that, there would be roughly three players in the market (Anthropic and recently Google).


Yeah, that's my thought as well.


> There is no technical moat, but that doesn't mean there isn't a moat.

Gruber writes:

" My take on OpenAI is that both of the following are true:

OpenAI currently offers, by far, the best product experience of any AI chatbot assistant. There is no technical moat in this field, and so OpenAI is the epicenter of an investment bubble. "

It's amusing to me that he seems to think that OpenAI (or xAI or DeepSeek or DeepMind) is in the business of building "chatbots".

The prize is the ability to manufacture intelligence.

How much risk investors are willing to undertake for this prize is evident from their investments, after all, these investors all lived through prior business cycles and bubbles and have the institutional knowledge to know what they're getting into, financially.

How much would you invest for a given probability that the company you invest in will be able to manufacture intelligence at scale in 10 years?


> How much would you invest for a given probability that the company you invest in will be able to manufacture intelligence at scale in 10 years?

What's the expected return on investment for "intelligence"? This is extremely hard to quantify, and if you listen to the AI-doomer folks, potentially an extremely negative return.


> What's the expected return on investment for "intelligence"? This is extremely hard to quantify […] if you listen to the AI-doomer folks, potentially an extremely negative return.

Indeed. And that asymmetry is what makes a market: people who can more accurately quantify the value or risk of stuff are the ones who win.

If it were easy then we'd all invest in the nearest AI startup or short the entire market and 100x our net worth essentially overnight.


That logic applies for AI-cynics rather than AI-doomers — the latter are metaphorically the equivalent of warning about CO2-induced warming causing loss of ice caps and consequent sea level rises and loss of tens of trillions of dollars of real estate as coastal cities are destroyed… in 1896*, when it was already possible to predict, but we were a long way from both the danger and the zeitgeist to care.

But only metaphorically the equivalent, as the maximum downside is much worse than that.

https://en.m.wikipedia.org/wiki/Svante_Arrhenius


> That logic applies for AI-cynics rather than AI-doomers

Fwiw, I don't believe that there are any AI doomers. I've hung out in their forums for several years and watched all their lectures and debates and bookmarked all their arguments with strangers on X and read all their articles and …

They talk of bombing datacentres, and how their children are in extreme danger within a decade or how in 2 decades, the entire earth and everything on it will have been consumed for material or, best case, in 2000 years, the entire observable universe will have been consumed for energy.

The doomers have also been funded to the tune of half a billion dollars and counting.

If these Gen-X'ers and millennials really believed their kids right now were in extreme peril due to the stupidity of everyone else, they'd be using their massive warchest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?

But the true doomer would have to be the ultimate nihilist, and he would simply take himself off the map because there's no point in living.


> or, best case, in 2000 years, the entire observable universe will have been consumed for energy

You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.

> The doomers have also been funded to the tune of half a billion dollars and counting.

I've never heard such a claim. LessWrong.com has funding more like a few million: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc

> If these Gen-X'ers and millennials really believed their kids right now were in extreme peril due to the stupidity of everyone else, they'd be using their massive warchest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?

The political capital to ban it worldwide and enforce the ban globally with airstrikes — what Yudkowsky talked about was "bombing" in the sense of a B2, not Ted Kaczynski — is incompatible with direct action of that kind.

And that's even if such direct action worked. They're familiar with the luddites breaking looms, and look how well that worked at stopping the industrialisation of that field. Or the communist revolutions, promising a great future, actually taking over a few governments, but it didn't actually deliver the promised utopia. Even more recently, I've not heard even one person suggest that the American healthcare system might actually change as a result of that CEO getting shot recently.

But also, you have a bad sense of scale to think that "half a billion dollars" would be enough for direct attacks. Police forces get to arrest people for relatively little because "you and whose army" has an obvious answer. The 9/11 attacks may have killed a lot of people on the cheap, but most were physically in the same location, not distributed across several sites in different countries: USA (obviously), Switzerland (including OpenAI, Google), UK (Google, Apple, I think Stability AI), Canada (Stability AI, from their jobs page), China (including Alibaba and at least 43 others), and who knows where all the remote workers are.

Doing what you hypothesise about would require a huge, global, conspiracy — not only exceeding what Al Qaida was capable of, but significantly in excess of what's available to either the Russian or Ukrainian governments in their current war.

Also:

> After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?

You presume they know. They don't, and they can't, because some of the people who will soon begin working on AI have not yet even finished their degrees.

If you take Altman's timeline of "thousands of days", plural, then some will not yet have even gotten as far as deciding which degree to study.


I somehow accidentally made you think that I was trying to have a debate about doomers, but I wasn't, which is why I prefixed it with "fwiw" (meaning for-what-it's-worth; I'm a random on the internet, so my words aren't worth anything, certainly not worth debating at length). Sorry if I misrepresented my position. To be clear, I have no intense intellectual or emotional investment in doomer ideas nor in criticism of doomer ideas.

Anyway,

> You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.

Here's what Arthur Breitman wrote[^0] so you can take it up with him, not me:

"

1) [Energy] on planet is more valuable because more immediately accessible.

2) Humans can build AI that can use energy off-planet so, by extension, we are potential consumers of those resources.

3) The total power of all the stars of the observable universe is about 2 × 10^49 W. We consume about 2 × 10^13 W (excluding all biomass solar consumption!). If consumption increases by just 4% a year, there's room for only about 2000 years of growth.

"

About funding:

>> The doomers have also been funded to the tune of half a billion dollars and counting.

> I've never heard such a claim. LessWrong.com has funding more like a few million

" A young nonprofit [The Future of Life Institute] pushing for strict safety rules on artificial intelligence recently landed more than a half-billion dollars from a single cryptocurrency tycoon — a gift that starkly illuminates the rising financial power of AI-focused organizations. "

---

[^0]: https://x.com/ArthurB/status/1872314309251825849

[^1]: https://www.politico.com/news/2024/03/25/a-665m-crypto-war-c...


> But only metaphorically the equivalent, as the maximum downside is much worse than that.

Maybe I'm a glass-half-full sort of guy, but everyone dying because we failed to reverse man-made climate change doesn't seem strictly better than everyone dying due to rogue AI


Everyone dying from a rogue AI would be stupid and embarrassing: we used resources that would've been better spent fighting climate change, but ended up being killed by a hallucinating paperclip maximizer that came from said resources.

Stupid squared: We die because we gave the AI the order of reverting climate change xD.


Given the assumption that climate change would kill literally everyone, I would agree.

But also: I think it extremely unlikely for climate change to do that, even if extreme enough to lead to socioeconomic collapse and a maximal nuclear war.

Also also, I think there are plenty of "not literally everyone" risks from AI that will prevent us from getting to the "really literally everyone" scenarios.

So I kinda agree with you anyway — the doomers thinking I'm unreasonably optimistic, e/acc types think I'm unreasonably pessimistic.


The race to singularity.

Will it bring untold wealth to its masters, or will it slip its leash and pursue its own agenda?

Once you have an AI that can actually write code, what will it be able to do with its own source? How much better would OpenAI be with a superintelligence looking for efficiencies and improvements?

What will the super intelligence (and or its masters) do to build that moat and secure its position?


> How much risk investors are willing to undertake for this prize is evident from their investments, after all, these investors all lived through prior business cycles and bubbles and have the institutional knowledge to know what they're getting into, financially.

There are not that many unicorns these days, so anyone who missed out during the last unicorn decades is now in immense FOMO and willing to bet big. Besides, AGI is considered (my own opinion) a personal Skynet (the wet dream of every nation's military) that will do your bidding, hence everyone wants a piece of that pie. Also, when the big cos (M$/Google/Meta) are willing to bet on it, the topic gets much more interesting and carries an invisible seal of approval from technically savvy corps; the previous scammy cryptocurrency gold rush had no bigCo participation (to the best of my knowledge), but GenAI is a full game with all of them.


Part of the risk is the possibility that a few key employees find a much more profitable business model and leave OpenAI, while the early investors are left holding the bag. This seems to be a recurring theme in the tech world.


> Part of the risk is the possibility that a few key employees find a much more profitable business model and leave OpenAI,

The fact that you can state this risk means that market participants already know this risk and account for it in their investment model. Usually employees are given stock options (or something similar to a vesting instrument) to align them with the company, that is, they lose[^1] significant wealth if they leave the company. In the case of OpenAI: "PPUs vest evenly over 4 years (25% per year). Unlike stock options, employees do not need to purchase PPUs […] PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years."[^0]

--

[0]: https://www.levels.fyi/blog/openai-compensation.html

[1]: Ex-OpenAI employee reported losing 85% of his family's net worth. https://news.ycombinator.com/item?id=40406307


Amazon does have a technical moat - fast shipping and warehouse operations. It’s not something that Walmart or Target can replicate easily.

Retail margins are razor thin, so the pennies of efficiency add up to a moat


Walmart and Target have tons of warehouse expertise. Amazon’s moat is their software, where they have a far longer head start and they can afford to pay their employees due to AWS and other services’ high margins, as well as executing well on growth resulting in being able to compensate with equity.

Walmart and Target were far better than Amazon at logistics at the beginning, but they couldn’t execute (or didn’t focus on) software development as well as Amazon.

Meanwhile, Amazon figured out how to execute logistics just as good or better than the incumbents, giving them the edge to overtake them.


Walmart and Target already had warehouses, but the end-to-end system you need for 2-day, 1-day, same day shipping, which includes warehouses/trucks/drivers, is not something they had

They were built for people to come to their store, and the website was a second class citizen for a long time.

But that was Amazon's bread and butter. They built that fast-shipping moat as a pretty established company, and the big retailers were caught off guard


Walmart and Target can get truckloads of items to a relatively few stores. Amazon can get items to people’s houses


Right, which Amazon originally accomplished with huge assistance from expensive contracts with UPS, but they then used software (and new hardware) and mobile internet technology to reduce their delivery costs, figuring out how to incentivize cheaper independent contractors to deliver their packages rather than expensive UPS union employees.


They also built warehouses closer to the customers and bought Kiva to automate their warehouses.


Walmart has been a tech company since the 80s. The way Walmart got so big is that they created a literal network between stores so they could do logistics at scale. They had the largest database at the time.

Walmart is still the largest company in the world by revenue, with Amazon at its heels, though Amazon's profit beats out Walmart's.

A lot of this thread, I think, is just fantasy land that Amazon is somehow:

1. Destroying Walmart and Target in a way they can't compete.

2. Is more tech savvy than Walmart and Target.

C'mon, read the history of Walmart; it's the company that put technology into retail.


Just to emphasize this point: Walmart was to the IT revolution what Amazon is to the Internet revolution. They were among the first in the sector to move beyond paper and industrialize IT for operations, which allowed them to scale way faster and way more efficiently than their competition. Walmart's most powerful executives were IT executives, and many of them went on to have very decorated careers, e.g., Kevin Turner was the CIO immediately prior to becoming COO at Microsoft.

Walmart is not an Internet company. It is definitely a tech company. It's just that its tech is no longer super cool.


Good point that Walmart got its edge against incumbents by incorporating advanced technology into their operations.

But for whatever reason, they took their foot off the pedal, and allowed Amazon to use the next step (networking technology and internet) to gain an edge over the now incumbent Walmart.

> 2. Is more tech savvy than Walmart and Target.

The market (via market cap) clearly thinks Amazon has lots more potential than its competitors, and I assume it is because investors think Amazon will be more successful using technology to advance.


> Amazon might be a good analogy here. I'm old enough to remember when Amazon absorbed billions of VC money, making losses year over year. Every day there was some new article about how insane it was.

When was this true? Amazon was founded Jul 1994, and a publicly listed company by May 1997. I highly doubt Amazon absorbed billions of dollars of VC money in less than 3 years of the mid 1990s.

https://dazeinfo.com/2019/11/06/amazon-net-income-by-year-gr...

As far as I can tell, they were pretty much break-even until Amazon Web Services started raking it in.


I don't know about VC money, but Amazon was well known for spending all revenue on growth and, to my understanding, no one understood why. Why buy stock in a company that doesn't make a profit? Nowadays, it's not unusual. Jeff Bezos was laughed at for it, I think even on TV (some late-night show in the 90s).

Edit: Jay Leno, 1999: https://www.unilad.com/film-and-tv/news/jeff-bezos-audience-...


The talk show segments are pre-planned to have “funny” quips and serve as marketing for the guests.

I was only a teenager, but I assume there had been lots of businesses throughout the course of history that took more than 5 years to be profitable.

The evidence is that investors were buying shares in it, valuing it in the billions. Obviously, this is 1999 and approaching peak bubble, but investing in a business for multiple years and waiting to earn a profit was not an alien idea.

I especially doubt it was alien to a mega successful celebrity and therefore I would bet Jay Leno is 100% lying about “not understanding” in this quote, and it is purely a setup for Bezos to respond so he can promote his business.

> “Here’s the thing I don’t understand, the company is worth billions and every time I pick up the paper each year it loses more money than it lost the year before,” says the seasoned talk show presenter, with the audience erupting into laughter off screen.


They were run as a very unprofitable publicly traded company for a very long time; "VC money" was probably just the parent commenter's rough metonymy for investor money generally.


Facebook and ChatGPT are the opposite kinds of products.


It is not a good analogy because services like Facebook, Amazon, Instagram depend on a critical mass of users (sellers, content sharers, creators, etc.). With SaaS this is not a crucial component as the service is generated by software, not other users.


Amazon's moat-ish stickiness:

1) Almost the best price, almost all the time

2) Reliably fast delivery

3) Reliably easy returns

4) Prime memberships


To follow on, I think OpenAI should split into 2:

1) B2B: Reliable, enterprise level AI-AAS

2) B2C: New social network where people and AIs can have fun together

OpenAI has a good brand name RN. Maybe pursuing further breakthroughs isn't in the cards, but they could still be huge with the start that they have.


> Their "moat" is in public minds here.

Same for OpenAI. Anytime I talk to young people who are not programmers, they know about ChatGPT and not much else. Never heard of Llama nor what an LLM is.


You could have said the same about Netscape in 1997. No one knew what IE was (and those who did thought it was inferior), and certainly no one knew what Chrome was (it didn't exist). Yet the browser market eventually didn't matter: it got commoditized, and what mattered were the applications built on top of it.


Surely by that analogy though ChatGPT is AskJeeves or something, not a browser; GPT3/4/o1/o3 or whatever is the browser.


Amazon built a bit of a moat too: for example I happen to know that they own a patent on the photography setup that allows them to capture consistent product photos at scale.


Amazon spent those billions on warehouses, server, its own delivery network, etc.

Anyone can sell online. But not just anyone has advantages like same-day and next-day shipping.


Amazon's moat is their logistics network, comprehensive catalogue, prices, and value adds (like Prime Video).


Where is the actual full statement and call for funds by the board?


Genuinely curious if anyone has ideas how an LLM provider could moat AI. They feel interchangeable, like bandwidth providers, and it seems like it will be a never-ending game of leapfrog. I thought perhaps we'd end up with a small set of top players just based on the scale of investment, but now I've also seen impressive gains from much smaller companies and models, too.


1) Push for regulations around data provenance. If you train on anything, you have to prove that you had the rights to train on it. This would kill all but the largest providers in the USA, though China would still pose a problem. You could work around that bit by making businesses and consumers liable for usage of models that don't have proof of provenance.

2) If you had some secret algorithm that substantially outperformed everyone, you could win if you prevented leakage. This runs into the issue that two people can keep a secret, but three cannot. Eventually it'll leak.

3) Keep costs exceptionally low, sell at cost (or for free), and flood the market with that, which you use to enhance other revenue streams and make it unprofitable for other companies to compete and unappealing as a target for investors. To do this, you have to be a large company with existing revenue streams, highly efficient infrastructure, piles of money to burn while competitors burn through their smaller piles of money, and the ability to get something of value from giving something out for free.


> China would still pose a problem

Not if regulation prohibits LLMs from China, which isn't that far fetched to be honest.

I think LLMs will turn into a commodity product, and if you want to dominate with a commodity product, you need to provide twice the value at half the cost. OpenAI will need a breakthrough in reliability and/or inference cost to really create a moat.


If US regulation prohibits China from training LLMs? How?

If you mean trying to stop GPUs getting to China, US already has tried that with specific GPU models, but China still gets them.

Seems hard/impossible to do. Even if US and CCP were trying to stop Chinese citizens and companies doing LLM stuff


Not prohibit training, but make it illegal for US companies to embed, use, or distribute LLMs created in China. Basically, the general consumer would need to go out of their way to use the LLM they want, which we know means only a small fraction of people will.


That assumes that only the US is interested in using LLMs commercially, which isn't really true. Even if you can get America to sanction Chinese LLM use, you aren't even going to get American allies to go along with that, let alone everyone else.

China's biggest challenge ATM is that they do not yet economically produce the GPUs and RAM needed to train and use big models. They are still behind in semiconductors, maybe 10 or 20 years (they can make fast chips, they can make cheap chips, they can't yet make fast cheap chips).


Canada would be reluctant. So would Europe, South Asia and so forth. The biggest hurdle for China is the CCP. It is one thing to use physical products from China but relying on the CCP for knowledge may be a step too far for many nations.


Most people won't care if the products are useful. Chinese EVs, Chinese HSR, Chinese industrial robots, Chinese power tech, they are already selling. LLM isn't just a chatbot, it could be a medical device to help in areas without sufficiently trained doctors, for example.


Most people may not care but that doesn't matter if the government cares. I will not be surprised if countries restrict LLMs to trusted countries in the future. Unless there is a regime change in China, seeing it adopted in other countries may be an issue.


It is a very large world though, and nationalism is more of an American shtick at the moment. It is entirely possible that countries will have to decide whether to trade with China or the USA (if the US puts down an effective embargo), but then it really depends on what America offers vs. China, and I don't think that is a great proposition for us.


Nationalism is not merely having a moment as an "American shtick", though I don't know how widespread it is in non-Western developing countries. It certainly might not be so much, there.

The real possibility exists that it would be better to be an independent 'second place' technology center (or third place, etc.) than a pure consumer of someone else's integrated tech stack.

China decided that a long time ago for the consumer web. Europe is considering similar things more than ever before. The US is considering it with TikTok.

It's not hard to see that expanding. It's hard to claim that forcing local development of tech was a failure for China.

Short of a breakthrough that means the everyday person no longer has to work, why would I rather have a "better" but wholly-foreign-country-owned, not-contributing-anything-to-the-economy-I-participate-in-daily LLM or image generator or what-have-you vs a locally-sourced "good enough" one?


We are definitely heading into uncharted territory. It is one thing to use Chinese EVs, but using a knowledge system that is censored (not that other countries' systems won't be censored too) and trained in a way that may not align with a nation's beliefs is a whole different matter.


> using a knowledge system that will be censored (not that other countries won't be censored) and trained in a way that may not align with a nation's beliefs is a whole different matter.

This is the exact same story told to the public when Google was kicked out of China. You are just 15 years late for the party.


> They are still behind in semiconductors, maybe 10 or 20 years

I don't believe they're as far behind as many analyses claim. In fact, making it illegal to export Western chips to China only invokes necessity, the mother of invention, to pressure harder and make it work.


They will definitely throw more resources at it, but without even older equipment from the west, they have a bigger hill to climb as well. There are lots of material engineering secrets that they have to uncover before they get there, so that’s just my estimate of what they need to do it. I definitely could be wrong though, we’ll see.


> China's biggest challenge ATM is that they do not yet economically produce the GPUs and RAM needed to train and use big models. They are still behind in semiconductors, maybe 10 or 20 years (they can make fast chips, they can make cheap chips, they can't yet make fast cheap chips).

You don't need the most efficient chips to train LLMs. Those much slower chips (e.g. those made by Huawei) will probably take longer for training, and they waste more electricity and space. But so what?


Economy isn't efficiency: China has a yield problem on the chips they need, and that slows their progress.


And lest we forget, China is not bothered by building as many nuclear power plants as it needs.


Because China is investing heavily in all sorts of renewable energy. Its annual investment is more than the US and EU combined.

"China is set to account for the largest share of clean energy investment in 2024 with an estimated $675 billion, while Europe is set to account for $370 billion and the United States $315 billion."

https://www.reuters.com/sustainability/climate-energy/iea-ex...


Making something illegal isn't gonna work if there's clearly value in doing it, and isn't actually harmful. Regulatory unfairness is easily discerned.

Look at how competitive Chinese EVs are, and no amount of tariffs is going to stop them from dominating the market. Even if Americans shield their own market from being dominated, their allies will not be able to shield theirs.


Like I've said before, this isn't a physical good that is being sold. LLM is knowledge and many governments and people are going to be concerned with how and who is packaging it. Using LLMs created by China will mean certain events in history will be omitted (which will not be exclusive to China). How the LLM responds will be dictated by the LLM training and so forth.

LLMs will become the ultimate propaganda tool (if they aren't already), and I don't see why governments wouldn't want to have full control over them.


I'm pretty sure 3) is Meta's strategy currently


In the early days of Google, people believed there could be absolutely no moat in search because competition was just "one click away" and even Google believed this and deeply internalized this into their culture of technological dominance as the only path to survival.

At the beginning of ride sharing, people believed there was absolutely no geographical moat and all riders were just one cheaper ride from switching so better capitalized incumbents could just win a new area by showering the city with discounts. It took Uber billions of dollars to figure out the moats were actually nigh insurmountable as a challenger brand in many countries.

Honestly, with AI, I just instinctively reach for ChatGPT and haven't even bothered trying any of the others, because the results I get from OAI are "good enough". If enough other people are like me, OAI gets orders of magnitude more query volume than the other general purpose LLMs, and they can use that data to tweak their algorithms better than anyone else.

Also, with current LLMs, the long-term user experience is pretty similar to the first-time user experience, but that seems set to change in the next few generations. I want my LLM to understand over time the style I prefer to be communicated in, learn what media I'm consuming so it knows which references I understand vs. those I don't, etc. Getting a brand new LLM familiar enough with me to feel like a long-established LLM might be an arduous enough task that people rarely switch.


>LLM familiar enough to me.....

The problem with ChatGPT is that they don't own any platform. Of the 3 billion Android + ChromeOS users and ~1.5 billion iOS + Mac users, they have zero. Their only partner is Microsoft, with 1.5 billion Windows PCs. Considering that a lot of people only do work on a Windows PC, I would argue that personalisation comes from the smartphone more so than the PC. Which means Apple and Google hold the keys.


It is really unbelievable how much money companies will spend to avoid talking to and thoroughly understanding their users. OpenAI could probably learn a lot from interviewing 200 random people every 6 months and seeing what they use and why, but my guess is they would consider that frivolous.


UXR is a thing that all large companies invest in


what makes you think that they don’t?


There's still one thing missing here: the browser. I do not agree with Gruber's analogy that the LLM is the browser. The interface to the LLM is the browser. We have seen some attempts at creating good browsers for LLMs, but we do not have Netscape, IE/Edge, Chrome, FF, Brave yet. Once we do, you would very easily be jumping between these models, and even letting the browser pick a model for you based on the type of question.

Also companies will be (and are) bundling these subscriptions for you, like Raycast AI, where you pay one monthly sum and get access to «all major models».
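To make that "browser picks a model" idea concrete, here is a toy sketch (all model names and keyword rules are hypothetical, and no real product is implied) of how an "LLM browser" might route a question to different backend models based on its type:

```python
import re

# Hypothetical routing table: a pattern for the question type, and the
# (made-up) name of the backend model the browser would dispatch to.
ROUTES = [
    (re.compile(r"\b(code|function|bug|compile)\b", re.I), "code-model"),
    (re.compile(r"\b(prove|integral|equation|solve)\b", re.I), "math-model"),
]
DEFAULT_MODEL = "general-model"

def pick_model(question: str) -> str:
    """Return the backend model name for a question (first matching route wins)."""
    for pattern, model in ROUTES:
        if pattern.search(question):
            return model
    return DEFAULT_MODEL

print(pick_model("Why does this function not compile?"))  # code-model
print(pick_model("Solve this equation for x"))            # math-model
print(pick_model("Summarize this article"))               # general-model
```

A real router would presumably use a small classifier model rather than keyword rules, but the shape is the same: the user asks, the browser dispatches, and the underlying model becomes a swappable commodity.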


> The interface to the LLM is the browser

That is one of the reasons why ChatGPT has a desktop app, so that users can directly interact with it and give it access to their files/apps as well.


But it isn't a browser, because it only interfaces to one single LLM. You need to have multiple models there (like visiting websites).


They all work the same, and each has its own pros and cons for each model they launch. Even the APIs are generic. It's a bit more difficult to lock in 3rd-party partners using your API if your API literally is English. It's going to be a race to the bottom where the value of LLMs is the underlying value of the GPU time they run on plus a few percent markup.
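As a hedged illustration of how generic those APIs are (the second provider and its URL below are made up for the example), the same chat-completions-style request can be aimed at a different vendor just by swapping the base URL and API key:

```python
import json
import urllib.request

# Many LLM vendors expose an OpenAI-compatible /chat/completions endpoint,
# so switching providers is often just a configuration change.
PROVIDERS = {
    "openai": "https://api.openai.com/v1",
    "other": "https://api.example-llm.com/v1",  # placeholder, not a real vendor
}

def build_request(provider: str, model: str, prompt: str, api_key: str):
    """Build the same chat-completions request body for any compatible provider."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{PROVIDERS[provider]}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Identical request body; only the URL and key differ between vendors.
r1 = build_request("openai", "gpt-4o", "hi", "sk-test")
r2 = build_request("other", "some-model", "hi", "sk-test")
```

When the integration surface is this thin, switching costs are close to zero, which is exactly why the lock-in has to come from somewhere other than the API.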


Three ideas. All tried, none worked yet.

First, get government regulation on your side. OpenAI has already looked for this, including Sam Altman testifying to Congress about the dangers of AI, but didn't get the regulations that they wanted.

Second, put the cost of competing out of reach. Build a large enough and good enough model that nobody else can afford to build a competitor. Unfortunately a few big competitors keep on spending similar sums. And much cheaper sums are good enough for many purposes.

Third, get a new idea that isn't public. For instance one on how to better handle complex goal directed behavior. OpenAI has been trying, but have failed to come up with the right bright idea.


One view is that it's not first-mover but first-arriver advantage. Whoever gets to AGI (the fabled city of gold, or silver? Ag pun intended) will achieve exponential gains from that point forward, and that serves as the moat in the limit. So you can think of it as buying a delayed moat, with an option price equivalent to the investment required until you get to that point in time. Either you believe in that view or you don't. It's more of an emotional / philosophical investment thesis, with a low probability of occurrence but a massive expected value. Meanwhile, consumers and the world benefit.


What if the AGI takes an entire data center to process a few tokens per second? Is there still a first-arriver advantage? It seems like the first to make it cheaper than an equivalent-cost employee (fully loaded, including hiring and training) will begin to see an advantage.


What if the next one to get there produces a similar service for 5% less? Race to the bottom.

And would AI that is tied to some interface that provides lock-in even qualify to be called general? I have trouble putting my finger on it, but AGI and lock-in cause a strong dissonance in my brain. Would AGI perhaps strictly imply commodity? (Assuming that more than one supplier exists.)


Depending on how powerful your model is, a few tokens per second per data center would still be extraordinarily valuable. It's not out of the realm of possibility that a next generation super intelligence could be trained with a couple hundred lines of pytorch. If that's the case, a couple tokens per second per data center is a steal.


Good point. It's two conditions, and both have to be true: arrive first, then use that first arrival to innovate with your new AGI pet / overlord to stay exponentially ahead.


Exponential gains from AGI requires recursive self improvement and the compute headroom to realize them. It's unclear if current LLM architectures make either of those possible.


People need to stop talking about "exponential" gains; these models don't even have the ability to improve themselves, let alone at this or that rate. And who wants them to be able to train themselves while being connected to the Internet anyway? I sure don't. All it takes for major disruption is superhuman ability at subhuman prices.


What does AGI even mean in this case? If progress toward more capable and more cost-effective agents is incremental, I don't see a defensible moat. (You can maintain a moat given continued outpaced investment, but following remains more cost-effective)


Since we're talking about the economic impact here, AGI(X) could be defined as being able to do X% of white collar jobs independently with about as much oversight as a human worker would need.

The exponential gains would come from increasing penetration into existing labor forces and industrial applications. The first arriver would have an advantage in being the first to be profitably and practically applicable to whatever domain it's used in.


Why would the gains be exponential? Assume that X, the "first arriver", develops a model with a certain R&D investment, and Y arrives next year with an investment that's an order of magnitude less costly by following, and there's a simple enough switchover for customers. That's what's meant by no defensible moat; a counterexample is Google up to 2022, where for more than a decade nothing else came close in value prop. Maybe X now has an even better model with more investment, but Y is good enough and can charge way less even if their models are less cost-effective.


> ... Google up to 2022 where for more than a decade nothing else came close in value prop. Maybe X now has an even better model with more investment ...

I was very confused at this point because I haven't really seen X as a competitor to Google's ad business, at least not in investment and value prop... Then I saw you were using X as a variable...


> The exponential gains would come from increasing penetration into existing labor forces and industrial applications

Only if they are much cheaper than the equivalent work done by humans, but the first AGI will likely be way more expensive than humans.


Yes, "first to arrive at AGI" could indeed become a moat, if OpenAI can get there before the clock runs out. In fact, that's what's driving all the spending.


None of that would matter if they could find the holy grail though.


Every time a new model comes out I ask it to locate El Dorado or Shangri-La for me. That’s my criteria for AGI/ASI.

Alas I am still without my mythical city of gold.


Somebody is going to write Wizard of Oz for all this and I’m for it.


Who needs a moat if a curtain is good enough?


Moat?


Regulatory capture. Persuade governments that AI is so dangerous that it must be regulated, then help write those regulations to ensure no one else can follow you up the ladder.

That's half the point of OpenAI's game of pretending each new thing they make is too dangerous to release. It's half directed at investors to build hype, half at government officials to build fear.


If Elon Musk's pals regulate away OpenAI because they declared their technology to be too dangerous that would be an ironic turn.


One way to build a technical moat is to build services which encourage lock-in, and therefore make it hard to switch to a competitor. Some of OpenAI's product releases help facilitate that: the Assistant API creates persistent threads that cannot be exported, and their File Search APIs build vector stores that cannot be exported.


Create value at a higher layer and depend on tight integration to generate revenue and stickiness at both layers. See: Windows + MS Office.

They don’t need to make a moat for AI, they need to make a moat for the OpenAI business, which they have a lot of flexibility to refactor and shape.


> Genuinely curious if anyone has ideas how an LLM provider could moat AI.

Patents. OpenAI already has a head start in the game of filing patents with obvious (to everybody except USPTO examiners), hard-to-avoid claims. E.g.: https://patents.google.com/patent/US12008341B2


> curious if anyone has ideas how an LLM provider could moat AI

By knowing a lot about me: the details of my relationships, my interests, my work. The LLM would then be able to function better for me than the other LLMs. OpenAI has already made steps in that direction by learning facts about you.

By offering services only possible by integrating with other industries, like restaurants, banks, etc. This takes years to do, and other companies will take years to catch up, especially if you set up exclusivity clauses. There are lots of ways to slow down your competitors when you are the first to do something.


It is better to leave this up to the «LLM browser» than the LLM, both because of privacy and because of portability.


Create closed source models that are much better than the other ones, don't publish your tricks to obtaining your results, and then lock them down in a box that can't be tampered with? I hope it doesn't go that way.

Alternatively, a model that takes a year and the output of a nuclear power plant to train (and then you can tell them about your tricks, since they aren't very reproducible).


An algorithmic breakthrough IMHO. If someone finds out either how to get 10x performance per parameter or how to have a model actually model real causality (to some degree) they will have a moat.

Also, I suspect that the next breakthrough will be kept under wraps and no papers will be published explaining it.


The people working on it will still be allowed to move between companies, and people talk to each other informally. Those Chinese groups working on this with far fewer GPUs appear to be getting 10x results, tbh. Maybe they have more GPUs than claimed, but we'll see.


DeepSeek’s recent progress with V3 being a case in point which reportedly only cost $5.5M.


You'd have to have either a breakthrough algorithm (that you keep secret forever) or proprietary training data.


Spend money developing proprietary approaches to ML.


Legislation: if you can't compete on merit, then regulate.


I believe their brand, "ChatGPT", is the moat. Here in the EU, every news outlet and random strangers on the street who don't understand anything beyond TikTok also know that AI = ChatGPT. The term is so synonymous with AI that I am willing to bet when someone tells someone else about AI, they say ChatGPT, and the new person will search for and use ChatGPT without any knowledge of Claude/Gemini/Llama et al. This is the same phenomenon as web search now being = Google, despite other prominent ones existing.

The competitors are mostly too narrow (programming/workflow/translation etc.) and not interesting.


A brand isn't much of a moat: MySpace's brand was a moat for a few years. Then Facebook came and ate their lunch.


Why is everyone so sure that technology can’t be the moat? Why are you convinced that all AI systems will be very similar in performance and cost, hence be interchangeable? Isn’t Google the perfect example that you can create a moat by technological leadership in a nascent space? When Google came along it was 10X better than any alternative and they grabbed the whole market. Now their brand and market position make it almost impossible to compete. I guess the bet is that one of the AI companies achieves a huge breakthrough and a 10X value creation compared to its competitors…


ChatGPT isn't ten times better than Gemini or Claude, is it? Even if they magically released such a model, the competition would quickly catch up. The competition has similar or better resources.


This is key. I remember when Google came out. It was amazingly better than anything else I'd tried up to that point. By contrast, I'd argue that OpenAI's advantage is smaller today than it was when they launched GPT-3 in 2020.


Maybe because for 99.9% of users (use cases), what today's LLM technology offers is already good enough?

Or maybe nvidia has the moat. Or silicon fabs have it.


I get the moat idea. But are there really any moats? What's a good moat these days anyway? Isn't being first and "really good" fine as well?


The network effect and cultural mind share are two pretty effective moats.

Meta and X have proven surprisingly resilient in the face of pretty overwhelming negative sentiment. Google maintains monopoly status in web search and in browsers despite not being remarkably better than the competition. Microsoft remains overwhelmingly dominant in the OS market despite having a deeply flawed product. Amazon sells well despite a proliferation of fake reviews and products. Netflix thrives even while cutting back sharply on product quality. Valve has a near-stranglehold on PC game distribution despite its tech stack being trivially replicable. The list goes on.


To be fair, despite Valve's tech stack being easy to replicate, their actual competition mostly hasn't replicated their feature set in full. Epic took a while to ramp up to "shopping carts" despite having pretty large funding, and still doesn't have little gimmicks like trading cards and chat stickers, etc. That's not really a moat, but it seems like the competition doesn't want to invest to exact parity.

(And a lot of stores, like Ubisoft's or EA's, were very feature-light, tbh.)


I have no idea how the other commenter could come to the conclusion that Steam is trivially replaceable.


Yeah, on face value, Epic looks like they want to compete with Steam, but the developers of the Epic Game store are really phoning it in.

They have had years and still not even close, from both the consumer side and the developer side.


Almost all of these are either free or a marketplace (which is free to access). Only Windows is technically not free, but it comes free with the hardware you buy. And they came at a time when there was no real competition. It's very hard to beat a free product.


Good point. Thinking... Facebook's infrastructure is enormously expensive to run, but they manage to offer it for free. And ChatGPT can place ads as easily as Google. So OpenAI needs to make it cheap to run, then add ads, then victory.

People citing their current high prices would be right. But human brains are smarter than ChatGPT, and vastly more energy efficient. So we know it's possible.

Does this oversimplify?


Steam isn't trivially replaceable. What are you talking about? No other platform matches them on feature set.

Amazon also has a significant advantage in its logistics that underpin their entire business across the globe and that nobody else can match.

You're also wrong about how Google maintains its monopoly, or Microsoft.

All I see is bias and an unwillingness to understand, well, any of the relevant topics.


Network effects are a moat.

A vertically integrated system that people depend on with non-portable integrations is a moat.

Regulatory Capture is a moat.


What is the moat of Google Search? To me, the LLM is the first and only disruptor of Search so far.


It used to be their high search quality and difficult-to-replicate technology. Now they don't have that edge.


Well, Google’s willingness to pay potential competitors tens of billions of dollars per year to disincentivize developing a competing search engine is kind of a moat.


Their moat is lock-in across services now. People are "Google" users not just "web search" users.


Google’s moat is eroding (filling?), but over the years it shifted from best product/tech to best brand.


Google is still good enough for most searches, say finding the web page of a restaurant or some cursory information on a popular topic.

I suppose a competitor would have to be really good at those times when most users need something better.

So it is brand and familiarity. Google would need to get really bad even at the most basic things to be replaced.


muscle memory


Here's an idea for a moat: "You know all the ethically extremely questionable grabbing of personal data and creative work we did to build our LLM? Yeah, that's illegal now. For anyone else to do, that is."


Copyright is an outstanding moat. Think Disney.


> In 1995 investors mistakenly thought investing in Netscape was a way to bet on the future

Maybe they were just too early, later on it turned out that the browser is indeed a very valuable and financially sound investment. For Google at least.

So having a dominant market share can indeed be valuable even if the underlying tech is not exactly unobtainable by others.


A browser would be worthless to Google without DoubleClick ad network monopoly, which they acquired years before Chrome. Netscape tried to charge for their browser and stopped because Microsoft was giving away Internet Explorer free with every Windows PC.


Adding to this, inference is getting cheaper and more efficient all the time. The investment bubble is probably the biggest reason why inference hardware is so expensive at the moment and why startups in this sector are only targeting large scale applications.

Once this bubble bursts, local inference will become even more affordable than it already is. There is no way that there will be a moat around running models as a service.

---

Similarly, there probably won't be a "data moat". The whole point of large foundation models is that they are great priors. You need relatively few examples to fine tune an LLM or diffusion model to get it to do what you want. So long as someone releases up to date foundation models there is no moat here either.


> OpenAI has a short-ish window of opportunity to figure out how to build a moat.

and they are probably going to go with regulatory moat over technical moat.


> OpenAI has a short-ish window of opportunity to figure out how to build a moat.

See the scramble to make themselves arbiters of "good AI" with regulators around the world. It's their only real hope but I think the cat's already out of the bag.


This is what I have been asking people in the know too. It seems like developing this is straightforward, so I'm not sure why OpenAI has a lead. And with how fast things are moving, the gap should be closed almost instantly.


Honestly the more I use ChatGPT, the more I see of their "moat-ish" opportunities. While the LLM itself may be ultimately reproducible, the user experience needs a UI/UX wrapper for most users. The future is not user exposure to "raw" LLM interactions but rather a complex workflow whereby a question goes through several iterations of agents, deterministic computation using "traditional" processing, and figuring out when to use what and have it all be seamless. ChatGPT is already doing this but it's still far from where it needs to be. It's totally possible for a company to dominate the market if they're able to orchestrate a streamlined user experience. Whether that will ultimately be OpenAI is an open question.


I guess that explains all of the 'our product is an existential threat' chuunibyou bullshit. Trying to get a regulatory moat instead.


...or they could be Wikipedia.

But greed is good!



