It's the next logical step after companies shoving AI into every corner of their own products regardless of whether their users want it - now they're paying other companies to shove AI things into their products regardless of whether their users want it. Genuine user interest doesn't come close to justifying their insane valuations, so they have to put their thumb on the scale by shoving it in everyone's face and then pretending that's the same thing.
See also: Google's AI summaries, which always get top billing so they can tally nearly every search up as an "AI engagement" regardless of user intent, and can't be disabled because that would get in the way of what's clearly the actual goal (to juice the AI metrics as hard as possible, user experience be damned).
It's absolutely wild and scary watching how much money is being spent on pushing AI down the unwilling public's throats. Nobody wants this. Yet we're hiring expensive AI researchers and developers, buying datacenters full of GPUs, and now paying "partner" companies, to deliver this thing that nobody is asking for. What in the world is going on here? What am I not understanding?
> What in the world is going on here? What am I not understanding?
I don't think you're wrong in any way. I've been in denial for the past few years because the world is going crazy with AI and politics. But it's actually very good for me, because I'm shunning all that shit and focusing more on local people and local problems: taking care of the finances of a non-profit, being more available for my friends and relatives, solving actual problems that people have, etc. Denial is great and it makes me more active. The downside is that I now have the calendar of a CEO and less time for myself, but I believe the world needs some care, and we can all do something about it by doing small stuff.
What's not to understand? Enormous amounts of money have accrued to a tiny proportion of humanity in the past 30 or so years. There is no way there wouldn't be tons of waste when spending decisions are made by so few people.
Now add in the fact that these decision makers are often openly avaricious egomaniacs who don't even make symbolic efforts to help the poor and vulnerable, and that narrows the scope of their spending to wasteful, sometimes outright harmful investments.
> Enormous amounts of money have accrued to a tiny proportion of humanity in the past 30 or so years. There is no way there wouldn't be tons of waste when spending decisions are made by so few people.
The other issue with that isn't just the decision making but the fact that so much capital is accruing at the top that they have nowhere else to put it all, meanwhile average people are struggling to pay rent and buy food...
Reaganism set the wheels in motion but those wheels didn't actually come off until events like the dotcom boom normalized billion dollar valuations for half baked MVPs, creating a generation of future nutters like Thiel, Bezos, Zuck and Musk. Things accelerated even further with zero interest rate policy post-2008, making capital free for this "job creator" class while working people were charged "market rates" for home and education loans.
But the land they're grabbing is desert with no water and no access roads. Does anyone besides the few with their wealth invested in AI believe that AI is the next iPhone moment?
> Does anyone besides the few with their wealth invested in AI believe that AI is the next iPhone moment?
It doesn't matter, because for such industry-wide hype there are no consequences for being wrong. If a CEO ignores AI and it does become the next iPhone moment, they'll be deposed in short order. If "everyone" is wrong and nothing comes of AI, they'll write off some investments, write some "What we learned" LinkedIn posts, and carry on. Our existing framework has no incentives to correct or inoculate against hypes led by the management/capital classes.
I think it’s probably as simple as some old fool on Sand Hill Rd getting suckered into writing a check for this nonsense, with promises of world domination via AI’s infinite profit at minimal cost. And to keep the whole charade going, everyone has to pretend that this will eventually see some returns, otherwise the whole farcical system will come crashing down. We can only hope that happens and some correction rears its head.
The end result being that all of us suffer in some way for the greed of a handful.
The corporate world is overrun with executives designing products that look like solutions to other executives but that don't solve any problems people in the real world actually have.
It is funny seeing xAI, the trash-tier AI company, integrate with Telegram, the trash-tier messaging service.
Enshittification is often a company-wide culture problem, but the fish does rot from the head.
There are a variety of reasons why a company might begin to over-incentivize short-term gain (or high-stakes risk-taking) at the expense of customer happiness and possibly to the detriment of the company's long-term interests.
For example: Growth stagnation, an existential threat, a pessimistic long-term financial outlook, bad reward structure, low customer regard, organizational infighting, low employee retention, etc.
The sudden emergence of AI and a volatile economy are triggering several of those for a boatload of companies. And, well, show me the incentive and I'll show you the outcome.
Example: Since this deal was cash + equity, I wondered whether Telegram has a public valuation. I searched Google and got an AI summary saying that the market capitalization of TELEGRAM is $7.4767.
That's dollars, not billions of dollars, because Google's AI summary was referring to some scam coin whose total market cap is a Big Mac and fries, plus or minus. It seems now to have updated to refer to the messaging app and _their_ (probably also scam) coin.
xAI has essentially zero market outside of Twitter, and with the recent system prompt shenanigans from Elon, I cannot imagine anyone signing up to pay for API hits when there's a non-zero chance your application will suddenly start complaining about white genocide. They've painted themselves into a corner with the product and Elon's increasingly erratic behavior; now they have to pay companies to use their service.
But are people paying for API hits and developing application stacks around the service? It's clear who uses it; it's a lot less clear who might be willing to pay for it.
Because that isn't disabling them - disabling them would mean an actual option to turn them off.
Of course there are hacks around it, but ultimately Google wants us to have the AI summaries and doesn't care about user choice, so the best course of action is to change search engines.
or instead of searching for "best dog breeds for apartments" change it to "best f'in dog breeds for tiny s-hole apartments" - feels much more cathartic
We've seen multiple companies in the last twelve months blow past every previous benchmark for fastest-growing company ever. It's become pedestrian for some of these companies to scale to $10M ARR in a quarter, which has never happened before.
"Genuine user interest doesn't come close to justifying their insane valuations" - classic HN copium