AI is built by the same companies that built the last generation of hostile technology, and they're currently offering it at a loss. Once they have encrusted themselves in our everyday lives and killed the independent web for good, you can bet they will recoup their investment.
That is indeed likely to come, but having experienced user-hostile technology before, the appropriate response is to prepare. Some trends suggest this is already happening (though so far that only appears to be part of the HN crowd): moving more and more behind a local network. I know I am personally exploring local LLM integration for some workflows to avoid the obvious low-hanging fruit most providers will likely go for. But yes, the web in its current form might perish.
Would be cool if local libraries got together and figured out how to offer access to community LLMs. That fits better with my idea of the future of AI than having the now-dystopian tech companies running/defining it all.
Is there another edge to this sword? Can we fight back with LLMs that ignore sources with all the tracking / SEO and other garbage? I'd love to tell my local LLM "I hate Pinterest", for instance, and have it just go "okay, Pinterest shields are up".
Seconding the Kagi thing. You don't even need an LLM. If you search something like 'camping gear', search results pop up right away, no LLM response. However, by each site's link is a little shield warning you about how many trackers and ads there will be on the page. Next to that is a little kebab menu that lets you either boost the site or remove it from your search. That's also where their AI functionality is hidden: you can get a page summary or ask questions about that page.
If you'd rather have the quick AI summaries a la Google, you can put a question mark at the end of your search term: 'lawsuits regarding ferrets?'
And yeah, as the sibling commenter pointed out, you can go into Kagi's preferences and explicitly rule out Pinterest (or whatever site you want) from any of your searches forever.
It's a market where nobody has a particularly deep moat and most players are charging money for a service. Open weight models aren't too far behind proprietary models, particularly for mundane queries. The cost of inference is plummeting and it's already possible to run very good models at pennies per megatoken. I think it's unreasonably pessimistic to assume that dark patterns are an inevitability.
For the sake of argument: none of the typical websites with the patterns described have a moat, and the cost of hosting them plummeted a while ago. It's not inevitable, but it's likely, and the patterns will be darker if they are embedded in the models' output...
You do realize of course that every service that now employs all these dark patterns we're complaining about was already profitable and making good money, and that simply isn't good enough? Revenue has to increase quarter-to-quarter otherwise your stock will tank.
It's simply not enough that a product "makes money"; it must "make more money, every quarter, forever", which is why everything, not even limited to tech but every product, absolutely BLOWS. It's why every goddamn thing is a subscription now. It's why every fucking website on the internet wants an email and a password so they can have you activate an account, and sell a known active email to their ad partners.
I wish I could put 10 votes on this instead of just one. It just bothers me how success can be defined as something absurdly impossible like that.
We're already at a wild stage of the rot caused by the growth-forever disease: the most successful companies are so enormous that further profit increases would require either absurd monopoly status (Chase, Wells Fargo, B of A all merge!) or to find increasingly insane ways of extracting money (witness network TV: First they only got money from ads, then they started leeching additional money streams from cable providers, now most have added their own subscription service that they also want you to pay for, on top of watching ads.)
ISPs used to just charge a fee; now they also sell personal information about your browsing behavior for extra revenue, cap your bandwidth usage and charge for more, and one of them (Comcast) owns a media conglomerate.
We need to move LLMs into libraries. They are already our local repositories of knowledge and make the most sense as the hosts/arbiters of it, not dystopian tech companies whose main profits come from dark patterns. I get AI for companies being provided by businesses, but for the average person, getting it from libraries just makes so much more sense and would be the natural continuation/extension if we had a healthy/sane society.
If you can't tell when a big expensive LLM is subliminally grooming you to like/dislike something or is selective with information, then another (probably small and cheaper) LLM somehow can? Arms race?
> If you can't tell when a big expensive LLM is subliminally grooming you to like/dislike something or is selective with information
This is already there and in prod, but it's called AI "safety" (really corporate brand safety). The largest LLMs have already been shown to favor certain political parties based on the preferences of the group doing the training. Even technical people who should know better naively trust the response of an LLM enough to allow it to make API calls on their behalf. What would prevent an LLM provider from training their model to learn and manipulate an API to favor them or a "trusted partner" in some way? It's just like the early days: "it's on the Internet, it has to be true".
I fail to see how that will work out. Just as I have an ad blocker now, I could have a very simple local LLM in my browser that modifies the search AI's answers and strips obvious ads.
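Roughly what I have in mind, as a minimal sketch. It assumes a local Ollama instance on its default port; the model name and the filtering prompt are placeholders, nothing that exists today:

```python
import requests  # assumes the Ollama HTTP API on its default localhost port

# Hypothetical instruction for the local "ad blocker" model.
STRIP_PROMPT = (
    "Rewrite the following answer with any advertising, sponsored mentions, "
    "or product plugs removed. Keep everything else verbatim:\n\n{answer}"
)

def strip_ads(answer: str, model: str = "llama3.2") -> str:
    """One round-trip through a small local model; nothing leaves the machine."""
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model,
              "prompt": STRIP_PROMPT.format(answer=answer),
              "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]

# clean = strip_ads(search_ai_answer)  # run on every answer before display
```

The hard part is the arms race the sibling comment mentions: subtle product placement woven into the prose is much harder to strip than an obvious banner.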
Yep. Right now, even with cookies, inferences about individual humans are minimal, but exposing your whole pattern of speech makes you a ripe target for manipulation at a scale that some may not fully understand yet. 4o is very adept at cold reading, and it is genuinely fascinating to read those conversations from that perspective alone. Combine it with style evaluation and a form of rudimentary history analysis, and you end up with an actual dossier on everyone using that service.
Right now, we are lucky, because it is the least altered version of it (and we all know how many filters public models have to go through).
For-hire driving and having employees came with very specific legal requirements in most areas. Let's see how that turned out. Oh yeah, the dystopian tech companies won, and we the people got the benefit of job rules being thrown out for the beauty that is independent-contractor 'gig work'.
> as clearly identifying sponsored content is required
Citation needed?
Once AI content becomes monetized with ads, it's not going to look like the ads/banners we're used to. If you're extrapolating from the past, you don't understand the potential of AI. Noam Chomsky's Manufacturing Consent is going to look quaint by comparison.
To combat this, maybe we can cache AI responses for common prompts somehow and make some kind of website where people could search for keywords and find responses that might be related to what they want, so they don’t have to spend tokens on an AI. Could be free.
I would be curious to see what would happen if you could write every query/response from an LLM to an HTML file and then serve that directory of files back to google with a simple webserver for indexing.
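As a rough sketch of that idea (stdlib only; the file naming scheme and the bare-bones HTML layout are just illustrative choices):

```python
import functools
import hashlib
import html
import http.server
import pathlib

CACHE_DIR = pathlib.Path("llm_pages")
CACHE_DIR.mkdir(exist_ok=True)

def save_exchange(prompt: str, response: str) -> pathlib.Path:
    # Derive the file name from the prompt so a repeated query overwrites
    # its page instead of creating a duplicate.
    name = hashlib.sha256(prompt.encode()).hexdigest()[:16] + ".html"
    path = CACHE_DIR / name
    path.write_text(
        "<!doctype html><html><head><title>{}</title></head>"
        "<body><h1>{}</h1><p>{}</p></body></html>".format(
            html.escape(prompt[:80]), html.escape(prompt), html.escape(response)
        ),
        encoding="utf-8",
    )
    return path

if __name__ == "__main__":
    save_exchange("what is a megatoken?", "A million tokens.")
    # Serve the directory so a crawler can fetch and index the pages.
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=str(CACHE_DIR)
    )
    http.server.HTTPServer(("", 8000), handler).serve_forever()
```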
1. Someone prompts
2. Server searches for equivalent prompts; if something similar was asked before, return that response from the cache.
3. If prompt is unique enough, return response from LLM and cache new response.
4. If user decides response isn’t specific enough, ask LLM and cache.
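A minimal sketch of steps 1-4. A real system would use embedding similarity; here difflib's string similarity stands in so the sketch stays stdlib-only, and the 0.85 threshold is an arbitrary guess:

```python
import difflib

class PromptCache:
    def __init__(self, threshold: float = 0.85):
        self.threshold = threshold                 # how similar counts as "equivalent"
        self.entries: list[tuple[str, str]] = []   # (prompt, response) pairs

    def lookup(self, prompt: str) -> str | None:
        # Step 2: find the most similar previously asked prompt.
        best, best_score = None, 0.0
        for cached_prompt, response in self.entries:
            score = difflib.SequenceMatcher(
                None, prompt.lower(), cached_prompt.lower()
            ).ratio()
            if score > best_score:
                best, best_score = response, score
        return best if best_score >= self.threshold else None

    def ask(self, prompt: str, call_llm, force_fresh: bool = False) -> str:
        # Step 4: the user can insist on a fresh answer.
        if not force_fresh:
            cached = self.lookup(prompt)
            if cached is not None:
                return cached
        # Step 3: prompt is unique enough; call the model and cache the result.
        response = call_llm(prompt)
        self.entries.append((prompt, response))
        return response
```

The linear scan is fine for a sketch; at any real scale you'd want a vector index instead.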
Look up the numbers. OpenAI actually loses money on every paid subscription, and they’re burning through billions of dollars every year. Even if you convince a fraction of the users to pay for it, it’s still not a sustainable model.
And even if it were the highest-profit branch of the company, they would still see a need to do anything possible to further increase profits. That is often where enshittification sets in.
This is currently the sweet phase, where growing, and thus gaining attention and customers as well as locking in newly established processes, is what dominates. Unless technical AI development stays as fast as it was in the beginning, this is bound to change.
I actually wondered about this myself, so I asked Gemini in a long back-and-forth conversation.
The takeaway from Gemini is that subscriptions do lose money on some subscribers, but it is expected that not all subscribers use up their full quota each month. This has been true for non-AI subscriptions since the beginning of the subscription model (e.g. magazines, Game Pass, etc.).
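To make that breakage argument concrete, a back-of-the-envelope sketch; every number here is invented for illustration, not anyone's actual figures:

```python
price = 20.00        # monthly subscription price
heavy_share = 0.15   # fraction of subscribers who max out their quota
heavy_cost = 60.00   # monthly inference cost of a maxed-out subscriber
light_cost = 4.00    # monthly inference cost of a typical light user

avg_cost = heavy_share * heavy_cost + (1 - heavy_share) * light_cost
print(f"average cost per subscriber: ${avg_cost:.2f}")  # $12.40
print(f"blended margin: ${price - avg_cost:.2f}")       # $7.60
```

Each heavy user is a $40 loss on their own, yet the blend can still be positive.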
The other surprising (to me, anyway) takeaway is that the AI providers have some margin on each token for pay-as-you-go (PAYG) users, and that VC money is not necessary for them to continue providing the service. The VC money is capital expenditure on infrastructure for training.
Make of it what you will, but it seems to me that if they stop training they don't need the investments anymore. Of course, that sacrifices future potential for profitability today, so who knows?
That’s just a general explainer of subscription models. As of right now VC money is necessary for just existing. And they can never stop training or researching. They also constantly have to buy new gpus unless there’s at some point a plateau of ‘good enough’
The race to continue training and researching, however, is driven by competition that will fall away if competitors also can't raise more money to subsidise it.
At that point the market may consolidate and progress slow, but not all providers will disappear - there are enough good models that can be hosted and served profitably indefinitely.
For some uses, sure. But for plenty of uses, the needed information can be provided in context, via RAG, or via tool use, or freshness simply doesn't matter.
Even for the uses where it does matter, unless providers get squeezed down to zero margin, it's not that new models will never happen, but that the speed at which they can afford to produce large new models will slow.
That's the source you chose to use, according to you.
You don't mention cross-checking the info against other sources.
You have the "make of it what you will" at the end, in what appears to be an attempt to discard any responsibility you might have for the information. But you still chose to bring that information into the conversation. As if it had meaning. Or 'authority'.
If you weren't treating it as at least somewhat authoritative, what was the point of asking Gemini and posting the result?
Gemini's output plus some other data sources could be an interesting post. "Gemini said this but who knows?" is useless filler.
The mediocre AI summaries aren't promoting Gemini when you can't use them to start a chat on Gemini. They're effectively ads in the search results, for no benefit.
What is also interesting is that one of the biggest search companies is using it to steer traffic away from its former 'clients': the very websites Google talked into slathering advertisements all over themselves, by giving them money and traffic. That worked because Google got a pretty good cut of it. But now only Google gets the 'above the fold' cut.
That has two long-term effects. One: the places they harvest the data from will go away. Two: their long-term revenue will decrease, as traffic drops and fewer ads are shown (unless Google goes full plaster-it-everywhere like some sites).
AI is going to eat the very companies making it. Even if the answers are kind of 'meh'. People will be fine with 'close enough' for the majority of things.
Short term they will see their metric of 'main site retention' going up. It will however be at the cost of the websites that fed the machine.
You don't even need to bring up corporate collusion, countless price-gouging schemes, or the entire enshittification movement to understand that competition discovers the dark patterns. Dark patterns aren't something to be avoided; they're the natural evolution of ever-tighter competition.
When the eyeball is the product, you get more checks if you get more eyeballs. Dark patterns are how you chum the water to attract the most product.
Fennel absolutely rocks for creating games. It integrates with TIC-80 (an open source fantasy console), LÖVE (a game engine), and PICO-8. There are lots of blog articles on getting started. Check it out!
Can't say I made anything worth mentioning. There are some bigger templates available that I am sure do more useful things, but I prefer something small enough that I can see what is going on.
Worked fine even for getting things to run in LoveDOS, a port of an older Love2D version to MS-DOS. In practice compilation was a bit too slow for comfort, so a better way was to pre-compile the Fennel scripts to Lua and just run those.
I installed an LSP server for Fennel that comes with optional built-in code completion for both Love2D and TIC-80. Works well in Emacs.
I was just talking to my wife about playgrounds using shredded tires as the "mulch". I don't know where the rubber comes from, if and how it is cleaned, or what particulate material it carries, but it seems dubious at best.
It's banned for new installs in Europe, and existing installations have to be replaced by 2031 [1], although primarily to get rid of a microplastics emission source. Additionally, shredded tire rubber as infill is being investigated for contamination with PAHs (polycyclic aromatic hydrocarbons) [2].
Personally, I suspect vehicles more. We got a grip on particulate emissions from diesel engines, but brakes and tires still emit fine dust particles. The average one-way commute is 30 minutes in the US, so you're breathing pretty filthy air for an hour a day...
This would be 'pretty easy' to demonstrate by comparing cancer rates by people who live adjacent to busy highways against those who live in rural areas. 'Pretty easy' is always nonsense in observational studies because the confounders have confounders that are confounded by other confounders; even more so for things that are relatively poorly understood, like cancer. But it's at least something that would certainly get (and probably already has been?) funded.
There is at least a link between heavy road traffic and stunted lung growth in children, as well as at least 10% higher lung cancer rates [1]. Additionally, noise from road traffic has been linked to increased rates of cardiovascular disease and mental health issues [2].
Both of these are compounded by the fact that people living next to major roads tend to be poorer, so there is a socio-economic issue present as well.
Hopefully not. I keep the windows up and recirculate the air (which should be filtered on its way in anyway). I live a bit closer to the road than I'd like considering the traffic levels, though, so even keeping windows open in the house could be an issue.
1. 6PPD (N-(1,3-dimethylbutyl)-N′-phenyl-p-phenylenediamine)
• Purpose: Antioxidant to prevent rubber cracking.
• Danger: When it reacts with ozone and air, it forms 6PPD-quinone, a toxic compound shown to kill salmon and other aquatic life at trace levels.
• Status: Under increasing regulatory scrutiny (e.g., Washington State has started restricting it).

2. Polycyclic Aromatic Hydrocarbons (PAHs)
• Purpose: Byproducts from extender oils and carbon black.
• Danger: Known carcinogens, mutagens, and endocrine disruptors. Persist in the environment and can leach from tire wear particles.
• Status: Regulated in the EU; linked to air and soil contamination.

3. Benzothiazoles (e.g., 2-mercaptobenzothiazole)
• Purpose: Vulcanization accelerators.
• Danger: Toxic to aquatic organisms, possibly carcinogenic, and bioaccumulative.
• Status: Found in tire leachate and considered a contaminant of emerging concern.
Nothing definitive about harm to human welfare yet, as far as I know.
"When tires wear on pavement, 6PPD is released. It reacts with ozone to become a different chemical, 6PPD-q, which can be extremely toxic — so much so that it has been linked to repeated fish kills in Washington state.... Testing by a British company, Emissions Analytics, found that a car's tires emit 1 trillion ultrafine particles per kilometer driven — from 5 to 9 pounds of rubber per internal combustion car per year....
a team of researchers, led by scientists at Washington State University and the University of Washington, who were trying to determine why coho salmon returning to Seattle-area creeks to spawn were dying in large numbers.... in 2020 they announced they'd found the culprit: 6PPD....
Tests by Emissions Analytics have found that tires produce up to 2,000 times as much particle pollution by mass as tailpipes."
My (wealthy) high school had a "turf" field which used little rubber pellets as the "dirt". Those were probably shredded tires too. During football season you would see them tracked around the school, and if you were a football player or in the band, they would show up at your house.
Also, they would periodically dump "more dirt" onto the field, once every year or so. Not sure if they vacuumed the old stuff up or just dumped more on top, but sometimes you would go out there and there would be a huge pile of rubber in the middle, which I guess got spread out later.
Where I live, during the rugby and soccer seasons it's not uncommon for the 'normal' pitches to be unplayable due to consistent periods of rain.
A number of schools, and public facilities, near me have switched to plastic pitches for this reason. I'm not advocating for them but there is a rationale.
BTW it's not just that being very muddy makes it difficult to play on but that using the pitch in that state trashes the grass.
America was always just an idea. For the idea to work, the masses need to subscribe to and appreciate it. Americans willfully took the country in this direction. It's democracy at work, but delivering a "different agenda" than many anticipated.
They won’t get power again in a meaningful way. The last election was their “last stand”. The U.S. has a rigged court and gerrymandered senate. Kamala was right about one thing, “we’re not going back”. Unfortunately, the context was wrong. In this case, it’s, “we’re not going back to being a functional democracy”.
The state borders themselves were gerrymandered in the 19th century to influence the electoral college. That's why, for example, there are all these very empty northern states like the two Dakotas, Montana, and Wyoming that collectively have fewer people than LA but get eight senators between them.
As an example: in Texas, there are laws and processes that discourage voting in high-population-density areas that trend less conservative. Can you think of a benign reason for a law that bans providing water to people waiting in line to vote, in a state that gets really hot?
https://www.youtube.com/watch?v=5KVDDfAkRgc