I'm not going to disagree because greed knows no bounds, but that could be RIP for the enthusiast crowd's proprietary LLM use. We may not have cheap local open models that beat the SOTA, but is it possible to beat an ad-poisoned SOTA model on a consumer laptop? Maybe.
If future LLM business models mimic the rest of the industry, 80% of the prompt will be spent preventing ad recommendations, and the agent will reluctantly comply while suggesting it's malicious of you to even ask.
I'm really looking forward to something like a GNU GPT that tries to be as factual, unbiased, libre and open-source as possible (possibly built/trained with Guix OS so we can ensure byte-for-byte reproducibility).
On the flip side, there could be a cottage industry churning out models of various strains and purities.
This will distress the big players, who want an open field to make money from their own adulterated, inferior products, so home-grown LLMs will probably end up outlawed or something.
Yes, the future is in making a plethora of hyper-specialized LLMs, not a sci-fi assistant monopoly.
E.g., I'm sure people will pay for an LLM that plays Magic: The Gathering well. They don't need it to know about German poetry or Pokemon trivia.
This could probably be done as LoRAs on top of existing generalist open-weight models. Envision running this locally and having hundreds of LLM "plugins", a la phone apps.
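To make the "plugin" framing concrete, here's a toy sketch of the LoRA idea in pure Python: one frozen base weight matrix plus tiny low-rank adapters (A, B pairs) you can hot-swap per task. The class and method names are purely illustrative, not a real library API; in practice you'd use something like Hugging Face's peft on an actual model.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

class LoraPlugins:
    """Hypothetical sketch: a frozen base weight with swappable LoRA adapters."""

    def __init__(self, base_weight):
        self.w = base_weight   # frozen base model weight (d x d), never retrained
        self.adapters = {}     # name -> (A, B, scale); each adapter is a "plugin"
        self.active = None

    def load_adapter(self, name, a, b, scale=1.0):
        # A is d x r and B is r x d with r << d, so each plugin is
        # tiny compared to the base weights -- like installing an app.
        self.adapters[name] = (a, b, scale)

    def set_adapter(self, name):
        self.active = name     # swapping tasks is just a pointer change

    def forward(self, x):
        y = matmul(x, self.w)
        if self.active is not None:
            a, b, scale = self.adapters[self.active]
            delta = matmul(matmul(x, a), b)   # x @ A @ B, the low-rank update
            y = [[v + scale * d for v, d in zip(ry, rd)]
                 for ry, rd in zip(y, delta)]
        return y

# Demo: identity base model, one rank-1 "mtg" adapter bolted on.
model = LoraPlugins([[1.0, 0.0], [0.0, 1.0]])
model.load_adapter("mtg", [[1.0], [0.0]], [[0.0, 2.0]])
model.set_adapter("mtg")
out = model.forward([[1.0, 1.0]])  # -> [[1.0, 3.0]]
```

The key property is that the base weights are shared and read-only, so a hundred adapters cost a hundred small matrices, not a hundred full models.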