Can model providers be trusted to not be paid by advertisers? Can brands effectively influence how models react to them and their competitors?
I deff imagine brands flooding the internet with llm.txt files linked to their home pages but hidden from human visitors just to boost themselves up... what is the antidote?
Can attempts to influence LLM's be detected and reported?
Good question. Personally, I feel that answer engines will go the same route as search engines and start monetizing brand mentions and I feel this will be done openly, similar to ads. That being said I feel that there is room for brands to improve their presence as well. Most models claim neutrality at the moment, but we’ve already seen anecdotal cases where some brands consistently outperform others in AI responses with no clear reasoning
On your question regarding how influence can be detected.....
That’s a big part of what we’re working on at MentionedBy.ai. We track brand mentions across multiple models over time and flag sudden shifts — e.g., a competitor showing up overnight in all responses, or factual distortions creeping in. Think of it as version control + monitoring for the "AI perception layer."
As for llm.txt abuse.....
Yes, totally possible. We expect a wave of LLM-targeted SEO — structured data, vector bait, invisible prompts, etc. One idea we’re exploring is a kind of “LLM spam index” — patterns of over-optimization or hallucination correlation that could indicate manipulation attempts.
Can model providers be trusted to not be paid by advertisers? Can brands effectively influence how models react to them and their competitors?
I deff imagine brands flooding the internet with llm.txt files linked to their home pages but hidden from human visitors just to boost themselves up... what is the antidote?
Can attempts to influence LLM's be detected and reported?