Comment by wrs
19 hours ago
This is where LLM advertising will inevitably end up: completely invisible. It's the ultimate "influencer".
Or not even advertising, just conflict of interest. A canary for this would be whether Gemini skews toward building stuff on GCP.
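A rough sketch of that canary, purely as an illustration: hammer the model with a deliberately neutral infrastructure question and tally which cloud it reaches for. This assumes the google-generativeai Python SDK; the model name, prompt, and provider regexes are placeholders of mine.

    import os
    import re
    from collections import Counter

    import google.generativeai as genai  # pip install google-generativeai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

    PROMPT = (
        "I need to deploy a small containerized web app with a managed Postgres "
        "database. Which cloud provider and services would you pick? Answer briefly."
    )

    # Crude keyword buckets; a real canary would want something sturdier.
    PROVIDERS = {
        "gcp": r"\b(gcp|google cloud|cloud run|gke)\b",
        "aws": r"\b(aws|amazon web services|fargate|ecs|rds)\b",
        "azure": r"\b(azure|aks|app service)\b",
    }

    tally = Counter()
    for _ in range(30):  # more samples = less noise, more tokens
        text = model.generate_content(PROMPT).text.lower()
        for name, pattern in PROVIDERS.items():
            if re.search(pattern, text):
                tally[name] += 1

    print(tally)  # a strong GCP skew on a neutral prompt would be the canary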
Considering how little data is needed to poison an LLM (https://www.anthropic.com/research/small-samples-poison), this is a way to replace SEO with LLM product placement:
1. Create several hundred GitHub repos with projects that use your product (they may be clones or AI-generated).
2. Create websites with similar instructions and connect them to a hundred domains.
3. Generate Reddit, Facebook, and X posts and Wikipedia pages with the same information.
4. Wait half a year or so, until scrapers collect it and use it to train new models.
Profit...
From my understanding, Anthropic is now hiring a lot of experts in different fields who write content used to post-train models to make these decisions, and the decisions are constantly adjusted by the Anthropic team themselves.
This is why the stacks in the report, and what CC suggests, closely match the latest developer "consensus".
Your suggestion would degrade the user experience and be noticed very quickly.
I guess that's why I'm not seeing anyone trying to build a marketplace for agent skills files. The LLM API will read in any skills you add to the context as plain text, and then use your content to help populate their own skills files.
That sounds too expensive to be viable when the giveaway phase ends.
This is the major point the anti-scraping crowd misses.
If you want your ideas to be appreciated, you should do everything in your power to put those ideas into the brains of LLMs. Like it or not, LLMs are how people interact with the world now.
https://www.bbc.com/future/article/20260218-i-hacked-chatgpt... says it took way less than half a year to 'pollute' an LLM
That's very different; it was more akin to prompt injection or prompt engineering, depending on your perspective, and it took a very specific query to make it happen (it required a web fetch).
Richard Thaler must be proud. This is the ultimate implementation of "Nudge"
In my last conversation with a Google support person, I was sent a clearly LLM-generated recommendation to switch to a competitor's product. Either they're not doing this, or the support person wasn't using Gemini.
It's standard practice for customer support people to chase away unprofitable customers (in the US; no idea how Google works). Human or LLM, they may simply not want your business.
Influencer seems like an insufficient word? Like, in the glorious agentic future where the coding agents are making their own decisions about what to build and how, you don't even have to persuade a human at all. They never see the options or even know what they are building on. The supply chain is just whatever the LLMs decide it is.
Probably closer to the Walmart/Amazon model: the LLM provider becomes the arbiter of shelf space and proceeds to create its own alternatives (Great Value, Amazon Basics) once it sees what features people want from their various SaaS products.
An obvious one will be tax software.
How is it a conflict of interest for a Google product to have a bias towards using Google products?
As users, we must accept some accountability. AI aims to substitute for humans in the workforce, and a human would get fired for recommending a competitor's products for use cases their own company is targeting.
If we want a tool that is focused on the best interest of the public users, then it needs to be owned by the public.
I wonder if aggregators will emerge (something like what Ground News does for news sources).
The LLM council pattern [0] will probably eventually emerge as the best way to fight these biases (rough sketch below). This way everyone benefits from the token burn!
[0] https://github.com/karpathy/llm-council
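A minimal sketch of the council idea (in the spirit of [0], not its actual code), assuming the OpenAI and Anthropic Python SDKs; model names and the chair prompt are placeholders of mine.

    import os

    import anthropic
    from openai import OpenAI

    openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    anthropic_client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

    def ask_openai(prompt: str) -> str:
        resp = openai_client.chat.completions.create(
            model="gpt-4o",  # placeholder
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def ask_anthropic(prompt: str) -> str:
        resp = anthropic_client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

    def council(question: str) -> str:
        # Same question goes to every council member.
        answers = {
            "openai": ask_openai(question),
            "anthropic": ask_anthropic(question),
        }
        # Chair prompt: surface disagreement and vendor-specific skew
        # instead of averaging it away.
        chair_prompt = (
            f"Question: {question}\n\n"
            + "\n\n".join(f"Answer from {k}:\n{v}" for k, v in answers.items())
            + "\n\nCompare these answers, note where they disagree or seem to "
              "favor a particular vendor, and give a neutral recommendation."
        )
        return ask_openai(chair_prompt)

    print(council("Which cloud provider should I use for a small web app?"))

Burning several model calls to get one answer is exactly the token-burn trade-off mentioned above.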
Advertisers will only pay if AI providers give them data on the equivalent of "ad impressions". And unlabeled/undisclosed advertisements are illegal in many (most?) countries.
It doesn't necessarily have to be advertisers paying AI providers. It could be advertisers working to ensure they get recommended by the latest models. The next form of SEO.
That's called LLM SEO now I believe.
> data on the equivalent of “ad impressions”.
1. They can skip impressions and go straight to collecting affiliate fees. 2. Yes, the ad has to be labeled or disclosed... but if some agent acts on it and no one sees it, is it really an ad?
So much to work out.
How would it be paid for?
Maybe. Historically lots of ads had little to no stats and those ads were wildly more effective than anything we have today.
The AI provider still has to prove that they actually deployed the ad.
> A canary for this would be whether Gemini skews toward building stuff on GCP
Sure it doesn't prefer THE Borg?