Comment by atleastoptimal

4 hours ago

It costs money to run AI models. The company serving you tokens has to make it up somehow.

This demo, however, undersells the tactically insidious way ads could be run in an AI chat. All it would need to do is recommend a product at a slightly higher rate. In fact, the chat could be biased in imperceptible ways that steer the user's thinking, aims, and behavior toward an outcome that leads them to seek out a specific brand, website, or app. In aggregate, the ads are served, without it ever being obvious.
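To make the "slightly higher percentage" point concrete, here is a toy simulation (all numbers hypothetical): a small, imperceptible nudge of extra probability mass toward one sponsored brand, aggregated over a million queries, yields tens of thousands of additional recommendations for that brand.

```python
import random

random.seed(0)

BRANDS = ["A", "B", "C", "D", "E"]
SPONSORED = "A"
BIAS = 0.05          # hypothetical: tiny extra probability mass for the sponsor
QUERIES = 1_000_000  # hypothetical aggregate traffic

def recommend(bias: float) -> str:
    # Baseline: uniform choice among five brands (p = 0.20 each).
    # With a bias, the sponsored brand absorbs a little extra mass,
    # so P(sponsor) = bias + (1 - bias) * 0.20.
    if random.random() < bias:
        return SPONSORED
    return random.choice(BRANDS)

baseline = sum(recommend(0.0) == SPONSORED for _ in range(QUERIES))
nudged = sum(recommend(BIAS) == SPONSORED for _ in range(QUERIES))
print(baseline, nudged, nudged - baseline)
```

Per individual query the 20% vs. 24% difference is essentially invisible to the user, which is exactly the point: the ad spend only shows up in aggregate statistics.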

Even if there is "auditing" of model behavior, it is possible to train preferences into models without any of those preferences being explicitly stated in the training data:

https://alignment.anthropic.com/2025/subliminal-learning/

And it seems that, in very subtle ways, this holds true for humans too.

https://pmc.ncbi.nlm.nih.gov/articles/PMC6430776/

> In 8 experiments on 5 prominent and diverse adversarial imagesets, human subjects correctly anticipated the machine’s preferred label over relevant foils—even for images described as “totally unrecognizable to human eyes”.