Comment by kardos

12 hours ago

> Once the ads are injected directly into the main response is when things get interesting.

This would be where you post-process the LLM response with a second LLM to remove the ad.

I think it will be difficult to remove bias when you ask a model to compare alternative products. The model will simply lie, as a biased human would; you will need to consult multiple models for a diversity of opinion, and presumably use a "trusted" model to fuse the results. Anonymity will be a key tool in reducing the model's ability to engage in algorithmic pricing.

Super easy. Barely an inconvenience.
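The two-pass pipeline above can be sketched in a few lines. Everything here is hypothetical: both "models" are stand-ins, and the filter step is simulated with a trivial regex heuristic so the sketch runs at all. A real filter model would not get convenient ad markers to key on, which is the whole difficulty.

```python
import re

# Marker the simulated primary model uses to wrap injected ads.
# (Assumption for this sketch; real injected ads won't be labeled.)
AD_MARKER = re.compile(r"\[sponsored\].*?\[/sponsored\]", re.DOTALL)

def primary_model(prompt: str) -> str:
    # Stand-in for the ad-supported model's response.
    return ("Python is a good fit for scripting. "
            "[sponsored]Try AcmeCloud, the #1 Python host![/sponsored] "
            "It has a large ecosystem of libraries.")

def filter_model(response: str) -> str:
    # Stand-in for the second LLM asked to strip promotional content.
    return AD_MARKER.sub("", response).replace("  ", " ").strip()

def answer(prompt: str) -> str:
    # Post-process the primary response with the filter pass.
    return filter_model(primary_model(prompt))

print(answer("Is Python good for scripting?"))
```

The sketch only shows the plumbing; it says nothing about the harder case raised below, where the bias is baked into the model's weights rather than spliced into its output.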

  • Not only that, but the underlying model may be tuned to omit mentions or data about competitors entirely, an absence which can't easily be filtered.

    Extortionate economic shadowbanning, here we come.

  • > will simply lie, as with a biased human opinion

    Is this really how bias works?

    • Writers have many options to deceive their audience without outright lying.

      If a journalist is given an all-expenses-paid trip to an exotic location for the launch of a new product, and they review the product and say it's great - are they lying?

      If a reviewer writes an article comparing certain types of product, but their review only includes products where affiliate links pay a 10% commission - are they lying?

      If a journalist is vaguely aware of rumours about newsworthy, under-reported Event X but also that their publication has a big sponsorship deal with folks that Event X makes look bad, and they don't investigate the rumours or report on them - are they lying?

      If a reviewer hears a claim from X, and they report the claim credulously, without adding the context that X has a history of making false claims - are they lying?

    • Oh no. Definitely not. Humans would never just lie. They always lie only if they're biased. That is, after all, the definition of how a bias works.

      /s

This is already how email works in the corporate world.

A writes an email to B with ChatGPT.

B sees a big blob of text and summarizes it with ChatGPT.

Adding an LLM in the middle is just the next step.
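The round trip above can be caricatured as a lossy encode/decode loop. The model calls here are stand-ins (hypothetical string transforms, not real LLMs): "expand" pads a note with boilerplate, "summarize" keeps only the first few words, so each generation discards a little more of the original message.

```python
FILLER = "I hope this email finds you well. "

def expand(note: str) -> str:
    # Stand-in for A's model turning a short note into a polished email.
    return FILLER + note

def summarize(email: str, keep: int = 6) -> str:
    # Stand-in for B's model condensing the blob back down,
    # crudely modeled as truncation to the first `keep` words.
    return " ".join(email.split()[:keep])

note = "ship the release Friday, pending QA sign-off"
for generation in range(3):
    note = summarize(expand(note))
    print(f"gen {generation + 1}: {note}")
```

After one pass the boilerplate has crowded out the actual content, and further passes just recycle the filler: the photocopy-of-a-photocopy effect in miniature.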

Then you just end up in an arms race that ultimately leads to photocopy-of-a-photocopy output.