Comment by ehnto

3 months ago

I think search engines should be worried, because people will silently lose faith in their results and start using AI chat instead.

If search engines fail to find genuine, authentic content for me and just pipe me to LLM articles, I may as well go straight to the LLM.

If AI chat really is adopted that widely, it will result in a freeze on available information.

  • Thing is, an LLM can be trained on things not available on the public internet, unlike a search engine, which has to return public URLs.

    I’m sure all these agentic AIs are slurping up proprietary codebases and their documentation and training on them. It’s the only way to one-up the competition.

  • Why would anybody even bother publishing or adding new content if the only things that ever read or interact with it are bots?

    I use the shit out of LLMs, but you know what they can’t do? Create brand-new ideas. They can refine yours, sure. They can take existing knowledge and map it onto whatever you’re cooking. But on their own, nope. They just repeat what is in their training data and context window.

    If all “new” content comes from LLMs drawing from a huge pool of other LLM content… it’s just one giant echo chamber with nothing new being added. A planet-wide circle jerk of LLMs complimenting each other on what excellent ideas they all have and how they are really cutting to the heart of the issue. “Now I see the issue,” they all say, based on the slop context ingested from some other LLM that “saw the issue” via a third LLM. It’s LLMs all the way down.

    • Yes, it is very much a one-way street, and the value of original creation is reduced to nil because they just steal it.