
Comment by bbor

2 years ago

You’re missing an important point - it’s not /trained/ on live internet content, it /reads/ that content at runtime. Yes, it was trained on the internet, but please separate the two concerns. Remember that the goal of this model is language, not learning facts about the world - they could’ve trained it entirely on fictional novels if there were a big enough corpus.

The only way LLM-enhanced search returns misinformation is if the internet is full of misinformation. So yeah, we’re still in trouble, but the inclusion of the LLM isn’t going to affect that factor either way, IMO.
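To make the train-time vs. run-time distinction concrete, here’s a minimal toy sketch of the retrieval-augmented pattern search engines use: documents are fetched at query time and the model is asked to answer /from those documents/. All names (`CORPUS`, `retrieve`, `build_prompt`) are illustrative stand-ins, not any real system’s API - a real deployment uses a search index and an LLM call.

```python
# Toy retrieval-augmented search: the answer is grounded in documents
# fetched at query time, not in anything memorized during training.

CORPUS = {
    "doc1": "The Eiffel Tower is 330 metres tall.",
    "doc2": "Python was created by Guido van Rossum.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword overlap, standing in for a real search index."""
    words = set(query.lower().split())
    return [text for text in CORPUS.values()
            if words & set(text.lower().split())]

def build_prompt(query: str) -> str:
    """The model only sees what retrieval returns -- so if the
    retrieved pages contain misinformation, the grounded answer
    will repeat it, regardless of the training corpus."""
    context = "\n".join(retrieve(query))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context.")

prompt = build_prompt("How tall is the Eiffel Tower?")
```

The point of the sketch: swap the contents of `CORPUS` for a web full of bad pages and the output degrades with it - the LLM is downstream of the retrieval, not the source of the facts.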

EDIT: This is completely separate from using LLMs to, say, write political statements for Facebook bots and drown out all human conversation. That’s obviously terrifying, but it’s not related to their use in search engines IMO.