
Comment by alganet

2 months ago

LLMs that can replace search (their primary and self-declared goal) cannot survive without repeated training. As I said, this is one of their main purposes.

The other main purpose (military applications, surveillance, autonomous psyops) is also highly dependent on continuous training. Without it, properly educated, healthy humans can overcome their reasoning power very quickly.

All other user profiles are just cannon fodder. Companies don't give a fuck about people running older models. They'll do whatever they can to make you use a more recent one.

That's why I'm being provocative with the "let's stop training new shit" argument. I'm aiming for the Achilles' heel.

If somebody told you that LLMs are a good replacement for search, that person was misleading you.

  • LLM companies are doing that themselves with their actions. There is a clear connection between search and assistants, historically, going all the way back to "Ask Jeeves".

    People literate in IT know the implementation difference. For those who aren't, the difference is far less prominent. To most people, it's the same thing, and companies know it and abuse it.

    They are obviously competing for the same market. That market being "the stuff you go to when you need knowledge".

    Anyway, you are deviating from the point. Even if that's not the case, my argument still holds: the article is full of shit regarding the environmental sustainability of the model-training lifecycle.