Comment by morgengold

8 hours ago

My father just changed his car key battery with the help of ai and he likes that. He also consulted it about car insurance regulations and got more out of it than searching the web himself.

For most simple mainstream questions I just ask ai instead of googling shitty results.

Most of the time ai is good enough and often better than the status quo ante.

People do not care if it is a stupid token prediction machine as long as the job gets done.

But those are mostly things that were possible before basic web search became nearly unusable.

I don't disagree with you at all; I have found that I turn to LLMs to answer questions I would have just searched with Google before.

It feels like a case of companies creating a problem to sell you the solution. The problem in their eyes is that they couldn't squeeze any more money out of search. So they bring us LLMs to replace it at what is sure to be a much higher cost. But they had to torpedo search to force users to use LLMs.

Until you take a baby to the vaccine clinic, the nurse googles which vaccine to give at that age, and blindly trusts the highlighted AI snippet at the top.

Not a fictitious example.

  • I saw a doctor use AI to do some maths recently. I checked it and it was right, but trusting LLMs to do statistics is not a good idea.

  • Sure, but that's an incredible level of incompetence, one that would be exposed in other ways even without AI. The entire list of which vaccines to give to which age group could fit on a single piece of paper.

    • And much of what people use AI for now could be easily done without it. How many steps of a Claude Code /plan are just running basic ls commands, all for a few thousand tokens?

      The entire thing reeks of laziness and incompetence. It's neat and all, but it's a giant sucking maw that is threatening to gobble up what's left of anything good.