
Comment by sky2224

6 days ago

It's getting downvoted because it's the equivalent of saying "google it".

And because LLMs will "explain" things that contain outright hallucinations - a beginner won't know which parts are real and which parts are suspect.

  • Exactly this. What irritates and worries me is that a lot of junior folks try to apply these machines to open-ended problems the machines don't have the context for. The lawsuits with fabricated case citations are just the beginning, I'm afraid; we're in for a lot more slop endangering our services and tools.

Exactly. Nothing wrong with LLMs, but we’re trying to have a human conversation here, which would be impossible if everyone had their conversations with LLMs instead.