Comment by _heimdall

1 day ago

> competent professionals

That requires a lot of clarity and definition if you want to claim that LLMs aren't competent professionals. I assume we'd ultimately agree that they aren't, but I'd add that many humans paid for a task aren't competent professionals either, and, more importantly, that I can't distinguish competent professionals from the rest without myself being competent enough in the topic.

My point was that people have a long history of outsourcing work to someone else, often to someone they have never met and never will. We do it for things we have no real idea about, trusting that the person doing it must have known what they were doing. I fully expect people to end up taking the same view of LLMs.

We also have a lot of systems (references, the tort system) that just don't apply in any practical way to LLM output. I mean, I guess you could try to sue Anthropic or OpenAI if their chatbot gives you bad advice, but... good luck with that. The closest thing I can think of is benchmark performance, and I trust those numbers a lot less than I would trust a reference from a friend for, say, a plumber.

I understand that a lot of people use LLMs for things they don't understand well. I just don't think that is the best way to get productivity out of these tools right now, regardless of how accustomed people may or may not be to outsourcing things to other humans.

  • > I just don't think that is the best way to get productivity out of these tools right now.

    Well, that I completely agree with. I don't think people should outsource to an LLM without the skills to validate the output.

    At that point I don't see the value: if I have the skills and will proofread/validate the output anyway, it mostly just saves me keystrokes while risking that I miss a subtle but very important bug in the output.