
Comment by marcusb

1 day ago

We also have a lot of systems (references, the tort system) that just don't apply in any practical way to LLM output. I mean, I guess you could try to sue Anthropic or OpenAI if their chatbot gives you bad advice, but... good luck with that. The closest thing I can think of is benchmark performance, but I trust those numbers a lot less than I would trust a reference from a friend for, say, a plumber.

I understand a lot of people use LLMs for things they don't understand well. I just don't think that is the best way to get productivity out of these tools right now. That holds regardless of how accustomed people may or may not be to outsourcing things to other humans.

> I just don't think that is the best way to get productivity out of these tools right now.

Well, that I completely agree with. I don't think people should outsource to an LLM without the skills to validate the output.

At that point I don't see the value: if I have the skills and am going to proofread/validate the output anyway, it has mostly just saved me keystrokes, while risking that I miss a subtle but very important bug in the output.