Comment by jmathai
2 years ago
Are you suggesting that failure rates are lower when interacting with humans? That hasn't been my experience at all.
Maybe I've only ever seen terrible doctors, but I always cross-reference what doctors say with reputable sources like WebMD (which, I understand, likely contains errors). Sometimes I'll go straight to WebMD.
This isn't a knock on doctors - they're human and prone to error. Lawyers, engineers, product managers, and teachers are too.
Do you really think that if you ask your legal assistant to find precedents related to your current case, they'll come back with an A4 page of made-up cases that sound vaguely related and convincing but aren't real? I don't think you understand the failure case at all.
That example seems a bit hyperbolic. Do you think lawyers who leverage ChatGPT will take the made-up cases and present them to a judge without doing any additional research?
What I'm saying is that the tolerance for mistakes is strongly correlated with the value ChatGPT creates. I think both will need to improve, but there's probably more opportunity in creating higher value.
I don't have a horse in the race.
> Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research?
I generally agree with you, but it's funny that you use this as an example when it has already happened: https://arstechnica.com/tech-policy/2023/06/lawyers-have-rea...
> Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research?
I really don’t recommend using ChatGPT (even GPT-4) for legal research or analysis. It’s simply terrible at it if you’re examining anything remotely novel. I suspect there’s a valuable RAG application to be built for searching and summarizing case law, but the “reasoning” ability and stored knowledge of these models are worse than useless.
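To make that concrete, here's a minimal sketch of what I mean by grounding the model in retrieved case law. Everything here is invented for illustration (the corpus, the keyword-overlap scoring, the prompt wording); a real system would retrieve over an actual case-law database, probably with embeddings rather than keyword matching:

```python
# Minimal RAG sketch: restrict the model to citing only documents
# that the retrieval step actually returned. All names and data
# below are illustrative placeholders, not a real legal corpus.

CASE_LAW = {
    "Smith v. Jones (1998)": "Contract dispute over implied warranty terms.",
    "Doe v. Acme Corp (2005)": "Negligence claim involving product labeling.",
}

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank cases by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda name: len(terms & set(corpus[name].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Compose a prompt whose citable context is only retrieved cases."""
    names = retrieve(query, CASE_LAW)
    context = "\n".join(f"{n}: {CASE_LAW[n]}" for n in names)
    return (
        "Answer using ONLY the cases below. If none apply, say so.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

print(build_prompt("implied warranty in a consumer contract"))
```

The point being that the model never gets a chance to invent a citation: it can only reference the documents the retrieval step handed it, which is exactly the failure mode the hallucinated-cases stories are about.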
> Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research?
You don't?
https://fortune.com/2023/06/23/lawyers-fined-filing-chatgpt-...
What would be the point of a lawyer using ChatGPT if they had to root through every single reference ChatGPT relied upon? I don't have to double-check every reference from a junior attorney, because they actually know what they're doing, and when they don't, it's easy to tell and won't come with fraudulently created decisions, pleadings, etc.
> Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research?
Oh dear.