Comment by jmathai
2 years ago
That example seems a bit hyperbolic. Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research?
What I'm saying is that the tolerance for mistakes is strongly correlated with the value ChatGPT creates. I think both will need to improve, but there's probably more opportunity in creating higher value.
I don't have a horse in the race.
> Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research?
I generally agree with you, but it's funny that you use this as an example when it already happened. https://arstechnica.com/tech-policy/2023/06/lawyers-have-rea...
facepalm
> Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research
I really don’t recommend using ChatGPT (even GPT-4) for legal research or analysis. It’s simply terrible at it if you’re examining anything remotely novel. I suspect there is a valuable RAG application to be built for searching and summarizing case law, but the “reasoning” ability and stored knowledge of these models are worse than useless.
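A minimal sketch of what the retrieval step of such a RAG application might look like: retrieve the most relevant case snippets for a query, then hand only those to the model for summarization. A toy bag-of-words cosine similarity stands in for a real embedding model and vector store, and the case snippets are invented for illustration:

```python
import math
from collections import Counter

def vectorize(text):
    # Lowercased bag-of-words term frequencies (stand-in for real embeddings).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query; return the top k.
    qv = vectorize(query)
    ranked = sorted(corpus, key=lambda doc: cosine(qv, vectorize(doc)),
                    reverse=True)
    return ranked[:k]

# Hypothetical snippets; a real system would index actual case law.
cases = [
    "Smith v. Jones 2001: contract formation requires offer and acceptance",
    "Doe v. Roe 1998: negligence claims require duty, breach, causation, damages",
    "In re Acme 2010: bankruptcy discharge of unsecured consumer debt",
]
top = retrieve("negligence claim elements", cases, k=1)
# top[0] is the negligence case; in a full pipeline it would be passed to the
# LLM as context, so every citation in the answer traces back to a real source.
```

The point of the retrieval step is exactly the grounding problem discussed above: the model only summarizes documents that actually exist, rather than generating citations from its stored (and unreliable) knowledge.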
> Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research?
You don't?
https://fortune.com/2023/06/23/lawyers-fined-filing-chatgpt-...
What would be the point of a lawyer using ChatGPT if they had to root through every single reference ChatGPT relied upon? I don't have to double-check every reference from a junior attorney, because junior attorneys actually know what they are doing, and when they don't, it's easy to tell and won't come with fraudulently created decisions, pleadings, etc.
> Do you think lawyers who leverage ChatGPT will take the made up cases and present them to a judge without doing some additional research?
Oh dear.