
Comment by stefan_

2 years ago

Let's see, so we exclude law, we exclude medical... it's certainly not a "vast minority," and the failure cases are nothing at all like search or human experts.

Are you suggesting that failure rates are lower when interacting with humans? That's not my experience at all.

Maybe I've only ever seen terrible doctors, but I always cross-reference what doctors say with reputable sources like WebMD (which I understand likely contains errors of its own). Sometimes I'll go straight to WebMD.

This isn't a knock on doctors - they're human and prone to error. Lawyers, engineers, product managers, and teachers too.

  • You think if you ask your legal assistant to find some precedents related to your current case, they'll come back with an A4 page full of made-up cases that sound vaguely related and convincing but aren't real? I don't think you understand the failure case at all.

    • That example seems a bit hyperbolic. Do you think lawyers who leverage ChatGPT will take the made-up cases and present them to a judge without doing any additional research?

      What I'm saying is that the tolerance for mistakes is strongly correlated with the value ChatGPT creates. I think both will need to improve, but there's probably more opportunity in creating higher value.

      I don't have a horse in the race.
