
Comment by Hobadee

13 hours ago

The AI note taker we use at work records the meeting as well, and each note it takes has a timestamp link that jumps you directly to that point in the recording so you can check it yourself. While I'm sure a solution like this is more complicated in a HIPAA environment, something like this is critical for things as important as healthcare.

When designing AI-based user experiences I refer to this as provenance. It's a vital aspect of trust, reliability, compliance, and more. If a software system includes LLM output like this but doesn't surface the provenance of that output for human evaluation and verification, then it's at best a poor user experience, and at worst a dangerous one.
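The core of the idea is small: every generated note carries a pointer back to the span of the recording that supports it, so a human can verify the claim at the source. A minimal sketch in Python (all names and the example URL are illustrative, not any vendor's actual API; the `#t=start,end` suffix follows the W3C Media Fragments URI convention):

```python
from dataclasses import dataclass


@dataclass
class Note:
    """One AI-generated note plus the recording span that backs it."""
    text: str
    start_s: float  # offset into the recording where the claim begins
    end_s: float    # offset where it ends

    def link(self, recording_url: str) -> str:
        # Deep-link straight to the supporting audio span
        # (Media Fragments URI temporal dimension: #t=start,end).
        return f"{recording_url}#t={self.start_s:.0f},{self.end_s:.0f}"


note = Note("Patient reports dizziness since March.", 312.0, 328.0)
print(note.link("https://example.com/rec/123"))
```

The point of the design is that the provenance travels with the note itself, so any downstream UI that renders the note can also render the "check it yourself" link without extra lookups.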

  • At the same time, do you really want every conversation you have with your doctor recorded, handed over to third-party companies, and stored forever with your medical file? Plus what doctor has time to sit down and re-listen to your visit to make sure the AI didn't screw up at some point in the future anyway? If your doctor isn't going to be verifying the accuracy from those recordings, who would? Overseas contractors? At what point does it become a larger waste of time and money to babysit an incompetent AI than just not using one in the first place?

    There are some good uses for AI, but I'm not convinced that this (or many other cases where accuracy matters) is one of them.

    • >At the same time, do you really want every conversation you have with your doctor recorded

      Yes. This is what medical records are. They've been kept by doctors for a reason.

      It's not like the doctor is talking to you about which anime series are the best. You're talking about your health, your body, your disease, your treatment.

      It's important to keep track of that.

      >Plus what doctor has time to sit down and re-listen to your visit to check to make sure the AI didn't screw up at some point in the future anyway

      No doctor.

      Which is why it really should be their (or their assistant's) job to record the relevant parts of the conversations.

      >At what point does it become a larger waste of time and money to babysit an incompetent AI than just not using one in the first place?

      At this point, as the audit shows.

      Except the industry (both the AI vendors and healthcare) are going YOLO¹ and relying on AI anyway.

      >There are some good uses for AI, but I'm not convinced that this (or many other cases where accuracy matters) is one of them.

      This has always been the case, but the marketing has now reached the point of gaslighting, trying to make people collectively forget that or pretend that it's not the case.

      Once hard evidence is presented (like in this case), the defense is invariably that it's a temporary quality issue that will be resolved as the AI improves Any Day Now™, and that it's wise to live as if that were the case already² (and everyone who disagrees is a fool who Will Be Left Behind™).

      The level of fervor in this rhetoric gives me the impression that the flaw is so fundamental that it won't be fixed in any form of AI based on today's technologies, that the AI vendor leadership knows this, and that the entire industry is, at this point, a grand pump-and-dump scheme.

      I hope I'm wrong.

      ____

      ¹ See, you only live once. But there are millions of you. So, like, whatever if you don't. Something something economies of scale to them.

      ² This is called a phantasm.


That doesn't sound like a "note taker," that sounds like an audio sample search engine. You still need to listen to everything if you want accuracy.

Yeah, what you're saying requires either:

- some human checking all the notes by listening to the entire meeting recording (takes a lot of time and man-hours)

- attendees checking notes from memory (prone to error unless they take notes)

- attendees cross-checking with their own notes (defeats the point of having the AI note taker)

The reality is that AI usage is not acceptable in any form in any context where accuracy is critical, but good luck getting anyone to acknowledge that.