
Comment by Arodex

13 hours ago

But who is responsible is different.

(And if you already see 60% error rates in standard, pre-AI note taking, how does that not translate into many deaths and injuries? At least one country's health system in the world should have caught that)

> And if you already see 60% error rates in standard, pre-AI note taking, how does that not translate into many deaths and injuries?

Presumably most doctor visits are a one-problem-one-solution-one-doctor type of thing. Done deal, notes are never read again. That alone would explain why high error rates don't result in injury or death very often.

Any injury or death caused by poor notes would most likely occur when mistakes are made while you're being followed for a serious chronic condition, or when you're handled by a team where effective communication is required.

> how does that not translate into many deaths and injuries?

Because most of it is just written down and never looked at again until there’s a lawsuit or something.

The human who hits Submit or Approve is responsible.

The management human who offered the bad tool to the other human is responsible.

The robot cannot be responsible in place of us.

Yeah, the problem is the health system has no scapegoat if the AI note taker records the wrong detail. The last thing we want is the CTO being responsible!

  • I'm not convinced the CTO would be held accountable either.

    I do wonder if people would be pushing AI so hard if their organizations were planning to hold them accountable for the mistakes the AI made.

    I bet if that were the case, we'd see a much slower rollout of AI systems.