Ontario auditors find doctors' AI note takers routinely blow basic facts

13 hours ago (theregister.com)

I have generally moved from bearish to bullish on the future of current AI technology, but the persistent inaccuracy with basic facts, even as the models significantly improve, continues to give me pause.

As an example, creating recipes with Claude Opus based on flavor profiles and preferences feels magical, right up until the point at which it can't accurately convert between tablespoons and teaspoons. It's like the point in the movie where a character is acting nearly right, but something is a bit off, and then it turns out they're a zombie who's going to try to eat your brain. This note-taking example feels similar. It nearly works in some pretty impressive ways, then fails at the important details in a way that something able to do the things AI can allegedly do really shouldn't.

It's these failures that make me more and more convinced that while current generation AI can do some pretty cool things if you manage it right, we're not actually on the right track to achieve real intelligence. The persistence of these incredibly basic failure modes even as models advance makes it fairly obvious that continued advancement isn't going to actually address those problems.

  • Yup, spot on. There's a capability-reliability gap that the industry does not like to talk about too much.

    It often feels like the AI industry is continually glossing over the fact that capability and reliability are fundamentally different qualities. We tend to use "accurate" and "reliable" interchangeably, but they describe different things. A model can ace a benchmark (capability/accuracy) and still be a liability in production (reliability).

    Just look at the recent reactions to yet another release from METR showing improved capabilities. The less talked about part is that their measure is for a 50% success rate (and their even less talked about secondary measure, at an 80% success rate, has a drastically shorter time horizon for tasks). https://metr.org/

    I implement AI systems for enterprises and I don't know any that would ever be okay with 80% reliability (let alone 50%).

    • This capability-reliability gap (excellent term btw, more people need to think in these terms or we'll be in real trouble) is also infecting LLM-assisted outputs. I just tried VSCode again tonight after a ~3yr hiatus and goddamn has it deteriorated. Lots of new features, lots of interesting-looking plugins, but 3 out of the 5 plugins I tried for code CAD (the reason I downloaded VSCode again at all) were completely unusable--like couldn't even be made to work at all--and the other two didn't do anything like what they claimed. Also, VSCode itself got into some kind of spastic loop trying to log me into GitHub, and seemed incapable of recognizing the virtual environment in a Python project's workspace... It also feels like the UI got even slower. This situation is bad.

  • Your analogy reminds me of the messed-up fingers and hands in image generation models just a year ago. Now that is pretty much solved. These days they are generating videos you can't tell apart from reality. This makes me believe these nuances will keep shrinking and eventually become very hard to notice and find in maybe every task.

  • Yesterday I was using Opus 4.6 through Copilot (don't ask...) to rubber-duck-brainstorm a big feature that needs a lot of care.

    I got some inspiration from it, but it misinterpreted very basic stuff. Might be a skill issue on my side, I do not know.

  • I hate to help provide possible solutions to an entire process I don't approve of, but maybe the fuzzy tools need old-style deterministic tools the same way, and for the same reasons, we do.

    So instead of an LLM trying to answer a math or reasoning question by finding a statistical match with other similar groups of words it found on 4chan, the All-In podcast, and a terrible recipe for soup written by a terrible cook, it can use a calculator when it needs a calculator answer.

    • They absolutely need deterministic tools. What you just described is exactly how the current popular AI agents work. They use "harnesses", which to me is just a rebranding of what we have known all along about building useful and reliable software: composable, orchestrated systems with a variety of different pieces, selected based on their capabilities and constraints, glued together for specific outcomes.

      It just feels like for some reason this is all being relearned with LLMs. I guess shortcuts have always been tempting. And the idea of a "digital panacea" is too hard to resist.
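
      As a rough sketch of the pattern (all names here are hypothetical, not any particular framework's API), the key property is that the harness, not the model, does the arithmetic:

        import ast, operator

        OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

        def calculate(expr: str) -> float:
            # Deterministic arithmetic: parse and evaluate, no statistics involved.
            def ev(node):
                if isinstance(node, ast.Constant):
                    return node.value
                if isinstance(node, ast.BinOp):
                    return OPS[type(node.op)](ev(node.left), ev(node.right))
                raise ValueError("unsupported expression")
            return ev(ast.parse(expr, mode="eval").body)

        def handle_turn(model_output: dict) -> str:
            # The model only proposes; the harness routes anything numeric to
            # the calculator and splices the exact result back into the reply.
            if model_output.get("tool") == "calculator":
                return str(calculate(model_output["arguments"]))
            return model_output["text"]

      The glue is boring on purpose: the LLM picks which tool to call, and everything that has to be exact happens outside the model.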

    • Doesn't agentic AI do this? I've got AI running in VS Code. If I ask it for something, it can fill a code cell with a little bit of Python, and then run it with my approval. It's using the Python interpreter on my computer as a calculator.
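
      For the tablespoon example upthread, the cell it fills can be as trivial as this (illustrative, not any tool's verbatim output):

        US_TSP_PER_TBSP = 3  # US customary units: 1 tablespoon = 3 teaspoons

        def tbsp_to_tsp(tbsp: float) -> float:
            return tbsp * US_TSP_PER_TBSP

        print(tbsp_to_tsp(1.5))  # 4.5, exact every time, no sampling involved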

    • I think that is how the smarter agents do things? Just like Claude/ChatGPT sometimes does a web search they can do other tool calls instead of just making a statistical guess. Of course it doesn’t always make the bright choice between those options though…

      3 replies →

  • > we're not actually on the right track to achieve real intelligence.

    Real intelligence means you have to say "I don't know" when you don't know, or ask for help, or even just refuse to help, with the subtext being that you don't want to appear stupid.

    The models could ostensibly do this when they have low confidence in their own results, but they don't. What I don't know is whether that's because it would be very computationally difficult or because it would harm the reputation of the companies charging a good sum to use them.
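
    A minimal sketch of what "abstain on low confidence" could look like, with get_token_logprobs() as a purely hypothetical helper (no real vendor API implied):

      ABSTAIN_THRESHOLD = -1.5  # mean log-probability; tuning this is the hard part

      def answer_or_abstain(prompt: str) -> str:
          # Hypothetical helper returning (text, list of per-token logprobs).
          text, logprobs = get_token_logprobs(prompt)
          mean_lp = sum(logprobs) / len(logprobs)
          if mean_lp < ABSTAIN_THRESHOLD:
              return "I don't know."
          return text

    The catch, as far as I understand it, is that token-level confidence tracks fluency more than factuality, so a naive version of this would refuse the wrong things.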

    • > Real intelligence means you have to say "I don't know" when you don't know

      I have met many supposedly intelligent, certainly high status, humans who don't appear to be able to do that either.

      I have more confidence we can train AIs to do it, honestly.

    • That's just not how they work, really. They don't know what they don't know and their process requires an output.

      I think they're getting better at it, but it's likely just the number of parameters getting bigger and bigger in the SOTA models more than anything.

      21 replies →

    • My theory is because the people building the models and in charge of directing where they go love the sycophantic yes-man behavior the models display

      They don't like hearing "I don't know"

Anecdotally, we use an LLM note-taker at work for meetings. I had to intervene recently because our CIO was VERY angry at our vendor for something they promised to do and never did. He wasn't at the meeting where the "promise" was made. I was. They never promised anything, and the discussion was significantly more nuanced than what the LLM wrote in the detailed summary.

In other cases, I have seen it miss the mark when the discussion is not very linear. For example, if I am going back and forth with the SOC team about their response to a recent alert/incident. It'll get the gist of it right, but if you're relying on it for accuracy, holy hell does it miss the mark.

I can see the LLM take great notes for that initial nurse visit when you're at the hospital: summarize your main issue, weight, height, recent changes, etc. I would not trust it when it comes to a detailed and technical back-and-forth with the doctor. I would think for compliance reasons hospitals would not want to alter the records and only go by transcripts, but what do I know...

  • I recently left my mom a voicemail saying happy Mother’s Day with normal human boilerplate of sorry I missed you, feel free to give me a call back tonight or we can talk tomorrow, either is fine by me whatever works best for you, hope we can talk soon, love you, bye.

    She called me back later that night and we chatted for a bit, and then she paused and sort of uncertainly was like “So… was there something you were needing to tell me?” And I was completely baffled and was like “Uhhhh I don’t think so…?”

    She then explained the notification she got about my call: apparently the LLM summary of my voicemail converted a message consisting of 75% well-meaning but insignificant interpersonal human filler (like most voicemails) into this stilted, overly formal, business-y speak with a somewhat ominous tone. It assigned way too much significance to each of the individual statements in the message about wanting to talk (to say happy Mother’s Day), inquiring about her availability ASAP (to say happy Mother’s Day), etc. Plus it grossly exaggerated the information density of the call, making it sound like I left this rambling, detailed message about needing to tell her something that was left completely vague, but possibly important and also time critical.

    Added up, it made her a little worried when she read it, and it made me a bit pissed that this was the end result of my wishing her well. Because apparently everything needs a half-baked LLM summary crammed into it now.

  • > I would think for compliance reasons hospitals would not want to alter the records and only go by transcripts, but what do I know...

    I'm puzzled by this as well. Why not just generate a transcript and be done with it? If it's a particularly long transcript that's being referenced repeatedly for whatever reason, let the humans manually mark it up with a side-by-side summary when and where they feel the need. At least in my experience, these sorts of interactions usually don't have a lot of extraneous data that can be casually filtered out to begin with. The details tend to matter quite a lot!

    • I mean, the reasons are the same ones AI is being pushed everywhere.

      The businesses offering these services want to say "we are using AI" to their stakeholders, and the government committees who approve this shit don't have the skills or knowledge to evaluate its effectiveness. On top of that, they likely don't even use the tools they have approved for use.

  • Every doctor's visit I've had, I have been able to make corrections to the record afterward, because there have been meaningful mistakes almost half the time.

    ALWAYS check your summaries immediately, and contact your doctor ASAP. They can generally fix it themselves, and it's best done when everyone still has some memory of the event.

  • Transcription works pretty well in my experience, and the transcripts should be treated as the ground truth in such cases.

Yep. It happened to me just recently.

Diagnosed with Runner's Knee.

AI summary said I was diagnosed with osteoporosis, and had hip pain and walking difficulty, though literally none of that was ever said or implied.

CHECK YOUR TRANSCRIPTS. Always, but especially with LLM transcribers, which fairly frequently include common symptoms that don't exist, or claim a diagnosis that is common and fits a few details but not others. Get them fixed; they can very strongly affect your care and costs later if they're wrong.

Anecdotally, I'd say that outside of a couple very simple and very common things, about 50% of the "AI" summaries I've had have been wrong somewhere. Usually claiming I have symptoms that don't exist, occasionally much more serious and major fabrications like this time.

LLMs are NOT normal speech to text software, and they shouldn't be treated like one. They'll often insert entire sentences that never occurred. In some contexts that might be fine, but definitely not in medical records.

  • I've actually seen this lead to serious issues when a Zoom LLM summary attributed statements to someone who didn't say them.

    Someone else who couldn't attend the meeting later read that summary, and it created a major argument because the topic had been a sore subject for this person due to an ongoing debate at the company. Everyone who attended the meeting confirmed it was an error, but the coincidental timing made it hard for him to accept, because the LLM's summary presented things in a way that validated this person's concerns, which some folks on that meeting had previously minimized.

    The drama got heated to the point where management produced a policy about not trusting generative output without independent verification. It seems at least a lesson was learned.

Ooof. As a Canadian, I'm excited for AI opening up time for doctors (and hopefully lightening the load on the healthcare system), but this is scary. We're not there yet. Perhaps AI training for doctors is in the future? They already have online doctor visits on a healthcare-owned iPad in some condo complexes. It cuts through the red tape of having to schedule an appointment with your GP. So, I think we're headed in the right direction with this innovation, but of course, it will take time. I feel like AI got launched too early sometimes.

  • My sense is that we’re misapplying the technology by throwing it at, say, transcription and expecting a perfect output, instead of using LLMs’ strengths to improve inputs to the benefit of all parties.

    Freeing up doctor time, for example: lots of patient visits are messy, the patient is scattered and has multiple issues, and the doctor has tight timelines and regulatory challenges to convey to the patient, all of which impacts their care… this is architected for everyone to lose, IMO, even with a perfect transcript. And LLMs can’t be perfect; they auto-complete.

    I picture patients interacting with an intake AI who can listen to hours of demented rambling, or a patient mid anxiety attack, and provide a caregiver-certified summary of needs, with relevant screening information laid out for doctor confirmation. At that point, helpful information about drug access or insurance policies can be presented, for doctor confirmation, to a patient who can clarify and refine their understanding of the system without time pressures.

    Elevating the quality of dialogue means the doctor is more focused on the patient, and the patient’s dialogue needs don’t overwhelm treatment. A lot of medicine is filling out forms and checklists; I think auto-complete could create efficiencies in how we fulfill that.

The AI note taker we use at work records the meeting as well, and each note it takes about the meeting has a timestamp link that takes you directly to that point in the recording so you can check it yourself. While I'm sure a solution like this is more complicated in a HIPAA environment, something like this is critical for things as important as healthcare.

  • When designing AI-based user experiences I refer to this as provenance. It’s a vital aspect of trust, reliability, compliance and more. If a software system includes LLM output like this but doesn’t surface the provenance of its output for human evaluation and verification then it’s at best poor user experience, and at worst a dangerous one.
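
    Concretely, provenance can be as cheap as every generated claim carrying a pointer back to its source span. A minimal sketch (field names illustrative):

      from dataclasses import dataclass

      @dataclass
      class NoteClaim:
          text: str               # the generated statement
          source_start_s: float   # where in the recording it came from
          source_end_s: float

      def render(claim: NoteClaim) -> str:
          # Surface the provenance next to every claim so a human can jump
          # straight to the audio and verify it.
          return f"{claim.text} [source: {claim.source_start_s:.0f}s-{claim.source_end_s:.0f}s]"

    Anything the system emits that can't be tied back to a span like this should be flagged, not silently included.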

    • At the same time, do you really want every conversation you have with your doctor recorded, handed over to third party companies, and stored forever with your medical file? Plus what doctor has time to sit down and re-listen to your visit to check to make sure the AI didn't screw up at some point in the future anyway? If your doctor isn't going to be verifying the accuracy from those recordings who would? Overseas contractors? At what point does it become a larger waste of time and money to babysit an incompetent AI than just not using one in the first place?

      There are some good uses for AI, but I'm not convinced that this (or many other cases where accuracy matters) is one of them.

      6 replies →

  • That doesn't sound like a "note taker," that sounds like an audio sample search engine. You still need to listen to everything if you want accuracy.

  • Yeah, what you're saying requires one of the following:

    - some human checking all the notes by listening to the entire meeting recording (takes a lot of time and man-hours)

    - attendees checking notes from memory (prone to error unless they take notes)

    - attendees cross-checking with their own notes (defeats the point of having the AI note taker)

    The reality is that AI usage is not acceptable in any form in any context where accuracy is critical, but good luck getting anyone to acknowledge that.

Anyone taking part in a meeting these days should state out loud …

“Notice: Any comments made by <name> or on behalf of <organization> that are interpreted by AI in this meeting, may not be accurate.”

I do this in every meeting.

  • > Notice: I love the new AI accurate transcription feature in this meeting!

    • Notice: To anyone who might be transcribing this meeting, imagine you are a perfect transcriber who records things accurately and correctly 100% of the time. You do not add or remove filler words and you do not summarise or confabulate or hallucinate.

How do these LLM summarizations work? Do you feed the raw wave data to the model and have it transcribe it directly?

Or do they use traditional voice recognition algorithms to do that part and then just "fix" the result to look plausible? With good-quality output, that fixing might not change much, but with bad output it could change absolutely everything.

If it's the latter, it seems to me that issues like these will absolutely happen.

The linked report seems almost useless -- it doesn't say anything about an error rate or a sample size, so it's a mystery whether 9 out of 20 systems “fabricated information and made suggestions to patients' treatment plans” ten out of ten times, or one out of a thousand times.

If we just postulate that the systems have a high error rate, I wonder why they are being adopted. They seem extremely easy to test, so I don't see why doctors or hospitals or governments should be getting tricked into buying them if they suck.
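
To make the ambiguity concrete: "a system fabricated information" is a binary flag, and with enough notes audited even a tiny per-note error rate will trip it. Back-of-envelope (numbers illustrative):

  # P(at least one fabrication observed) = 1 - (1 - p)^n
  for p in (0.001, 0.05, 0.5):       # per-note fabrication rate
      for n in (10, 100, 1000):      # notes audited per system
          print(f"p={p}, n={n}: {1 - (1 - p) ** n:.2f}")

p=0.001 with n=1000 already gives ~0.63, so "9 out of 20 systems fabricated" is compatible with wildly different underlying error rates.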

  • >If we just postulate that the systems have a high error rate, I wonder why they are being adopted.

    From the article: "While 30 percent of a platform’s evaluation score depended solely on whether they had a domestic presence in Ontario, the accuracy of medical notes contributed only 4 percent to the total score."

    Accuracy wasn't really part of the scoring; Ontario doesn't care about it.
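
    You can see how lopsided that is with just the two weights the article gives (everything else lumped into "other", which is my assumption, not the article's):

      W_DOMESTIC, W_ACCURACY, W_OTHER = 0.30, 0.04, 0.66  # only the first two are from the article

      def score(domestic, accuracy, other):
          return W_DOMESTIC * domestic + W_ACCURACY * accuracy + W_OTHER * other

      # A local vendor with useless notes vs. a foreign vendor with
      # perfect notes, all else equal:
      print(score(domestic=1.0, accuracy=0.0, other=0.7))  # 0.762
      print(score(domestic=0.0, accuracy=1.0, other=0.7))  # 0.502

    Under a rubric shaped like that, accuracy can't move the needle.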

> They specifically address the AI Scribe program, the Ontario Ministry of Health initiated for physicians, nurse practitioners, and other healthcare professionals across the broader health sector.

makes me wonder what quality of software the ministry would push (the qualifications are probably mostly things like SOC compliance).

This is apparently the list of approved vendors:

https://www.supplyontario.ca/vor/software/tender-20123-artif...

> 60% of evaluated AI Scribe systems mixed up prescribed drugs in patient notes, auditors say

Not mentioned, as far as I can see: the comparative human mistake rate.

Having seen a lot of medical records, 60% sounds about normal lol.

  • Even if you had the same 60% error rate with humans, the types of errors would be vastly different. Humans might make typos, or forget to include something, or even occasionally misremember some minor detail, but that's very different from the BS an AI just hallucinates out of nowhere. AI makes the kinds of mistakes no human ever would, which means they can either be glaringly wrong and easy to catch, or be something no human would even think to question or look out for, because it makes no sense why an AI would randomly (and confidently) say something so wrong.

    • Also, a machine needs to be better than a human to be accepted. I value humans intrinsically. I do not do the same for machines, I only care about the results they produce. If you give me a machine and a human that are both equally unreliable, I'll pick the human because he is a living creature worthy of my respect.

  • 60% is insanely high and absolutely not in line with human mistake rates. What charts are you reading?

    • This just says 60% of systems, not the error frequency for those systems. They were evaluating 20 systems, so 12 systems made mistakes in prescriptions, but there isn't information about how common those mistakes were, and it's hard to judge relative to a human baseline.

  • Outlandish claim; you'd better show some evidence. I've reviewed several medical charts too, and the error rate is much lower than that - typically everything is dictated and transcribed, which are fairly mature and accurate technologies.

    • I was curious, so I looked it up. Human medication administration error rates are about 20%, but only about 8% excluding timing errors.

      > Medication errors were common (nearly 1 of every 5 doses in the typical hospital and skilled nursing facility). The percentage of errors rated potentially harmful was 7%, or more than 40 per day in a typical 300-patient facility. The problem of defective medication administration systems, although varied, is widespread.

      https://jamanetwork.com/journals/jamainternalmedicine/fullar...

      > In all, 91 unique studies were included. The median error rate (interquartile range) was 19.6% (8.6-28.3%) of total opportunities for error including wrong-time errors and 8.0% (5.1-10.9%) without timing errors, when each dose could be considered only correct or incorrect

      https://pubmed.ncbi.nlm.nih.gov/23386063/
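
      For what it's worth, the two quoted figures hang together: assuming roughly ten scheduled doses per patient per day (my assumption, not the paper's), the arithmetic lands on their "more than 40 per day" number:

        patients = 300          # "typical 300-patient facility"
        doses_per_day = 10      # assumed for this back-of-envelope
        error_rate = 0.196      # median error rate incl. wrong-time errors
        harmful_frac = 0.07     # share of errors rated potentially harmful

        harmful = patients * doses_per_day * error_rate * harmful_frac
        print(round(harmful))   # ~41 potentially harmful errors per day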

  • But who is responsible is different.

    (And if you already see 60% error rates in standard, pre-AI note taking, how does that not translate into many deaths and injury? At least one country's health system in the world should have caught that)

    • > And if you already see 60% error rates in standard, pre-AI note taking, how does that not translate into many deaths and injury?

      Presumably most doctors' visits are a one-problem-one-solution-one-doctor type of thing. Done deal, notes are never read again. That alone would explain why high rates of errors don't result in injuries or death very often.

      Any injury or death caused by poor notes would have to occur when mistakes are made while you're being followed for a serious chronic condition, or when you're handled by a team where effective communication is required.

    • > how does that not translate into many deaths and injury?

      Because most of it is just written down and never looked at again until there’s a lawsuit or something.

    • The human who hits Submit or Approve is responsible.

      The management human who offered the bad tool to the other human is responsible.

      The robot cannot be responsible in place of us.

    • Yeah, the problem is the health system has no scapegoat if the AI note taker provides the wrong detail. The last thing we want is the CTO being responsible!

      1 reply →

  • This is not a popular view ('AI sucks at X but so do humans'), but I think it is valid, and we should take wins where we can, especially in healthcare. It is pretty clear that initial accuracy issues will become less and less of a problem as these technologies mature.

    This focus on accuracy now as a 'see, it's bad' talking point misses the real danger, though. Medical note takers have an exceptionally high chance of being hijacked for money, and that is an issue we need to bring attention to now. They provide a real-time feed into a trillion-dollar industry. Just roll that around in your head for a second. Insurance companies are going to want to tap that feed in real time so they can squeeze out more money. Drug makers are going to want to tap into that feed so they can abuse the data. Hospitals will want to tap into that feed to wring more out of doctors and boost the number of billable codes for each encounter. Very few entities are looking to tap into that feed to, you guessed it, help the patient.

    I am for these systems (and I have been involved in building them in the past), but the feeding frenzy of business interests that will obviously get involved with them is the thing we should be yelling and screaming about, not short-term accuracy issues.

    • > It is pretty clear that initial accuracy issues will become less and less of a problem as these technologies mature.

      What do you base this on?

      As someone who sees both the amazing things genAI can do and how utterly flawed most genAI output is, it's not obvious to me.

      I'm working with Claude every day, Opus 4.7, and reviewing a steady stream of PRs from coworkers who are all-in (not just using it due to a corporate mandate, like me), and I find an unending stream of stupidity and incomprehension from these bots that just astonishes me.

      Claude recently output this to me:

      "I've made those changes in three files:

      - File 1

      - File 2"

      That is a vintage hallucination that could've come right out of GPT 2.0.

      1 reply →

People will eventually figure out that LLMs have no capacity for intent and are fundamentally unreliable for tasks such as summarization, note taking, etc.

  • Smart people and those with basic common sense already have figured that out. AI leaders and CEOs still haven’t noticed.

Can someone who is a more AI heavy user explain what is going on?

I would expect an "AI Note Taker" to faithfully transcribe the entire conversation, with the same quality I see in a lot of automated video subtitles, i.e. they use the wrong word a lot but it's easy to tell what they mean from context.

Are these tools instead immediately summarising the whole thing, and that summary is the artifact? Because that is a beyond insane way to treat human communication.

  • I work specifically in voice AI and am very familiar with how these tools and systems work.

    > I would expect an "AI Note Taker" to faithfully transcribe the entire conversation, with the same quality I see in a lot of automated video subtitles, i.e. they use the wrong word a lot but it's easy to tell what they mean from context.

    That's a reasonable expectation, but not a safe one. Not all transcription tools are made the same. First, it depends on what kind of STT/ASR (speech-to-text / automatic speech recognition) model they are using. A lot of tools like to use some flavor of OpenAI's Whisper model. It generally works well, but I would never use it in a critical use case like healthcare, because it can hallucinate. That's specific to its architecture and how it was trained.

    There's a fairly large variety of architectures that can be used for STT/ASR. Some of them are designed for "offline" / "batch" / pre-recorded audio. Some are designed for fast real-time streaming transcription.

    There are more factors too, like training data. And not just the demographics of the speakers in the training data, but the audio environments too. Was the model trained on echo-y doctors' offices with two people being recorded from a crappy smartphone mic or desktop mic? (It could've been! But it's an important distinction.)

    And there are more factors than that, but you get the picture (e.g., are they trying to "clean up" the transcript afterwards by feeding it to an LLM; are they pre-processing the audio before transcription, also in an attempt to boost accuracy).

    There are a lot of ways to do it, meaning there are a lot of ways to screw it up.
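
    To make the failure mode concrete, here's the shape of the riskiest common pipeline (function names hypothetical, not any specific product): stage two is a generative model, so it can "fix" the transcript into fluent fabrication.

      def transcribe_visit(audio_path: str) -> str:
          # Stage 1: acoustic model. Its errors usually look wrong too
          # ("metoprolol" -> "met a pro call"), so humans tend to catch them.
          raw = asr_transcribe(audio_path)  # hypothetical ASR call

          # Stage 2: LLM "cleanup" / summarization. Its errors are fluent and
          # plausible ("metoprolol" -> "metformin"), which is exactly the kind
          # of mistake nobody thinks to question.
          return llm_rewrite(raw, "clean up and summarize")  # hypothetical

    The safer design treats stage 1's output as the record of truth and anything from stage 2 as a view that needs verification.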