Comment by Groxx
12 hours ago
Yep. It happened to me just recently.
Diagnosed with Runner's Knee.
The AI summary said I was diagnosed with osteoporosis and had hip pain and walking difficulty, though literally none of that was ever said or implied.
CHECK YOUR TRANSCRIPTS. Always, but especially with LLM transcribers, which fairly frequently include common symptoms that don't exist, or claim a diagnosis that is common and fits a few details but not others. Get them fixed; an incorrect record can seriously affect your care and your costs later.
Anecdotally, I'd say that outside of a couple of very simple and very common things, about 50% of the "AI" summaries I've had have been wrong somewhere. Usually they claim symptoms that don't exist; occasionally it's a much more serious fabrication, like this time.
LLMs are NOT normal speech-to-text software, and they shouldn't be treated like it. They'll often insert entire sentences that were never spoken. In some contexts that might be fine, but definitely not in medical records.
I've actually seen this lead to serious issues when a Zoom LLM summary attributed statements to someone who didn't say them.
Someone who couldn't attend the meeting read that summary later, and it created a major argument: the topic was a sore subject for him because of an ongoing debate at the company. Everyone who actually attended confirmed the statement was an error, but the coincidental timing made that hard for him to accept, because the LLM's summary presented things in a way that validated concerns of his that some people in that meeting had previously minimized.
The drama got heated enough that management produced a policy about not trusting generative output without independent verification. At least it seems a lesson was learned.