Comment by bonesss

4 hours ago

My sense is that we’re misapplying the technology by throwing it at, say, transcription and expecting a perfect output, instead of using LLMs’ strengths to improve inputs to the benefit of all parties.

Freeing up doctor time, for example: lots of patient visits are messy. The patient is scattered, has multiple issues, and the doctor has tight timelines and regulatory requirements to convey to the patient, all of which impact their care… this is architected for everyone to lose, IMO, even with a perfect transcript. And LLMs can’t be perfect; they autocomplete.

I picture patients interacting with an intake AI that can listen to hours of demented rambling, or to a patient mid anxiety attack, and provide a caregiver-certified summary of needs, with relevant screening information laid out for doctor confirmation. At that point, helpful information about drug access or insurance policies can be presented to a patient who can clarify and refine their understanding of the system without time pressure, again subject to doctor confirmation.

The goal is elevating the quality of dialogue so the doctor is more focused on the patient, and the patient’s dialogue needs don’t overwhelm treatment. A lot of medicine is filling out forms and checklists; I think auto-complete could create efficiencies in how we fulfill that.

Yeah, I could see AI being used for intake. That's a good point. The doctor would then get some baseline info they can use when they talk to the patient. Maybe even some really beautiful data visualizations, showing the doctor all the different symptoms the patient reported.