Comment by wildzzz
25 days ago
The one thing a real doctor can do is actually touch the patient and run tests, even simple things like using a stethoscope. At best, an AI "doctor" is just comparing patient-provided symptoms to a lookup table of conditions. No better than what WebMD used to do (still does?) when you would answer a questionnaire and be provided with a list of conditions ranging from a cold to the bubonic plague. AI loves taking everything you say at face value; it doesn't know how to think critically. While doctors shouldn't think of a patient as an adversary, patients often lie or unintentionally obscure symptoms or the severity of symptoms. Even the most junior doctor can provide a more thorough examination over the phone or through chat than an AI that believes everything it hears.
I remember trying to talk to WebMD when I had pain in my side, and appendicitis was near the bottom of the list; the top results were either nothing serious or highly improbable. The pain didn't seem as bad as appendicitis pain was supposed to be based on the descriptions. My mother got her doctor to call me, and he walked me through some touching and said "you likely have appendicitis, don't talk to WebMD next time." I went to the hospital that night, and the doctor there told me I was likely hours away from a burst appendix. I can only imagine what nonsense ChatGPT would have told me.
Just like with most professions, the real world is nothing like the textbook. Being able to pass a medical exam doesn't necessarily mean you're going to be a good doctor. Most of the exam is taken during med school, and the final portion is only taken after the first year of residency. They still have at least another few years of residency after passing the USMLE, and that's with supervision under an attending doctor. Being able to pass the USMLE is not equivalent to being a successful doctor with years of experience.