Comment by tgtweak
10 days ago
Much like talking to your doctor - you need to ask/prompt the right questions. I've seen ChatGPT and Gemini make one false assumption that was never mentioned, run with it, and keep referencing it down the line as if it were fact... That can be extremely dangerous if you don't know enough to ask it to reframe, verify, or correct its assumption.
If you are using it like a tool to review/analyze or simplify something - e.g. explain risk stratification for a particular cancer variant and what is taken into account, or ask it to provide probabilities and ranges for survival based on age/medical history - it's usually on the money.
Every other caveat mentioned here is valid, and it applies to many domains, not just medicine.
I did get hematologist/oncologist-level advice out of ChatGPT 4o based on labs, PCR tests and symptoms - and it turned out to be 100% accurate based on how things panned out in the months that followed and ultimately the treatment that was given. Doctors do not like to tell you the good and the bad candidly - it's always "we'll see what the next test says but things look positive" and "it could be as soon as 1 week or as long as several months depending on what we find" when they know full well you're in there for 2 months at minimum unless you're a miracle case. Only once cornered or prompted will they give you a larger view of the big picture. The same is true for most professional fields.
The one thing a real doctor can do is actually touch the patient and run tests, even simple things like using a stethoscope. At best, an AI "doctor" is just comparing patient-provided symptoms to a lookup table of conditions. No better than what WebMD used to do (still does?) when you would answer a questionnaire and be given a list of conditions ranging from a cold to the bubonic plague. AI loves taking everything you say at face value; it doesn't know how to think critically. While doctors shouldn't think of a patient as an adversary, patients often lie or unintentionally obscure symptoms or their severity. Even the most junior doctor can provide a more thorough examination over the phone or through chat than an AI that believes everything it hears.
I remember trying to talk to WebMD when I had pain in my side, and appendicitis was near the bottom of the list; the top results were either nothing serious or highly improbable. The pain didn't seem as bad as appendicitis pain was supposed to be based on the descriptions. My mother got her doctor to call me, and he walked me through some touching and said "you likely have appendicitis, don't talk to WebMD next time." I went to the hospital that night, and the doctor there told me I was likely hours away from a burst appendix. I can only imagine what nonsense ChatGPT would have told me.
Just like with most professions, the real world is nothing like the textbook. Being able to pass a medical exam doesn't necessarily mean you're going to be a good doctor. Most of the USMLE is taken during med school, and the final portion is only taken after the first year of residency. After passing, they still have at least another few years of residency, all under the supervision of an attending physician. Being able to pass the USMLE is not equivalent to being a successful doctor with years of experience.