Comment by miltonlost

15 hours ago

oh god using an LLM for medical advice? and maybe getting 3/5 right? Barely above a coin flip.

And that Warning section? "Do not be wrong. Give the correct names." That this needs to be included at all is an idiotic product "choice": the very fact that it's necessary implies that without it the bot will be wrong and give wrong names. This is not engineering.

Not if you're selecting out of 10s or 100s of possible diagnoses

  • It's hard to characterize the entropy of the distribution of potential diseases given a presentation: even if there are in theory many potential diagnoses, in practice a few will be a lot more common (rough sketch at the end of this comment).

    It doesn't really matter how much better the model is than random chance on a sample size of 5, though. There's a reason medicine is so heavily licensed: people die when they get uninformed advice. Asking o1 if you have skin cancer is gambling with your life.

    That's not to say AI can't be useful in medicine: not everyone has a dermatologist friend, after all, and I'm sure that for many underserved people basic advice is better than nothing. Tools could make the current medical system more efficient. But you would need to do far more work than whatever this post did to ascertain whether that would do more good than harm. Can o1 properly direct people to a medical expert when there's a potentially urgent problem that can't be ruled out? Can it effectively disclaim its own advice when asked about something it doesn't know, the way human doctors refer patients to specialists?
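
    To put the entropy point above in rough numbers (every figure here is made up purely for illustration; none of it comes from the post):

      import math

      # Hypothetical skewed prior over 100 candidate diagnoses: a handful of
      # common conditions cover most presentations.
      probs = [0.30, 0.20, 0.15, 0.10, 0.05] + [0.20 / 95] * 95

      # Entropy of this prior vs. a uniform prior over the same 100 diagnoses.
      entropy = -sum(p * math.log2(p) for p in probs)
      print(f"entropy: {entropy:.2f} bits (uniform over 100 would be {math.log2(100):.2f})")

      # Two possible "chance" baselines: uniform random guessing is right 1% of
      # the time, while always guessing the single most common condition is
      # right 30% of the time under this prior. Which baseline you compare a
      # 60% hit rate against matters a lot.
      print(f"uniform-guess accuracy: {1 / len(probs):.0%}")
      print(f"always-guess-the-mode accuracy: {probs[0]:.0%}")

    The point being: you can't judge 3/5 without knowing the base rates of the conditions people are actually asking about.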

  • ?????? What?

    > Just for fun, I started asking o1 in parallel. It’s usually shockingly close to the right answer — maybe 3/5 times. More useful for medical professionals — it almost always provides an extremely accurate differential diagnosis.

    THIS IS DANGEROUS TO TELL PEOPLE TO DO. OpenAI is not a medical professional. Stop using chatbots for medical diagnoses. 60% is not "almost always extremely accurate." This whole post, because of this bullet point, shows the author doesn't actually know the limitations of the product they're using and is instead passing along misinformation.

    Go to a doctor, not your chatbot.

    • I honestly think trusting your own doctor exclusively is a dangerous thing to do as well. Doctors are not infallible.

      It's worth putting in some extra effort yourself, which may include consulting LLMs, provided you don't trust them blindly and are sensible about how you incorporate the hints they give into your own research.

      Nobody is as invested in your own health as you are.