Comment by thisislife2

2 days ago

There's a difference between a doctor (an expert in their field) using AI (specialising in medicine) and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.

Even so, it's rather common for doctors to not be able to diagnose correctly. It's a guessing game for them too. I don't know so much about the US, but it's a real problem in large parts of the world. As the comment stated, I would take anything a doctor says with a pinch of salt, particularly when the problem is not obvious.

  • These things are not equivalent.

    This is really not that far off from the argument that "well, people make mistakes a lot, too, so really, LLMs are just like people, and they're probably conscious too!"

    Yes, doctors make mistakes. Yes, some doctors make a lot of mistakes. Yes, some patients get misdiagnosed a bunch (because they have something unusual, or because they are a member of a group—like women, people of color, overweight people, or some combination—that American doctors have a tendency to disbelieve).

    None of that means that it's a good idea to replace those human doctors with LLMs that can make up brand-new diseases that don't exist occasionally.

It takes 10 years of hard work to become a proficient engineer too, yet that doesn't stop us from missing things. That argument cannot hold. AI is already widespread in medical treatment.

  • An engineer is not a doctor, nor a doctor an engineer. Yes, AI is being used in medicine - as a tool for the professional - and that's the right use for it. Helping a radiologist read an X-ray, MRI scan or CT scan, helping a doctor create an effective treatment plan, warning a pharmacologist about unsafe combinations (dangerous drug interactions) when different medications are prescribed, etc. are all areas where AI can make the job of a professional easier and better, and also help create better AI.

    • And where did I claim otherwise? You're not disagreeing with me, only reinforcing my point.

  • When a doctor gets it wrong they end up in a courtroom, lose their job and the respect of their peers.

    Nobody at Google gives a flying fuck.

    • Not really, those are exceptional cases. For most misdiagnoses, or failures to diagnose at all, nothing happens to the doctor.

Why stop at AI? By that same logic, we should ban non-doctors from being allowed to Google anything medical.