Comment by menaerus
3 days ago
This "random output machine" is already in large use in medicine so why exactly not? Should I trust the young doctor fresh out of the Uni more by default or should I take advises from both of them with a grain of salt? I had failures and successes with both of them but lately I found Gemini to be extremely good at what it does.
The "well we already have a bunch of people doing this and it would be difficult to introduce guardrails that are consistently effective so fuck it we ball" is one of the most toxic belief systems in the tech industry.
> This "random output machine" is already in large use in medicine
By doctors. It's like handling dangerous chemicals. If you know what you're doing you get some good results, otherwise you just melt your face off.
> Should I trust the young doctor fresh out of the Uni
You trust the process that got the doctor there: the knowledge they absorbed, the checks they passed. The doctor doesn't operate in a vacuum; there's a structure in place to validate critical decisions. And you wouldn't blindly trust one young doctor anyway; if it's important, you get a second opinion from another qualified doctor.
In the fields I know a lot about, LLMs fail spectacularly so, so often. Having that experience and knowing how badly they fail, I have no reason to trust them in any critical field where I cannot personally verify the output. A medical AI could enhance a trained doctor, or give false confidence to an inexperienced one, but on its own it's just dangerous.
> This "random output machine" is already in large use in medicine so why exactly not?
Where does "large use" of LLMs in medicine exist? I'd like to stay far away from those places.
I hope you're not referring to machine learning in general, as there are worlds of differences between LLMs and other "classical" ML use cases.
https://executiveeducation.mayo.edu/products/foundations-of-...
Instead of asking me to spend $150 and 4 hours, could you maybe just share the insights you gained from this course?
An online course, even if offered by a reputable medical institution, hardly backs your argument.
An LLM is just a tool, and how the tool is used is also an important question. People vibe code these days, sometimes without proper review, but would you want them to vibe code a nuclear reactor controller without reviewing the code?
In principle we could just let anyone use an LLM for medical advice, provided they understand that LLMs are not reliable. But LLMs are engineered to sound reliable, and people often just believe their output. And cases have shown that this can have severe consequences.
There's a difference between a doctor (an expert in their field) using AI (specialising in medicine) and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.
Even so, it's rather common for doctors not to be able to diagnose correctly. It's a guessing game for them too. I don't know as much about the US, but it's a real problem in large parts of the world. As the comment stated, I would take anything a doctor says with a pinch of salt, particularly when the problem is not obvious.
These things are not equivalent.
This is really not that far off from the argument that "well, people make mistakes a lot, too, so really, LLMs are just like people, and they're probably conscious too!"
Yes, doctors make mistakes. Yes, some doctors make a lot of mistakes. Yes, some patients get misdiagnosed a bunch (because they have something unusual, or because they are a member of a group—like women, people of color, overweight people, or some combination—that American doctors have a tendency to disbelieve).
None of that means it's a good idea to replace those human doctors with LLMs that occasionally make up brand-new diseases that don't exist.
It takes 10 years of hard work to become a proficient engineer too, yet that doesn't stop us from missing things. That argument cannot hold. AI is already widespread in medical treatment.
An engineer is not a doctor, nor a doctor an engineer. Yes, AI is being used in medicine, as a tool for the professional, and that's the right use for it. Helping a radiologist read an X-ray, MRI, or CT scan, helping a doctor create an effective treatment plan, warning a pharmacologist about unsafe combinations (dangerous drug interactions) when different medications are prescribed, etc. are all areas where AI can make the job of a professional easier and better, and also help build better AI.
When a doctor gets it wrong they end up in a courtroom, lose their job and the respect of their peers.
Nobody at Google gives a flying fuck.
Why stop at AI? By that same logic, we should ban non-doctors from being allowed to Google anything medical.
Nobody can (or should) stop you from learning and educating yourself. That doesn't mean, however, that being able to use Google or AI makes you a doctor:
- Bihar teen dies after ‘fake doctor’ conducts surgery using YouTube tutorial: Report - https://www.hindustantimes.com/india-news/bihar-teen-dies-af...
- Surgery performed while watching YouTube video leaves woman dead - https://www.tribuneindia.com/news/uttar-pradesh/surgery-perf...
- Woman dies after quack delivers her baby while watching YouTube videos - https://www.thehindu.com/news/national/bihar/in-bihar-woman-...
Educating a user about their illness and treatment is a legitimate use case for AI, but acting on its advice to treat yourself or self-medicate would be plain stupidity. (Thankfully, self-medicating isn't that easy, because most medications require a prescription. However, so-called "alternative" medicines are often a grey area, even with regulations, for example in India.)
- The AI that are mostly in use in medicine are not LLMs
- Yes. All doctors' advice should be taken cautiously, and every doctor recommends you get a second opinion for that exact reason.