Comment by arkh

2 days ago

> How do you suggest to deal with Gemini?

Don't. I do not ask my mechanic for medical advice, why would I ask a random output machine?

This "random output machine" is already in large use in medicine so why exactly not? Should I trust the young doctor fresh out of the Uni more by default or should I take advises from both of them with a grain of salt? I had failures and successes with both of them but lately I found Gemini to be extremely good at what it does.

  • The "well we already have a bunch of people doing this and it would be difficult to introduce guardrails that are consistently effective so fuck it we ball" is one of the most toxic belief systems in the tech industry.

  • > This "random output machine" is already in large use in medicine

    By doctors. It's like handling dangerous chemicals. If you know what you're doing you get some good results, otherwise you just melt your face off.

    > Should I trust the young doctor fresh out of the Uni

    You trust the process that got the doctor there: the knowledge they absorbed, the checks they passed. The doctor doesn't operate in a vacuum; there's a structure in place to validate critical decisions. Anyway, you wouldn't blindly trust one young doctor either; if it's important, you get a second opinion from another qualified doctor.

    In the fields I know a lot about, LLMs fail spectacularly so, so often. Having that experience and knowing how badly they fail, I have no reason to trust them in any critical field where I cannot personally verify the output. A medical AI could enhance a trained doctor, or give false confidence to an inexperienced one, but on its own it's just dangerous.

  • An LLM is just a tool, and how the tool is used is also an important question. People vibe code these days, sometimes without proper review, but do you want them to vibe code a nuclear reactor controller without reviewing the code?

    In principle we could let anyone use an LLM for medical advice, provided they understand that LLMs are not reliable. But LLMs are engineered to sound reliable, and people often just believe the output. And there have been cases showing that this can have severe consequences...

  • There's a difference between a doctor (an expert in their field) using AI specialised in medicine and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.

    • Even so, it's rather common for doctors not to be able to diagnose correctly. It's a guessing game for them too. I don't know so much about the US, but it's a real problem in large parts of the world. As the comment stated, I would take anything a doctor says with a pinch of salt, particularly when the problem is not obvious.

    • It takes 10 years of hard work to become a proficient engineer too, yet that doesn't stop us from missing things. That argument doesn't hold. AI is already widespread in medical treatment.

  • - The AI systems mostly in use in medicine are not LLMs.

    - Yes. All doctors' advice should be taken cautiously, and every doctor recommends you get a second opinion for that exact reason.