
Comment by myhf

3 days ago

If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.

https://www.fda.gov/medical-devices/digital-health-center-ex...

How do you suggest we deal with Gemini? It's extremely useful for understanding whether something is worrying or not. Whether we like it or not, it is a major participant in the discussion.

  • Ideally, hold Google liable until their AI stops confabulating medical advice.

    Realistically, sign a EULA waiving your rights because their AI confabulates medical advice.

  • Apparently we should hire the Guardian to evaluate LLM output accuracy?

    Why are these products being put out there for these kinds of things with no attempt to quantify accuracy?

    In many areas AI has become this toy that we use because it looks real enough.

    It sometimes works for some things in math and science because we test its output, but overall you don't go to Gemini and have it tell you "there's an 80% chance this is correct". At least then you could evaluate that claim.

    There's a kind of task LLMs aren't well suited to because there's no intrinsic empirical verifiability, for lack of a better way of putting it.

  • > How do you suggest we deal with Gemini?

    Don't. I do not ask my mechanic for medical advice; why would I ask a random output machine?

    • This "random output machine" is already in large use in medicine so why exactly not? Should I trust the young doctor fresh out of the Uni more by default or should I take advises from both of them with a grain of salt? I had failures and successes with both of them but lately I found Gemini to be extremely good at what it does.


  • > How do you suggest we deal with Gemini?

    My preference would be robust fines based on a percentage of revenue whenever it breaks the law. I'm not here to attempt solutions to Google's self-inflicted business-model challenges.

  • If it's giving out medical advice without a license, it should be banned from giving medical advice and the parent company fined or forced to retire it.

  • As a certified electrical engineer, I am staggered by the number of times Google's LLM has suggested something that would, at minimum, have started a fire.

    I have the capacity to know when it is wrong, but then again, I teach this at university level. What worries me are the people at the starting end of the Dunning-Kruger curve who take that wrong advice and start "fixing" things in spaces where it could become a danger to human life.

    No information is superior to wrong information presented in a convincing way.