Comment by eastbound
3 days ago
How do you suggest to deal with Gemini? It's extremely useful for understanding whether something is worrying or not. Whether we like it or not, it's a main participant in the discussion.
Ideally, hold Google liable until their AI no longer confabulates medical advice.
Realistically, sign a EULA waiving your rights because their AI confabulates medical advice.
Apparently we should hire the Guardian to evaluate LLM output accuracy?
Why are these products being put out there for these kinds of things with no attempt to quantify accuracy?
In many areas AI has become this toy that we use because it looks real enough.
It sometimes works for some things in math and science because we test its output, but overall you can't go to Gemini and have it tell you "there's an 80% chance this is correct". At least then you could evaluate that claim.
There's a kind of task LLMs aren't well suited to because there's no intrinsic empirical verifiability, for lack of a better way of putting it.
Because $$$
Because privatized $$$ and public !!!
> How do you suggest to deal with Gemini?
Don't. I do not ask my mechanic for medical advice; why would I ask a random output machine?
This "random output machine" is already in large use in medicine so why exactly not? Should I trust the young doctor fresh out of the Uni more by default or should I take advises from both of them with a grain of salt? I had failures and successes with both of them but lately I found Gemini to be extremely good at what it does.
The "well we already have a bunch of people doing this and it would be difficult to introduce guardrails that are consistently effective so fuck it we ball" is one of the most toxic belief systems in the tech industry.
> This "random output machine" is already in large use in medicine
By doctors. It's like handling dangerous chemicals. If you know what you're doing you get some good results, otherwise you just melt your face off.
> Should I trust the young doctor fresh out of the Uni
You trust the process that got the doctor there. The knowledge they absorbed, the checks they passed. The doctor doesn't operate in a vacuum; there's a structure in place to validate critical decisions. Anyway, you won't blindly trust one young doctor: if it's important, you get a second opinion from another qualified doctor.
In the fields I know a lot about, LLMs fail spectacularly so, so often. Having that experience and knowing how badly they fail, I have no reason to trust them in any critical field where I cannot personally verify the output. A medical AI could enhance a trained doctor, or give false confidence to an inexperienced one, but on its own it's just dangerous.
> This "random output machine" is already in large use in medicine so why exactly not?
Where does "large use" of LLMs in medicine exist? I'd like to stay far away from those places.
I hope you're not referring to machine learning in general, as there are worlds of differences between LLMs and other "classical" ML use cases.
An LLM is just a tool, and how the tool is used is also an important question. People vibe code these days, sometimes without proper review, but do you want them to vibe code a nuclear reactor controller without reviewing the code?
In principle we could let anyone use LLMs for medical advice, provided they understand that LLMs are not reliable. But LLMs are engineered to sound reliable, and people often just believe their output. And cases have shown that this can have severe consequences...
There's a difference between a doctor (an expert in their field) using AI (specialising in medicine) and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.
- The AI that are mostly in use in medicine are not LLMs
- Yes. All doctors' advice should be taken cautiously, and every doctor recommends you get a second opinion for that exact reason.
> How do you suggest to deal with Gemini?
Robust fines based on a percentage of revenue whenever it breaks the law would be my preference. I'm not here to attempt solutions to Google's self-inflicted business-model challenges.
If it's giving out medical advice without a license, it should be banned from giving medical advice and the parent company fined or forced to retire it.
No.
"Whether we like it or not" is LLM inevitabilism.
https://news.ycombinator.com/item?id=44567857
Yes.
Counterpoint: LLMs are inevitable.
Can't put that genie back in the bottle, no matter how much the powers-that-be may wish. Such is the nature of (technological) genies.
The only way to 'stop' LLMs is to invent something better.
Depends on whether the cost of training can be recouped (with profit) from the proceeds of usage. Plenty of inventions prove themselves to be uneconomic.
Thought-terminating cliché.
As a certified electrical engineer, I'm staggered by the number of times Google's LLM has suggested something that would have, at minimum, started a fire.
I have the capacity to know when it is wrong, but I teach this at university level. What worries me are the people at the starting end of the Dunning-Kruger curve who would take that wrong advice and start "fixing" things in spaces where this might become a danger to human life.
No information is superior to wrong information presented in a convincing way.