Comment by nomel
15 hours ago
> It is irresponsible for these companies
I would claim that ignoring the "ChatGPT is AI and can make mistakes. Check important info." text, right under the query box in the client, is clearly more irresponsible.
I think that a disclaimer like that is the most useful and reasonable approach for AI.
"Here's a tool, and it's sometimes wrong." means the public can have access to LLMs and AI. The alternative, that you seem to be suggesting (correct me if I'm wrong), means the public can't have access to an LLM until they are near perfect, which means the public can't ever have access to an LLM, or any AI.
What do you see as a reasonable approach to letting the public access these imperfect models? Training? Popups/agreement after every question "I understand this might be BS"? What's the threshold for quality of information where it's no longer considered "broken"? Is that threshold as good as or better than humans/news orgs/doctors/etc?
The issue is that, whilst the warning exists and is front and centre, the marketing around ChatGPT etc. - which is absolutely deafening in volume and enthusiasm - claims that they're PhD-level experts and can do anything.
This marketing obscures what the software is _actually_ good at and gives users a poor mental model of what's going on under the hood. Dumping years' worth of undifferentiated health data into a generic ChatGPT chat window seems like a fundamental misunderstanding of the strengths of large language models.
A reasonable approach would be to try to explain what kind of tasks these models do well at and what kind of situations they behave poorly in.
Why are you assuming that the general public ought to have access to imperfect tools?
I live in a place where getting a blood test requires a referral from a doctor, who is also required to discuss the results with you.
> Why are you assuming that the general public ought to have access to imperfect tools?
Could you tell me which source of information you see as "perfect" (or acceptable), as a good example of the threshold for what you think the public should and should not have access to?
Also, what if a tool still provides value to some users, in some contexts, but not to others, in different contexts (for example, when the tool is used wrong)?
For the "tool" perspective, I've personal never seen a perfect tool. Do you have an example?
> I live in a place where getting a blood test requires a referral from a doctor, who is also required to discuss the results with you.
I don't see how this is relevant. In the above article, the user went to their doctor for advice and a referral. But in the US (and many European countries) blood tests aren't restricted, and can be had from private labs out of pocket, since they're just measurements of things that exist in your blood, and not allowing you to know what's inside of you would be considered government overreach/a privacy violation. Medical interpretation/advice based on those measurements is what's restricted, in most places.
> Could you tell me which source of information do you see as "perfect" (or acceptable) that you see as a good example of a threshold for what you think the public should and should not have access to?
I know it when I see it.
> I don't see how this is relevant.
It's relevant because blood testing is an imperfect tool. Laypeople lack the knowledge/experience to identify imperfections and are likely to take results at face value. Like the author of the article did when ChatGPT gave them an F for their cardiac health.
> Medical interpretations/advice from the measurements is what's restricted, in most places.
Do you agree with that restriction?
> I live in a place where getting a blood test requires a referral from a doctor,
To me, this is horrific. I am the advocate for my own health. I trust my doctor - he's a great guy. I have spoken to him extensively around a variety of health matters and I greatly trust his opinion.
But I also recognize that he has many other patients and by necessity has to work within the general lines of probability. There is no way for him to know every confounding and contributing factor of my health, no matter how diligent I am in filling out my chart.
I get my own bloodwork done regularly. This has let me make significant changes in my life to improve health markers. I can also get a much broader spectrum of tests done than the standard panel. This has directly led to productive conversations with my doctor!
And from a more philosophical standpoint, this is about understanding my own body. The source of the data is me. Why should this be gatekept behind a physician referral? I find it insane to think that I could be in a position where I am not allowed to find out the serum cholesterol levels in my blood unless a doctor OKs it! What the fuck?
> I live in a place where getting a blood test requires a referral from a doctor, who is also required to discuss the results with you.
You’re saying it like it’s a good thing.
Oh I have a plan for this.
Allow it to answer general questions about health, medicine and science.
It can’t practice medicine, it can only be a talking encyclopedia that tells you how the heart works and how certain biomarkers are used. Analyzing your specific case or data is off limits.
And then when the author asks his question, it says it’s not designed to do that.
> Popups/agreement after every question "I understand this might be BS"?
Considering the number of people who take LLM responses as authoritative Truth, that wouldn't be the worst thing in the world.
The problem is that AI companies are selling, advertising, and shipping AI as a tool that works most of the time for what you ask it to do. That’s deeply misleading.
The product itself is telling you in plain English that it’s ABSOLUTELY CERTAIN about its answer… even when you challenge it and try to rebut it. And the text of the product itself is much more prominent than the little asterisk “oh no, it’s actually lying because the LLM can never be that certain.” That’s clearly not a responsible product.
I opened the ChatGPT app right now and there is literally nothing about double checking results. It just says “ask anything,” in no uncertain terms, with no fine print.
Here’s a recent ad from OpenAI: https://youtu.be/uZ_BMwB647A, and I quote “Using ChatGPT allowed us to really feel like we have the facts and our doctor is giving us his expertise, his experience, his gut instinct” related to a severe health question.
And another recent ad related to analyzing medical scans: “What’s wonderful about ChatGPT is that it can be that cumulative source of information, so that we can make the best choices.” (https://youtu.be/rXuKh4e6gw4)
And yet another recent ad, where lots of users are using ChatGPT to get authoritative answers to health questions. They even say you can take a picture of a meal before you eat and after you eat, and have it generate the amount of calories you ate! Just based on the difference between the pictures! How has that been tested and verified? (https://youtu.be/305lqu-fmbg)
Now, some of the ads have users talking to their doctors, which is great.
But they are clearly marketing ChatGPT as the tool to use if you want to arrive at the truth. No asterisks. No “but sometimes it’s wrong and you won’t be able to tell.” There’s nothing to misunderstand about these ads: OpenAI is telling you that ChatGPT is trustworthy.
So I reject the premise that it’s the user’s fault for not using enough caution with these tools. OpenAI is practically begging you to jump in and use it for personal, life or death type decisions, and does very little to help you understand when it may be wrong.
> "ChatGPT is AI and can make mistakes. Check important info."
is the same thing that could be said about any human:
> "Doctor is human and can make mistakes"
Therefore it really isn't sufficient, because it fails to make clear that the AI is wrong in different ways than a human, and worse than one.