Comment by jsheard

1 day ago

On the other hand, sometimes you end up like this guy. Are you feeling lucky?

https://arstechnica.com/health/2025/08/after-using-chatgpt-m...

You could also list plenty of horror stories where people went to medical professionals and got screwed over. There is this myth that people can go to doctors and get perfect attention and treatment; reality is far from that.

  • There’s the concept of “personal advocacy” when receiving healthcare. Unfortunately, you’ll only get the best outcomes if you continually seek out treatment with diligence and patience.

    But framing it as a “myth [of] perfect attention and treatment” sounds a bit like delegitimizing the entire healthcare industry in a way that makes me raise my eyebrow.

    • "But framing it as a “myth [of] perfect attention and treatment” sounds a bit like delegitimizing the entire healthcare industry in a way that makes me raise my eyebrow."

      It doesn't delegitimize the whole industry. It points out real problems. A lot of patients aren't given enough attention and don't get the correct treatment because their doctors didn't listen and rushed through things.

    • I'd say the healthcare industry works hard but is probably operating at something like 20% of its possible productivity due to systemic issues.

    • Yes, there's been tension between personal advocacy and the system for a long time. Doctors roll their eyes when a patient mentions they self-diagnosed on WebMD. LLMs will accelerate self-diagnosis immensely. This has the potential to help patients, but it's just a starting point; of course, it should be verified by actual trained doctors.

  • A big part of the legal implications of LLMs and AI in general is about accountability.

    If you are treated by a human being and it goes sideways, you can sue them and/or the hospital. Granted, you may not always win and it may take some time, but there is some chance.

    If you are "treated" by an LLM and it goes sideways, good luck trying to sue OpenAI or whoever is running the model. It's not a coincidence that LLM providers are trying to put disclaimers and/or claims in their ToS that LLM advice is not necessarily good.

    Same goes for privacy. Doctors and hospitals are regulated in a way that gives you a reasonable, often very strong, expectation of privacy; consider doctor-patient confidentiality, for example. This doesn't mean leaks never happen, but you can hold someone accountable. If you send your medical data to ChatGPT and there is a leak, are you going to sue OpenAI?

    The answer in both cases is: yes, you should probably be able to sue an LLM provider. But because LLM providers have a lot of money (way more than any hospital!), are usually global (jurisdiction could be challenging), and often state themselves that LLM advice is not necessarily good (which doctors cannot say that easily), you may find that far more challenging than suing a doctor or a hospital.

  • Are medical professionals not usually held accountable, globally speaking?

    • Lawsuits against medical professionals are difficult, and in many cases impossible, for the average person to win. They are held less accountable than other professions.

"…a 60-year-old man who had a “history of studying nutrition in college” decided to try a health experiment: He would eliminate all chlorine from his diet…"

You can see already that this can easily go sideways; this guy is exploring the nether regions of self-medication.

It would be ideal if LLMs recognized this and would not happily offer up bromine as a substitute for chlorine, but I suspect this guy would have greedily looked for other shady advice if LLMs had never existed.

No, there's a difference between radically changing your diet and changing up your stretch/strength routine. You don't just "end up" like one of them; you can evaluate that the downside risk of the latter is much lower and try it safely, while recognizing that an extreme diet might not be so safe to try without professional guidance.

You have to use your head, just like online forums or with doctors :)

I've had doctors tell me to do insane things, some of which caused lasting damage. Better to approach both humans and AI with a trust-but-verify attitude.

The man in the article did not use it as a research aid and did not verify the advice with experts.

So what's your argument?

Did he also drive into a lake following Google Maps' driving directions?