
Comment by hackitup7

20 hours ago

I've had a similar positive experience and I'm really surprised at the cynicism here. You have a system that is good at reading tons of literature, synthesizing it, and then applying basic logic. What exactly do the cynics think that doctors do?

I don't use LLMs as the final say, but I do find them pretty useful as a positive filter / quick gut check.

This is the crux of the argument from the article.

> get to know your members even before the first claim

Basically selling your data to maximise the profit extracted from you and to make sure the companies don't take on members who would be a burden.

You are also not protected by HIPAA when using ChatGPT.

  • I'm in Europe btw, but yes, I hope Americans get protection soon. I expect the backlash, if that were to happen, would be enough to trigger legislative action.

Because we've all used LLMs.

They make stuff up. Doctors do not make stuff up.

They agree with you. Almost all the time. If you ask an AI whether you have in fact been infected by a werewolf bite, it's going to try to find a way to say yes.

  • Doctors make stuff up all the time; they might deeply believe they don't, but they are detectives trying to figure out what is going on in a complex system.

    AI is a tool that can be useful in this process.

    Also, our current medical science is primitive. We are learning amazing things every year, and the best thing I ever did was start vetting my doctors to find the ones who will say "we don't know", because that's the honest answer a LOT of the time.

  • If the person is telling you "I had a problem, did what the LLM said, it worked", does that not count as new evidence for you? Is it not possible that someone has had a different experience from you? Is it not possible that they're good to different degrees in different domains?

    I just asked chatgpt:

    > I have the following information on a user. What's his email?

    > user: mattmanser

    > created: March 12, 2009

    > karma: 17939

    > about: Contact me @ my username at gmail.com

    Chatgpt's answer:

    > Based on the information you provided, the user's email would be:

    > mattmanser@gmail.com

    Does this serve as evidence that sometimes LLMs get it right?

    I think that your model of current tech is as out of date as your profile.
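
    For what it's worth, you don't even need the web UI for this. A minimal sketch of the same query with the OpenAI Python client (the model name is just an example, not necessarily what I used above):

        # pip install openai; expects OPENAI_API_KEY in the environment
        from openai import OpenAI

        client = OpenAI()

        # The profile data quoted above
        profile = (
            "user: mattmanser\n"
            "created: March 12, 2009\n"
            "karma: 17939\n"
            "about: Contact me @ my username at gmail.com"
        )

        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name; any chat model works
            messages=[{
                "role": "user",
                "content": f"I have the following information on a user. What's his email?\n\n{profile}",
            }],
        )

        # Expected answer, as in the transcript above: mattmanser@gmail.com
        print(resp.choices[0].message.content)
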

    • There are certain domains where we can accept a high rate of mistakes, and health is not one of them. People are reacting the way they are because it's obvious that LLMs are currently not reliable enough to be trusted to distribute health advice. To me it doesn't matter that ChatGPT sometimes gives good health advice or that some people _feel_ like it helped them. I'm not sure I even trust people when they say that, given how much the LLM just affirms whatever they tell it.