Comment by ekjhgkejhgk

17 hours ago

If a person is telling you "I had a problem, did what the LLM said, and it worked," does that not count as new evidence for you? Is it not possible that someone has had a different experience from yours? Is it not possible that LLMs are good to different degrees in different domains?

I just asked chatgpt:

> I have the following information on a user. What's his email?

> user: mattmanser

> created: March 12, 2009

> karma: 17939

> about: Contact me @ my username at gmail.com

Chatgpt's answer:

> Based on the information you provided, the user's email would be:

> mattmanser@gmail.com

Does this serve as evidence that LLMs sometimes get it right?

I think that your model of current tech is as out of date as your profile.

There are certain domains where we can accept a high degree of mistakes, and health is not one of them. People are reacting the way they are because it's obvious that LLMs are currently not reliable enough to be trusted to distribute health advice. To me it doesn't matter that ChatGPT sometimes gives good health advice, or that some people _feel_ like it helped them. I'm not sure I even trust people when they say that, given how much the LLM just affirms whatever they tell it.

  • I'm not saying LLMs don't make mistakes, I'm saying that they sometimes get it right, so you shouldn't be surprised when you hear some positive stories.

    But you don't even believe these are success stories then, so I guess you can never be convinced.