Comment by atmosx

1 day ago

I agree. LLMs cannot and should not replace professionals, but there are huge gaps that can be filled by the introduction they provide, and the fact that you can dig deeper into any subject is huge.

This is probably a field where MistralAI could use privacy and GDPR as leverage to build LLMs around.

One of the big issues I have with LLMs is that when you start a prompting session with an easy question, it all goes great. The LLM brings up points you might not have considered and appears very knowledgeable. Fact-checking at this stage will show the LLM is invariably correct.

Then you start "digging deeper" into a specific sub-topic, and this is where the risk of an incorrect response grows. But it is easy to continue with the assumption that the text you are getting is accurate.

This has happened so many times with the computing/programming-related topics I usually prompt about that there is no way I would trust a response from an LLM on health-related issues I am not already very familiar with.

Given that the LLM will give incorrect information (after lulling people into a false sense of its accuracy), who is going to be responsible for the person who makes themselves worse off through self-diagnosis, even with a privacy-focused service?

  • That's a good point—and I have probably fallen victim to it as well: the "sliding scale" of an LLM's authority.

    Like you, I fact-check it (well, search the internet to see if others validate the claims/points), but I don't do so with every response.

  • The responsibility always falls to the patient. That's true with doctors as well: you visit two doctors and they give you different diagnoses; one tells you to go for surgery, the other tells you it's not worth the hassle. Who decides? The patient does.

    LLMs are yet another powerful tool in our belt; you know they hallucinate, so be careful. That said, even asking for specialized info about this or that medical topic can be a great thing for patients. That's why I believe it's a good thing to have specialized LLMs that can tailor responses to individual health situations.

    The problem is the framework and the end goal of the implementation. IMO, state-owned health data is a goldmine for any social welfare system, and now with AI they can make use of it in novel ways.