Comment by simianwords

> Their value prop is curation of trusted medical sources

Interesting. Why wouldn't an LLM-based search provide the same thing? Just ask it to "use only trusted sources".

They're building a moat with data. They're building their own datasets of trusted sources, using their own teams of physicians and researchers. They've got hundreds of thousands of physicians asking millions of questions every day. None of the labs have this sort of data coming in or this sort of focus on such a valuable niche.

  • > They're building their own datasets of trusted sources, using their own teams of physicians and researchers.

    Oh, so they're not just helping with search but also curating the data.

    > They've got hundreds of thousands of physicians asking millions of questions every day. None of the labs have this sort of data coming in or this sort of focus on such a valuable niche.

    I don't take this too seriously because lots of physicians use ChatGPT already.

    • Lots of physicians use ChatGPT, but so do lots of non-physicians, and I suspect there's some value in knowing which are which.

I don't think you can use an LLM for that, for the same reason you can't just ask it to "make the app secure and fast".

  • This is completely incorrect. This is exactly what LLMs can do better.

    • Somebody should tell the Claude Code team then. They've had some perf issues for a while now.

      More seriously, the concept of trust is extremely lossy. The LLM is gonna lean in one direction that may or may not be correct. At the extreme, it would likely refute a new discovery that went against what we currently know. In a more realistic version, certain AIs are more pro-Zionist than others.

Yes, they can. We've gotten better at grounding LLMs in specific sources and providing accurate citations. Those go some distance toward establishing trust.
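
To make "grounding" concrete, here's a minimal sketch in Python: the model is only allowed to answer from a curated set of passages and must cite each one by id, so the answer can be checked against its sources. The passage ids, text, and prompt wording are all made up for illustration; a real system would add retrieval over the trusted corpus and verification of the cited passages on top of this.

```python
# Minimal sketch of grounding with citations. All ids, passage text,
# and prompt wording are hypothetical, purely to show the shape of
# the technique: restrict the model to vetted passages and require
# it to cite them so a reader can verify the claims.

TRUSTED_PASSAGES = {
    # Made-up ids and snippets standing in for a vetted corpus.
    "nejm-2023-0412": "Drug X raises INR when combined with warfarin...",
    "cochrane-2022-114": "A 2022 review found no interaction between...",
}

def grounded_prompt(question: str, passages: dict[str, str]) -> str:
    """Build a prompt that restricts the model to the given passages."""
    context = "\n".join(f"[{pid}] {text}" for pid, text in passages.items())
    return (
        "Answer using ONLY the passages below, citing the [id] of each "
        "passage you rely on. If the passages don't contain the answer, "
        "say you don't know rather than guessing.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

print(grounded_prompt("Does drug X interact with warfarin?", TRUSTED_PASSAGES))
```

The ids are the point: they turn "trust me" into something a physician can spot-check.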

There is trust, and then there is accountability.

At the end of the day, a business or practice needs to hold some person or entity accountable. Until the day we can hold an LLM accountable, we need businesses like OpenEvidence and Harvey. That's not to say Anthropic/OpenAI/Google cannot do this, but there is more to this business than grounding LLMs and finding relevant answers.