Comment by dnw

6 hours ago

It is a little more than semantic search. Their value prop is curation of trusted medical sources plus network effects from selling directly to doctors.

I believe frontier labs have no option but to go into verticals (models are getting commoditized, and the capability overhang is real and hard to overcome at scale); however, they can only go into so many verticals.

> Their value prop is curation of trusted medical sources

Interesting. Why wouldn't LLM-based search provide the same thing? Just ask it to "use only trusted sources".

  • They're building a moat with data. They're building their own datasets of trusted sources, using their own teams of physicians and researchers. They've got hundreds of thousands of physicians asking millions of questions every day. None of the labs have this sort of data coming in, or this sort of focus on such a valuable niche.

    • > They're building their own datasets of trusted sources, using their own teams of physicians and researchers.

      Oh, so they're not just helping with search but also curating the data.

      > They've got hundreds of thousands of physicians asking millions of questions everyday. None of the labs have this sort of data coming in or this sort of focus on such a valuable niche

      I don't take this too seriously, because lots of physicians already use ChatGPT.

  • Yes, they can. We have gotten better at grounding LLMs to specific sources and having them produce accurate citations (rough sketch at the end of this comment). That goes some distance toward establishing trust.

    There is trust and then there is accountability.

    At the end of the day, a business or practice needs to be able to hold some person or entity accountable. Until the day we can hold an LLM accountable, we need businesses like OpenEvidence and Harvey. That's not to say Anthropic/OpenAI/Google cannot do this, but there is more to this business than grounding LLMs and finding relevant answers.
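
    To be concrete about the grounding pattern I mean: retrieve passages from a curated corpus, pass only those to the model, and require it to cite them. This is a minimal sketch, assuming an OpenAI-style chat API; the retriever `search_trusted_corpus` and the model name are hypothetical stand-ins, not anything OpenEvidence actually uses.

    ```python
    # Sketch: ground an LLM to a curated corpus and force inline citations.
    # Assumptions: `search_trusted_corpus` is a hypothetical retriever over a
    # physician-curated index; the model name is illustrative only.
    from openai import OpenAI

    client = OpenAI()

    def search_trusted_corpus(query: str, k: int = 5) -> list[dict]:
        """Hypothetical retriever. Returns dicts like
        {"id": "...", "title": "...", "text": "..."}."""
        raise NotImplementedError("stand-in for a curated-source index")

    def grounded_answer(question: str) -> str:
        passages = search_trusted_corpus(question)
        context = "\n\n".join(
            f"[{p['id']}] {p['title']}\n{p['text']}" for p in passages
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system", "content": (
                    "Answer using ONLY the provided sources. Cite each claim "
                    "with the bracketed source id. If the sources do not cover "
                    "the question, say so instead of guessing."
                )},
                {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content
    ```

    The hard part is everything outside the function: deciding what goes into the trusted index and keeping it current, which is exactly the curation work being discussed above.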