Comment by tgv
8 hours ago
I don't quite get why you feel so strongly about it that this should be a deal breaker for everyone. It's really much better than a wrong answer, for everyone.
> It's really much better than a wrong answer
That is a bad premise and a false dichotomy: most medical questions are simple, with well-known standard answers. ChatGPT and Gemini answer such questions correctly, and even catch glaring omissions by doctors, without having to look up information.
As for the medical questions that are not simple, the ones that do require looking up information, the model should in principle be able to respond that it does not know the answer when that is truthfully the case, implying that the answer, or a simple extrapolation of it, was not in its training data.