Comment by 3rodents
19 hours ago
I agree that absolute deference to doctors is a mistake and that individuals should be encouraged to advocate for themselves (and doctors should be receptive to it) but I'm not so convinced in this specific case. Why do high blood sugar levels matter? Are there side effects associated with the alternative treatment? Has ChatGPT actually helped you in a meaningful way, or has the doctor's eventual relenting made you feel like progress has been made, even if that change is not meaningful?
In this context, I think of ChatGPT as a many-headed Redditor (after all, Reddit makes up a substantial share of ChatGPT's training data) and treat its answers as if they were a well-upvoted comment on Reddit. If you had come across a Reddit thread with the same information, would you have made the same push for a change?
There are quite a few subreddits for specific medical conditions that provide really good advice, and others where the users are losing their minds, egging each other on into weird and wacky beliefs. Doctors are far from perfect and are often wrong, but ChatGPT's sycophancy combined with a desperate patient's willingness to treat cancer with fruit feels like a bad mix. How do we avoid being egged on by ChatGPT into forcing doctors to provide bad care? That's not a rhetorical question; I'm curious about your thoughts as an advocate for ChatGPT.
> Why do high blood sugar levels matter?
I have type 2 diabetes.
> How do we avoid being egged on by ChatGPT into forcing doctors to provide bad care?
I don’t ask it leading questions. I ask, “These are my symptoms; give me some guidance,” instead of, “These are my symptoms, and I think I have cancer. Could I be right?” Avoiding leading questions keeps the response more objective.
I know what you mean and I would certainly not want to blindly "trust" AI chatbots with any kind of medical plan. But they are very helpful at giving you some threads to pull on for researching. I do think they tend a little toward giving you potentially catastrophic, worst-case possibilities, but that's a known effect from when people were using Google and WebMD as well.
Yeah, and "I found one paper that says X" is very weak evidence even if you're correctly interpreting X and the specific context in which the paper says it.
> Why do high blood sugar levels matter?
Are you asking why a side effect that is an entire health problem in its own right is a problem? Especially when there is an alternative that doesn’t cause it?
Side effects do not exist in isolation. High blood sugar is not a problem if the medication causing it is solving a much bigger health issue, or if it is a lesser evil than the alternative's side effects. If medication A causes high blood sugar but medication B has a chance of causing blood clots, medication A is the obvious choice. If a patient gets it into their head that their high blood sugar is a problem to solve, ChatGPT is going to reinforce that, whereas a doctor will have a much better understanding of the tradeoffs for that patient. It's the doctor's version of the XY problem.
I have type 2 diabetes, so blood sugar levels are a concern for me. I switched to a medication that was equally benign, and my blood sugar decreased along with my blood pressure. I don’t know why you assume that the medication I switched to might have worse side effects; that wasn’t a tradeoff I had to make given the options I was presented with.
Look, anyone can argue hypotheticals. But if you read the comment being discussed, it's clear your proposed hypotheses don't apply: the doctor acknowledged the side effect and changed medications, which brought relief. Now, if the new medication has a more serious side effect, the doctor (or ChatGPT) should mention and/or monitor for it, but the parent has not said that is the case (yet). As such, we do not need to invent any scenarios.