Comment by Imustaskforhelp
11 hours ago
xD
I am not even kidding, but there was a guy who read on Twitter that table salt, aka sodium chloride, is "bad for health" and that medical studies recommend reducing its consumption if that's the case.
But he ended up asking ChatGPT, which somehow recommended sodium bromide instead of sodium chloride, and it really ended up giving him hallucinations and so many other problems that the list goes on.
I found this from a video, definitely worth a watch:
https://www.youtube.com/watch?v=yftBiNu0ZNU
A man asked AI for health advice and it cooked every brain cell
Table salt is dangerous if you take in far too much of it, and also if you take in too little. Water is the same way, so moderation is the key.
Everything in moderation.
The root cause of what happened in that story was ultimately uncontextualized question asking.
Basically this guy starts with a fringe conspiracy-theory belief that chloride ions are bad for you, asks ChatGPT about alternatives to chloride ions, and gets bromide, the next halogen down the group.
We don't know this for certain, but when that video came out I tried it in ChatGPT, and this is what I could replicate about the chloride/bromide recommendation. It doesn't suggest eating sodium bromide, but it will tell you bromide can fit where chloride does. The paper that discusses the case also mentions this.
> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do. [0]
Of course this kind of bad question asking runs you straight into the XY problem. It's like if I ask you "what is the best metal? Name only one." and you suggest "steel", and then I reveal that I actually needed to conduct electricity, so that was a terrible option.
[0] https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260
Yes, I understand that context matters, and to be honest the person's context wasn't really made public, but I still think they trusted the AI sources themselves, and that confusion almost cost them their life.
> We don't know this for certain, but when that video came out I tried it in ChatGPT, and this is what I could replicate about the chloride/bromide recommendation. It doesn't suggest eating sodium bromide, but it will tell you bromide can fit where chloride does. The paper that discusses the case also mentions this.
From the video I watched, what I can gather is that the chatbot somehow suggested bromide in place of chloride in a cleaning context rather than a dietary one. That said, AIs are still sycophantic, and we all probably know this.
> Basically this guy starts with this fringe conspiracy theory belief that chloride ions are bad for you
I still feel like the AI/LLM definitely gave in to that conspiracy rhetoric, and the guy took it as further proof and got even more convinced.
Of course he held a deluded theory in the first place, but I still believe partial blame can be placed on the AI, and this is actually the crux of the argument: don't read only AI sources and treat them as gospel.
They are based on scraping public resources, which could be wrong (we all saw the countless screenshots floating around the internet where Google Search's AI feature gave unhinged answers; I don't use Google, so I don't know if it still does, but for a time it definitely did).
This is actually, I think, what the grandparent comment was getting at regarding the poisoning of data, or at least it brings that nuance into the discussion.