Comment by kingstnap

10 hours ago

The root cause of what happened in that story was ultimately uncontextualized question asking.

Basically, this guy starts with a fringe conspiracy-theory belief that chloride ions are bad for you, asks ChatGPT about alternatives to chloride ions, and gets bromide, the next halide down the group.

We don't know this for certain, but when that video came out I tried it in ChatGPT, and this is what I could replicate about chloride/bromide recommendations: it doesn't suggest eating sodium bromide, but it will tell you bromide can fit where chloride is. The paper that discusses the case also mentions this.

> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do. [0]

Of course, this kind of poorly framed question runs you straight into the XY problem (and, loosely, the no-free-lunch theorem). It's as if I ask you, "What is the best metal? Name only one," and you suggest steel, and then I reveal that I actually needed to conduct electricity, so that was a terrible choice.

[0] https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260

Yes, I understand that context matters, and to be honest, the person's context wasn't really made public. Still, I think they trusted the AI source itself, and that confusion almost cost them their life.

> We don't know this for certain, but when that video came out I tried it in ChatGPT, and this is what I could replicate about chloride/bromide recommendations: it doesn't suggest eating sodium bromide, but it will tell you bromide can fit where chloride is. The paper that discusses the case also mentions this.

From the video I watched, what I can gather is that the chatbot somehow conflated chloride and bromide in the context of washing-machine-related tasks. That being said, AIs are still sycophantic, and we all probably know this.

> Basically this guy starts with this fringe conspiracy theory belief that chloride ions are bad for you

I still feel like the AI/LLM definitely gave in to that conspiracy rhetoric, and the guy took it as further proof and got even more convinced.

Of course he had a delusional theory in the first place, but I still believe partial blame can be placed on the AI, and this is actually the crux of the argument: don't rely on AI sources alone and treat them as gospel.

They are based on scraping public resources, which can be wrong. We all saw the countless screenshots floating around the internet where Google's AI search feature gave unhinged answers; I don't use Google, so I don't know if it still does, but for a time it definitely did.

This is actually what I think the grandparent comment is getting at regarding data poisoning, in their own manner, or at least it brings that nuance into the discussion.