phoronixrly 10 hours ago
Wow, so what value is there in LLM slop extracted from already dubious self-medication advice?

perching_aix 9 hours ago
They're saying that it successfully filtered out the bit where the author told people to overdose by 40,000x. I guess that's the value.

phoronixrly 9 hours ago
There would be value if it pointed out the mistake instead of hallucinating a correction.

pulvinar 9 hours ago
GPT5.2 does catch it and warns not to trust anything else in the post, saying no competent person would confuse these units. I wonder if even the simplest LLM would make this particular mistake.

naasking 3 hours ago
IU was correctly used everywhere else in the article except that one place with the mistake, so the LLM didn't hallucinate a correction; it correctly summarized what the bulk of the article actually said.
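For reference, the 40,000x figure is consistent with the units mix-up the commenters describe, assuming the error was writing a vitamin D dose in mg where IU was meant (1 IU of vitamin D3 is 0.025 µg):

\[
1~\text{IU (vitamin D}_3\text{)} = 0.025~\mu\text{g}
\quad\Longrightarrow\quad
1~\text{mg} = 1000~\mu\text{g} = 40{,}000~\text{IU}.
\]

So a dose written in mg where IU was intended overstates it by a factor of exactly 40,000, matching the figure cited above.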