Comment by perching_aix

8 hours ago

They're saying that it successfully filtered out the bit where the author told people to overdose by 40000x. I guess that's the value.

There would be value if it pointed out the mistake instead of hallucinating a correction.
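For context on the 40,000x figure: assuming the error was a vitamin D dose written in IU being read as the same number of milligrams (1 IU of vitamin D is 0.025 µg), the overstatement works out to exactly that factor. A rough sketch, with a hypothetical 5000 IU dose standing in for whatever the article actually said:

    # Sanity check of the 40,000x figure, assuming the mistake was reading
    # a vitamin D dose written in IU as the same number in mg.
    IU_TO_MG = 0.000025                   # 1 IU of vitamin D = 0.025 ug = 0.000025 mg

    stated_iu = 5000                      # hypothetical dose as written, in IU
    intended_mg = stated_iu * IU_TO_MG    # 0.125 mg, what the author meant
    mistaken_mg = stated_iu               # 5000 mg, if the units are confused

    print(mistaken_mg / intended_mg)      # 40000.0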

  • GPT5.2 does catch it and warns not to trust anything else in the post, saying no competent person would confuse these units.

    I wonder if even the simplest LLM would make this particular mistake.

  • IU was used correctly everywhere else in the article except that one place with the mistake, so the LLM didn't hallucinate a correction; it correctly summarized what the bulk of the article actually said.