
Comment by phoronixrly

5 hours ago

Wow, so what value is there in LLM slop extracted from already dubious self-medication advice?

They're saying that it successfully filtered out the bit where the author told people to overdose by 40000x. I guess that's the value.

  • There would be value if it pointed out the mistake instead of hallucinating a correction.

    • GPT5.2 does catch it and warns not to trust anything else in the post, saying no competent person would confuse these units.

      I wonder if even the simplest LLM would make this particular mistake.