Comment by emil-lp

20 hours ago

    Even your own AI model doesn't buy your propaganda

Let's not pretend the output of LLMs has any meaningful value when it comes to facts, especially not for recent events.

The LLM was given Anthropic's paper and asked "Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate". So the question was not about facts or recent events; it was essentially a summarization task, which LLMs should be good at. But the question was specifically about China, whereas TFA's criticism of the paper is broader.

There are obvious problems with wasting time and sending people off the wrong path, but if an LLM raises a good point, isn't it still a good point?

  • A broken analog clock is accurate twice a day despite being of zero use. If someone were to sell the broken clock as useful because it "accurately returns the time at least twice every day", they would ultimately be causing harm to the consumer.

    • Depends on what you need the clock for. For example, if it's to serve as an adjustable sign indicating e.g. the closing time of a store, a broken one does the trick just fine :)

      In other words: Use the right tool for the right job.

Even if this assertion about LLMs is true, your response does not address the real issue. Where is the evidence?