
Comment by okamiueru

21 hours ago

I think AI is beneficial for about 1% of what people think it is good for.

The remaining 99% has become a significant challenge to the greatest human achievement in the distribution of knowledge.

If people used LLMs knowing that all output is statistical garbage made to seem plausible (i.e. "hallucinations"), and that it just sometimes overlaps with reality, it would be a lot less dangerous.

There is not a single case of LLM use that has led to a news story that isn't handily explained by conflating a BS-generator with a fact-machine.

Does this sound like I'm saying LLMs are bad? Well, in every single case where you need factual information, they are not only bad, they are dangerous and likely irresponsible to use.

But there are a lot of great uses when you don't need facts, or when simply knowing it isn't producing facts makes it useful. In most of these cases, you already know the facts yourself, and the LLM is making the draft: the mundane, statistically inferable glue and structure. So, what are these cases?

- Directing attention in chaos: suggesting where the focus of a human expert is needed (useful in many areas, e.g. medicine, software development).
- Media content: music, audio (FX, speech), 3D/2D art, assets, and operations.
- Text processing: drafting, contextual transformation, etc.

Don't trust AI on whether the mushroom you picked is safe to eat. But use its 100%-confident-sounding answer about which mushroom it is as a starting point for looking up the information. Just make sure the book about mushrooms was written before LLMs took off....