Comment by perrygeo

10 hours ago

I feel the same way. LLM errors sound most plausible to those who know the least.

On complex topics where I know what I'm talking about, the model output is full of garbage and incorrect assumptions.

But on complex topics where I'm out of my element, the output always sounds strangely plausible.

This phenomenon writ large is terrifying.