Comment by aakresearch
7 hours ago
Oops... I am deeply sorry, and thank you for the heads up! It seems I've committed the very cardinal sin I am usually quick to point out in others - rushing to reply without comprehending the full message. (Meta-oops: I realized how LLM-ish that sounds. Quick, reboot before my cover is blown!)
I happen to believe that the flaw being discussed IS fundamental and inherent in the design and architecture of LLMs - this is why I always put "AI" in scare quotes. I've spoken about it in some of my other comments, namely this one: https://news.ycombinator.com/item?id=48046333. And like you, I, too, hope that I am wrong about the hype and its eventual clash with reality, but I'm not holding my breath.
No problem - this is all very human and understandable. You don't sound LLM-ish; you sound like someone who cares a lot about this.
Which both of us do.