Comment by Retr0id
15 days ago
Defending against falling into these sorts of thought-traps (aside from "just don't be delusional") seems to rely on knowing when you're engaging with an LLM, so you can be more sceptical of its claims, limit your time spent with it, or both.
This worries me, since there's a growing amount of undisclosed (and increasingly hard to detect) LLM output in the infosphere.
Real-time chat is probably the worst for it, but I already see humans copy-pasting LLM output at each other in discussion forums, etc.