Comment by dev_hugepages

8 hours ago

The problem is that humans use this as a coping mechanism for things they don't understand: I don't understand why the printer doesn't work, so I give it a mind of its own.

This is harmless for inconsequential stuff like a chair, but when it's an LLM, people should at least understand its behavior so they don't get trapped. That means not trusting it with advice meant for the user, or with anything it has no concept of, like time or self-introspection. (People ask the LLM after it acted, "Why did you delete my database?" when it has limited understanding of its own processing, so it falls back to, "You're right, I deleted the database. Here's what I did wrong: ... This is an irrecoverable mistake, blah, blah, blah...")

Don't humans have an extremely limited understanding of their own processing too? When you ask a human why they did something wrong, they usually confabulate an answer as well.

Human conscious introspection doesn't extend to actual processing; at best it's limited to recollecting the internal experience leading up to the point in question. That internal experience in turn represents only a tiny fraction of what actually happens in the brain, and only at a fairly abstract level.

"Anthropomorphizing" is a red herring. Humans understand themselves so insufficiently, they can't claim reasonably founded judgement either way. When you don't know what you're doing, you probably shouldn't be doing it.