Comment by jayd16
19 hours ago
You might as well be asking a tape recorder why it said something. Why are we confusing the situation with nonsensical comparisons?
There is no internal monologue with which to have introspection (beyond what the AI companies choose to hide as a matter of UX or what have you). There is no "I was feeling upset when I said/did that" unless it's in the context.
There is no ghost in the machine that we cannot see before asking.
Even if a model is able to come up with a narrative, it's simply that: looking at the log and telling you a story.
Sperry's experiments make it quite clear that the comparison is not nonsensical: humans can't reliably tell why we do things either. Recognising that doesn't imbue AI with anything more; it points out that when we imply the gap is so huge, we often overestimate our own abilities.
Humans at least have a mental state that only they are privy to, to work from, and not just their words and actions. The LLM literally cannot possibly have a deeper insight into the root cause than the user, because it can only work from the information that the user has access to.
> Humans at least have a mental state that only they are privy to, to work from
Maybe. How do you tell? What would you expect to be different if they didn't?
> The LLM literally cannot possibly have a deeper insight into the root cause than the user, because it can only work from the information that the user has access to.
Insight is not solely a function of available input information. Arguably, being able to search for and extract the relevant parts is a far more important component of having insights.
It is nonsensical because you're simply bringing in comparisons without anything linking the two. You might as well be talking about how oranges and bicycles think; that is just as relevant as how humans think in this discussion.
In fact, talking about "thinking" at all is already the wrong direction to go down when trying to triage an incident like this. "Do not anthropomorphize the lawnmower" applies to AI as much as to Larry Ellison.
The thing linking the two is that neither are able to accurately introspect and explain the actual reason why they made a decision.
If thinking is the wrong direction to go down, then it is also the wrong direction to go down when talking about humans.
Slight pushback - I think there's still a lot more consistency and coherence in a human's recollection of their motives than in an LLM's.
Sometimes I think we're too eager to compare ourselves to them.
We have pretty good evidence to support that human recollection includes the right data to be able to ascertain why we actually did something.