Comment by layer8

13 hours ago

The thing is, the LLM mostly just states what it did, and doesn't really explain it (other than "I didn't understand what I was doing before doing it. I didn't read Railway's docs on volume behavior across environments."). Humans are capable of more introspection, and usually have more awareness of what leads them to do (or fail to do) things.

LLMs lack layers of awareness that humans have. I wonder whether achieving comparable awareness in LLMs would require significantly more compute, and/or would significantly slow them down.

Sperry's split-brain experiments suggest we don't actually have that awareness either, but think we do, because our brains will make up an explanation on the spot.