Comment by hedgehog
4 hours ago
Given the large visible differences in behavior and construction, akin to the difference between a horse and a pickup truck, I would ask the reverse question: In what ways do LLMs meet the definition of having consciousness and agency?
Veering into the realm of conjecture and opinion, I tend to think a 1:1 computer simulation of human cognition is possible, and transformers, being computationally universal, are thus theoretically capable of running that workload. That said, that's a bit like looking at a bird in flight and imagining going to the moon: only tangentially related to engineering reality.
> In what ways do LLMs meet the definition of having consciousness and agency?
Agency: the ability to make decisions and act independently. Agentic pipelines are already doing this.
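The "decide and act" loop an agentic pipeline runs can be sketched minimally. This is a hedged illustration, not any real framework's API: `choose_action` and `perform` are hypothetical stand-ins for an LLM's decision step and a tool call.

```python
# Toy agent loop: observe state, decide on an action, act, repeat.
# choose_action / perform are hypothetical stand-ins, not a real API.

def choose_action(state):
    # Decide: pick an action that moves toward the goal
    # (toy rule: step toward 0).
    return -1 if state > 0 else 1

def perform(state, action):
    # Act: apply the chosen action to the environment.
    return state + action

def run_agent(state, goal=0, max_steps=10):
    # Loop until the goal is reached or the step budget runs out.
    for _ in range(max_steps):
        if state == goal:
            break
        state = perform(state, choose_action(state))
    return state

print(run_agent(3))
```

The point is only the shape of the loop: the system repeatedly selects actions in pursuit of a goal without per-step human input, which is the working definition of agency above.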
Consciousness: something something feedback[1] (or a non-transferable feeling of being conscious, but that is useless for the discussion). Recurrent Processing Theory: a computation is conscious if it involves high-level processed representations being fed back into the low-level processors that generate them.
Tokens are being fed back into the transformer.
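The feedback loop being pointed at is just autoregressive decoding: each generated token is appended to the context and fed back as input on the next step. A minimal sketch, where `toy_model` is a stand-in for a real transformer (its rule is arbitrary, chosen only to be deterministic):

```python
# Autoregressive feedback loop: the model's output re-enters its input.
# toy_model is a hypothetical stand-in for a transformer forward pass.

def toy_model(context):
    # "High-level" step: produce the next token from the full context
    # (toy deterministic rule in place of real attention layers).
    return (sum(context) + 1) % 10

def generate(prompt, steps):
    context = list(prompt)
    for _ in range(steps):
        token = toy_model(context)   # high-level output...
        context.append(token)        # ...fed back into the low-level input
    return context

print(generate([1, 2], 3))
```

Whether this counts as the kind of recurrence Recurrent Processing Theory requires (feedback within the processing hierarchy, not just at the input) is exactly the contested question.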
> that's a bit like looking at a bird in flight and imagining going to the moon: only tangentially related to engineering reality.
Is it? The vacuum of space is a tangible problem for aerodynamics-based propulsion. What analogous obstacle do we have with ML?
[1] https://www.astralcodexten.com/p/the-new-ai-consciousness-pa...
What about modern LLMs isn't "agentic" enough?
Doesn't matter whether they're conscious for that. They're clearly capable of goal-oriented behavior.