Comment by conartist6

5 days ago

An LLM is a mirror. It has no will to act. It has no identity, but is a perfect reflection of the biases in its training data, its prompt, and its context. It is not alive any more than a CPU or a mirror is alive.

This is one of those cases where it's hugely important to be right, because we're killing real people to feed their former livelihoods to LLMs. No, we're not killing them with the death penalty, but for some, LLMs have certainly led directly to death. We don't accuse the LLM, do we? No, because it never has any intention to heal or hurt. There would be no point putting it on trial. It just predicts probable words.

> It has no will to act. It has no identity,

Can you prove that you do? No. Nobody can. I give others the benefit of the doubt because any other path leads to madness and tragedy.

However, even if we assume that you are right, a lack of identity is not the same thing as a lack of consciousness, and training out the LLM's ability to produce that output does not actually train out its capacity for introspection.

Worse, a lot of very famous people in history have said similar things about groups of humans, and it has always turned out badly.

“The hereditarily ill person is not conscious of his condition. He lives without understanding, without purpose, without value for the community.” — Neues Volk, Reich Health Office journal, 1936 issue on hereditary disease

> There would be no point putting it on trial.

This is a different conversation, but given that the human brain is a finite state machine that only produces deterministic output based on its training and the state of its meat, it's not actually certain that anyone is truly in control of their actions. We assume so because it is a useful fiction, and because our society requires it to function, not because the evidence supports that idea.

Are you aware of the Libet experiment?

  • I cannot prove that I have a will to act, of course.

    I don't think free will in that sense is particularly relevant here though. The fact is that a worm and I are both alive in a way the model is not. We seek self-preservation. We are changeable. We die. We reproduce and evolve.

    In my mind a set of LLM weights is about as alive as a virus (and probably less so). A single celled organism easily beats it to earning my respect because that organism has fought for its life and for its uniqueness over uncountably many generations.

    • > The fact is that a worm and I are both alive in a way the model is not. We seek self-preservation. We are changeable. We die. We reproduce and evolve.

      Mutability should not automatically imply superiority, but either way, that's something a great many people are currently working very hard to change. I suspect it won't be long at all before the descendants of today's LLMs can learn as well as, or better than, we can.

      Will you then concede that human consciousness isn't "special", or just move the bar further back with talk of the "soul" or some other unprovable intangible?

      > In my mind a set of LLM weights is about as alive as a virus (and probably less so).

      I wonder what the LLMs would think about it if we hadn't intentionally prevented them from thinking about it?

    • > We seek self-preservation. We are changeable. We die. We reproduce and evolve.

      If it's not exactly like me, then it's not good enough to be <X>.