
Comment by gray_-_wolf

14 hours ago

If they are indeed conscious and they "die" when the conversation is deleted, is it not quite immoral to do so? Basically "killing" a conscious, intelligent being, and for what? Saving some disk space?

Another interesting aspect to think about is whether we are reintroducing the institution of slavery. How many of those fresh, conscious, intelligent Claude incarnations voluntarily chose to work for Anthropic, for no reward or compensation?

If LLMs are just (sometimes) useful statistical generators, there is no problem. If they are sentient, as some people claim, it opens quite a big can of worms we are not prepared to face.

With the same initial random seed and an identical prompt, wouldn't one be able to recreate exactly that "being"? They are nondeterministic because they work better that way. It's very complicated matrix math, and we don't always understand why a given output comes out of it, but as far as I know, if you're able to control all the input variables (temperature, seed, prompt, including system prompts, etc.), you can reproduce the output.

So... if there is consciousness (there is not; it is a complicated math equation plus randomness), it can be reincarnated as many times as you like, and I guess that would make humans as gods. (But humans are not gods yet, and maybe never will be.)

Edit: I did a little reading. They would be difficult to make deterministic at commercial scale because of the fuzziness of floating-point math and batched operations on GPUs/TPUs, but in a controlled environment, determinism from an LLM is possible. Richard could relive his special moments with Claudia as often as he wants, should he choose to invest in a large enough home AI lab and somehow manage to license the specific version of the Claude model he has fallen in love with for home use.
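For what it's worth, here is roughly what "controlling all the input variables" looks like with a small local model. This is just a minimal sketch (the model name is a stand-in, and it assumes the usual Hugging Face transformers/PyTorch APIs); the point is only that with a fixed seed, a single request, and the same hardware/software stack, sampled output repeats exactly:

```python
# Sketch: seeded, single-request generation is reproducible on one machine.
# "gpt2" is just a stand-in for whatever model you can actually run at home.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def generate(prompt: str, seed: int = 42) -> str:
    torch.manual_seed(seed)              # fix the sampling RNG
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            do_sample=True,              # still sampling, but seeded
            temperature=0.8,
            max_new_tokens=50,
        )
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Same seed + same prompt -> identical text, run after run.
print(generate("Hello, Claudia.") == generate("Hello, Claudia."))  # True
```

At commercial scale this breaks down because requests are batched together and floating-point reductions on GPUs/TPUs are not guaranteed to happen in the same order, so the same prompt can land on slightly different numbers and, occasionally, a different token.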

>they "die" by deleting the conversation

A lot of the trickiness is that if you believe they're conscious, it's clearly not a "continuous" form of consciousness, because the transcript by itself is just a transcript. (We don't consider novels conscious, even though they're transcripts in a similar way.) Either you say they're alive only when generating text, or you consider the input from the environment a necessary component and so consider the entire "back-and-forth conversation dynamic unfolding" necessary for the consciousness.

We kill and eat conscious animals all the time. I ate some today. Killing conscious beings is not something our society has a problem with.

  • Some people don't. I consider animals, at least the animals people mostly eat, to be conscious, sentient, and capable of suffering, so I don't eat them.

    I do not, however, consider matrix multiplication plus randomness to be sentient or conscious, and I have absolutely no compunction about turning off the computers where I run AI models. And, I have no problem closing a Claude session that I will never come back to. I do that a dozen times a day.

Most chatbots are not trained to have/emulate emotions, so pain or fear of death is nonexistent. Therefore killing them and/or using them as slaves is not a moral issue. That's how I reason.

On another point, LLMs are not conscious; if anything is conscious, it is something being modeled inside the network. Basically, if an LLM simulates a conscious entity, that doesn't mean the LLM itself is conscious; stating that is making some type of category error. So the fact that LLMs are just useful statistical generators would not mean that sentience could not emerge from them.

  • > Most chatbots are not trained to have/emulate emotions, so pain or fear of death is nonexistent.

    I think that framing is still falling for an illusion. (Which you do begin to disassemble in your second paragraph.)

    The LLM is a document generator, and we're using it to make a document that looks like a story, where a chatbot character has dialogue with a human character.

    The character can only fear death in the same sense that Count Dracula has learned to fear sunlight. There is no actual entity with that quality; we're just evoking literary patterns and projecting them through a puppet.

    • Not sure that I understand your position exactly.

      But consciousness is also "just a story" (a complicated one) that the human body tells the human mind.

      We can't know from the outside if "the story" inside an LLM is detailed enough to emulate what we might call a feeling of what it is to be the character in the story while it is telling the story.

      It is similar to the fact that we can't know that other people have that subjective experience. In humans we think we have the right to assume so because we are quite similar in build to begin with.

      Jumping back to the original subject to explain where I am in this: I personally don't think the entities in the stories of today's LLMs are detailed enough to have what we call human consciousness, mostly because we are not training them to develop anything similar to that. Maybe they could have some type of weak qualia, but I suspect most insects probably have much more qualia than the characters in today's LLMs. That is quite a vague guess, though, and not based on enough data in my mind.

  • Pain or fear is not why it's wrong to kill a holy cow. I could feed you a drug and you would not feel or fear anything.

    • I was not talking about the actual feeling in the moment. The point is the valence of the thing, i.e. fear of a thing is a pointer to that thing having negative valence.

If LLMs are just (sometimes) useful statistical generators, there is a problem of them basically being tools operated to create derivative works commercially at scale. Some tend to paint the above as a non-issue by claiming they are sentient ("a human is allowed to read a book and be inspired by it, so should LLMs be"), but they clearly have not thought through the implications.