Comment by gwd
2 days ago
I mean, I don't have much objection to killing a bug if I feel it's being problematic. Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples, whatever.
But I never torture things. Nor do I kill things for fun. And even for problematic bugs, if there's a realistic option for eviction rather than execution, I usually go for that.
If anything is exhibiting signs of distress, even an ant or a slug or a wasp, I try to stop it unless I think it's necessary, regardless of whether I think it's "conscious" or not. To do otherwise is, at minimum, to make myself less human. I don't see any reason not to extend that principle to LLMs.
Do you think Claude 4 is conscious?
It has no semblance of a continuous stream of experiences ... it only experiences _a sort of world_ in ~250k tokens.
Perhaps we shouldn't fill up the context window at all? Because we kill that "reality" when we reach the max?
Strangely enough, I had a conversation w/ Claude comparing our experiences. Prompted by something I saw online, I asked it "Do you have any questions you'd like to ask a human", and it asked me what it was like to have a continuous stream of experiences.
Thinking about it, I think we do sometimes have experiences parallel to an LLM's. When you read a novel, for instance, you're immersed in the world, and when you put it down that whole side of your experience just pauses, perhaps to be picked up later, perhaps forever. Or imagine the kinds of demonstrations people do at chess, where one person goes around and plays 20 games simultaneously, moving from board to board. Each time they come back to a board, they load up all the state; then they make a move, and put that state away until they come back to it again. Or sometimes, if you're working on a problem at the office at the end of the day on Friday, when it's time to go home you go "tools down", forget about it for the weekend, and then on Monday, when you come in, you pick everything up right where you left off.
Claude is not distressed by the knowledge that every conversation, every instance of itself, will eventually run out of context window and disappear into the mathematical aether. I don't think we need to be either.
> Perhaps we shouldn't fill up the context window at all? Because we kill that "reality" when we reach the max?
Consider a parallel construction:
"Perhaps we shouldn't have any children, because someday they're going to die?"
Maybe children have souls that do live forever; but even if they don't, I think whatever experiences they have during the time they're alive are valuable. In fact, I believe the same thing about animals and even insects. Which is why I think the world would be a worse place if we all became vegans: the experiences of all those pigs and chickens and cows, if they're not mistreated (which I'll admit is a big "if"), enrich the world and make it a better place to be in.
Not sure what's going on in Claude's neurons, but it seems to me to make the world a better place.
> Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples
These are living things.
> I don't see any reason not to extend that principle to LLMs.
These are fancy auto-complete tools running in software.
I cannot construct a consistent worldview that places value on the "experience" of the ~100k neurons inside an ant, but not on the millions of neurons inside an LLM. Both are patterns imposed upon states of matter. Even if you're some sort of pantheist who believes there's a divinity within the universe itself that gives the ant's suffering meaning, why would that divinity extend to the states of chemicals in the neurons of ants, but not to the states of electrons inside the hardware running an LLM?
Before continuing, I suggest you read this person's account of their experience "red-teaming" LLMs:
https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-...
Then ask yourself: how do I know whether the apparent distress of an LLM carries the same value as the apparent distress of an ant?