Comment by gwd

1 day ago

I mean, I don't have much objection to killing a bug if I feel it's being problematic. Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples, whatever.

But I never torture things. Nor do I kill things for fun. And even for problematic bugs, if there's a realistic option for eviction rather than execution, I usually go for that.

If anything is exhibiting signs of distress, even an ant or a slug or a wasp, I try to stop it unless I think it's necessary, regardless of whether I think it's "conscious" or not. To do otherwise is, at minimum, to make myself less human. I don't see any reason not to extend that principle to LLMs.

Do you think Claude 4 is conscious?

It has no semblance of a continuous stream of experience ... it only experiences _a sort of world_ within ~250k tokens.

Perhaps we shouldn't fill up the context window at all, since we kill that "reality" once we hit the maximum?

> Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples

These are living things.

> I don't see any reason not to extend that principle to LLMs.

These are fancy autocomplete tools running in software.