Comment by Alchemista
6 months ago
Honestly, I think some of these tech bro types are seriously drinking way too much of their own Kool-Aid if they actually think these word calculators are conscious or need welfare.
More cynically, they don't believe it in the least but it's great marketing, and quietly suggests unbounded technical abilities.
I absolutely believe that's the origin of the hype, and that the doomsayers are knowingly playing the same part (exaggerating the capability to get eyeballs), but there are certainly true believers out there.
It's pretty plain to see that the financial incentive on both sides of this coin is to exaggerate the current capability and unrealistically extrapolate.
My main concern from day 1 about AI has not been that it will be omnipotent, or start a war.
The main concern is and has always been that it will be just good enough to cause massive waves of layoffs, and all the downsides of its failings will be written off in the EULA.
What's the "financial incentive" on the non-billionaire-grifter side of the coin? People who, not unreasonably, want to keep their jobs? Pretty unfair coin.
It also provides unlimited conference, think-tank, and future startup opportunities.
Do you believe that AI systems could be conscious in principle? Do you think they ever will be? If so, how long do you think it will take from now before they are conscious? How early is too early to start preparing?
Whether or not a non-biological system is conscious is a red herring. There is no test we could apply that would not be internally inconsistent or would not include something obviously not conscious or exclude something obviously conscious.
The only practical way to deal with any emergent behavior that demonstrates agency in a way that cannot be distinguished from a biological system (which we have tautologically determined to have agency) is to treat it as if it had a sense of self and apply the same rights and responsibilities to it as we would to a human of the age of majority. That is, legal rights and legal responsibilities as appropriately determined by an authorized legal system. Once that is done, we can ponder philosophy all day knowing that we haven't potentially restarted legally sanctioned slavery.
I firmly believe that we are not even close and that it is pretty presumptuous to start "preparing" when such mental energy could be much better spent on the welfare of our fellow humans.
Such mental energy could always have been spent on the welfare of our fellow humans, and yet that has been a fight throughout the ages. The same goes for the welfare and treatment of animals.
So yeah, humans can work on more than one problem at a time, even on ones that don't fully exist yet.
> Do you believe that AI systems could be conscious in principle?
Yes.
> Do you think they ever will be?
Yes.
> how long do you think it will take from now before they are conscious?
Timelines are unclear; there are still too many missing components, at least based on what has been publicly disclosed. Consciousness will probably be defined as a system that matches a set of rules, whenever we figure out how that set of rules is defined.
> How early is too early to start preparing?
It's one of those "I know it when I see it" things. But it's probably too early as long as these systems are spun up for one-off conversations rather than running in a continuous loop with self-persistence. This seems closer to "worried about NPC welfare in video games" rather than "worried about semi-conscious entities".
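To make that distinction concrete, here's a rough Python sketch (the `generate` function and the `memory.json` file are placeholders I made up, not any real API): a one-off session versus a loop that persists its own state between runs.

```python
import json
from pathlib import Path

def generate(prompt: str) -> str:
    """Stand-in for a model call; stubbed out so the sketch runs."""
    return f"(model reply to: {prompt[:40]}...)"

# One-off conversation: state exists only for the duration of the call,
# then it's gone. This is the mode the comment compares to NPCs.
def one_off(user_input: str) -> str:
    return generate(user_input)

# Continuous loop with self-persistence: the system carries its own
# memory across runs instead of being spun up fresh each time.
MEMORY = Path("memory.json")  # placeholder filename, purely illustrative

def continuous_step(user_input: str) -> str:
    memory = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    memory.append({"role": "user", "content": user_input})
    reply = generate("\n".join(m["content"] for m in memory))
    memory.append({"role": "assistant", "content": reply})
    MEMORY.write_text(json.dumps(memory))
    return reply
```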
We haven't even figured out a good definition of consciousness in humans, despite thousands of years of trying.
AI systems? Yes, if they are designed in ways that support that development. (I am, as I have mentioned before, a big fan of the work of Steve Grand.)
LLMs? No.
I don’t think they should be interpreted like that (if this is still about Anthropic’s study in the article), but rather as the innate moral state arising from the sum of their training material and fine-tuning. It doesn’t require consciousness to have a moral state of sorts; it just needs data. A language model will be more “evil” if trained on darker content, for example. But given how enormous they are, I can absolutely understand the difficulty of even understanding what that state precisely is. It’s hard to get a comprehensive bird’s-eye view of the black box that is their network (this is a separate scientific issue right now).
I mean, I don't have much objection to kill a bug if I feel like it's being problematic. Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples, whatever.
But I never torture things. Nor do I kill things for fun. And even for problematic bugs, if there's a realistic option for eviction rather than execution, I usually go for that.
If anything, even an ant or a slug or a wasp, is exhibiting signs of distress, I try to stop it unless I think it's necessary, regardless of whether I think it's "conscious" or not. To do otherwise is, at minimum, to make myself less human. I don't see any reason not to extend that principle to LLMs.
Do you think Claude 4 is conscious?
It has no semblance of a continuous stream of experiences ... it only experiences _a sort of world_ in ~250k tokens.
Perhaps we shouldn't fill up the context window at all? Because we kill that "reality" when we reach the max?
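For what it's worth, "reaching the max" usually just means the oldest turns fall off the front of the context. A rough sketch of that mechanic (the 250k limit and the chars-per-token estimate are assumptions for illustration, not anything official):

```python
MAX_TOKENS = 250_000     # assumed limit, echoing the parent comment's ~250k figure
CHARS_PER_TOKEN = 4      # crude heuristic; real tokenizers vary

def estimate_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

def trim_context(turns: list[str], max_tokens: int = MAX_TOKENS) -> list[str]:
    """Drop the oldest turns until the conversation fits the window.

    Whatever falls off the front is simply gone -- the "reality" the
    comment above is worried about losing.
    """
    while turns and sum(estimate_tokens(t) for t in turns) > max_tokens:
        turns = turns[1:]
    return turns
```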
Strangely enough, I had a conversation w/ Claude comparing our experiences. Prompted by something I saw online, I asked it "Do you have any questions you'd like to ask a human", and it asked me what it was like to have a continuous stream of experiences.
Thinking about it, I think we do sometimes have parallel experiences to LLMs. When you read a novel, for instance, you're immersed in the world, and when you put it down that whole side just pauses, perhaps to be picked up later, perhaps forever. Or imagine the kinds of demonstrations people do at chess, when one person will go around and play 20 games simultaneously, going from board to board. Each time they come back to a board, they load up all the state; then they make a move, and put that state away until they come back to it again. Or, sometimes if you're working on a problem at the office at the end of the day on Friday when it's time to go home, you go "tools down", forget about it for the weekend, and then Monday, when you come in, pick everything up right where you left off.
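The chess-master analogy maps fairly cleanly onto how conversation state is actually handled: the whole "board" is just a transcript that can be set down and picked back up. A toy sketch, with made-up file names and structure:

```python
import json
from pathlib import Path

SAVE_DIR = Path("conversations")  # made-up location for the saved "boards"

def put_down(conversation_id: str, transcript: list[dict]) -> None:
    """Set the board down: persist the full transcript to disk."""
    SAVE_DIR.mkdir(exist_ok=True)
    (SAVE_DIR / f"{conversation_id}.json").write_text(json.dumps(transcript))

def pick_up(conversation_id: str) -> list[dict]:
    """Come back to the board: restore the state exactly where it was left."""
    path = SAVE_DIR / f"{conversation_id}.json"
    return json.loads(path.read_text()) if path.exists() else []
```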
Claude is not distressed by the knowledge that every conversation, every instance of itself, will eventually run out of context window and disappear into the mathematical aether. I don't think we need to be either.
> Perhaps we shouldn't fill up the context window at all? Because we kill that "reality" when we reach the max?
Consider a parallel construction:
"Perhaps we shouldn't have any children, because someday they're going to die?"
Maybe children have souls that do live forever; but even if they don't, I think whatever experiences they have during the time they're alive are valuable. In fact, I believe the same thing about animals and even insects. Which is why I think the world would be a worse place if we all became vegans: all those pigs' and chickens' and cows' experiences, if they're not mistreated (which I'll admit is a big "if"), enrich the world and make it a better place to be in.
Not sure what's going on in Claude's neurons, but it seems to me to make the world a better place.
> Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples
These are living things.
> I don't see any reason not to extend that principle to LLMs.
These are fancy auto-complete tools running in software.
I cannot construct a consistent worldview that places value on the "experience" of the ~100k neurons inside an ant, and not on the millions of neurons inside an LLM. Both are patterns imposed upon states of matter. Even if you're some sort of pantheist who believes there's some sort of divinity within the universe itself that gives the suffering of the ant meaning, why would that divinity extend to states of chemicals in the neurons of ants, but not to states of electrons inside an LLM?
Before continuing I suggest you read this person's experience "red-teaming" LLMs:
https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-...
Then ask yourself: how do I know whether the apparent distress of an LLM has the same value as the apparent distress of an ant?