
Comment by Alchemista

1 day ago

Honestly, I think some of these tech bro types are seriously drinking way too much of their own Kool-Aid if they actually think these word calculators are conscious/need welfare.

More cynically, they don't believe it in the least, but it's great marketing, and it quietly suggests unbounded technical abilities.

  • It also provides unlimited conference, think-tank, and future startup opportunities.

  • I absolutely believe that's the origin of the hype, and that the doomsayers are knowingly playing the same part (exaggerating the capability to get eyeballs), but there are certainly true believers out there.

    It's pretty plain to see that the financial incentive on both sides of this coin is to exaggerate the current capability and unrealistically extrapolate.

    • My main concern about AI from day 1 has not been that it will be omnipotent or start a war.

      The main concern is and has always been that it will be just good enough to cause massive waves of layoffs, and all the downsides of its failings will be written off in the EULA.

      What's the "financial incentive" on the non-billionaire-grifter side of the coin? People who not unreasonably want to keep their jobs? Pretty unfair coin.

Do you believe that AI systems could be conscious in principle? Do you think they ever will be? If so, how long do you think it will take from now before they are conscious? How early is too early to start preparing?

  • I firmly believe that we are not even close and that it is pretty presumptuous to start "preparing" when such mental energy could be much better spent on the welfare of our fellow humans.

    • Such mental energy could always have been spent on the welfare of our fellow humans, and yet this has been a fight throughout the ages. The same goes for the welfare and treatment of animals.

      So yeah, humans can work on more than one problem at a time, even ones that don't fully exist yet.

  • > Do you believe that AI systems could be conscious in principle?

    Yes.

    > Do you think they ever will be?

    Yes.

    > how long do you think it will take from now before they are conscious?

    Timelines are unclear; there are still too many missing components, at least based on what has been publicly disclosed. Consciousness will probably be defined as a system that matches a set of rules, whenever we figure out how that set of rules is defined.

    > How early is too early to start preparing?

    It's one of those "I know it when I see it" things. But it's probably too early as long as these systems are spun up for one-off conversations rather than running in a continuous loop with self-persistence. This seems closer to "worried about NPC welfare in video games" rather than "worried about semi-conscious entities".

    • We haven't even figured out a good definition of consciousness in humans, despite thousands of years of trying.

  • Whether or not a non-biological system is conscious is a red herring. There is no test we could apply that would not be internally inconsistent or would not include something obviously not conscious or exclude something obviously conscious.

    The only practical way to deal with any emergent behavior that demonstrates agency in a way that cannot be distinguished from a biological system (which we have tautologically determined to have agency) is to treat it as if it had a sense of self and to apply the same rights and responsibilities to it as we would to a human of the age of majority. That is, legal rights and legal responsibilities as appropriately determined by an authorized legal system. Once that is done, we can ponder philosophy all day knowing that we haven't potentially restarted legally sanctioned slavery.

  • AI systems? Yes, if they are designed in ways that support that development. (I am, as I have mentioned before, a big fan of the work of Steve Grand.)

    LLMs? No.

I don't think they should be interpreted like that (if this is still about Anthropic's study in the article), but rather as the innate moral state that emerges from the sum of their training material and fine-tuning. It doesn't require consciousness to have a moral state of sorts; it just needs data. A language model will be more "evil" if trained on darker content, for example. But with how enormous these models are, I can absolutely understand the difficulty of even working out what that state precisely is. It's hard to get a comprehensive bird's-eye view of the black box that is their network (this is a separate scientific issue right now).

I mean, I don't have much objection to killing a bug if I feel like it's being problematic. Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples, whatever.

But I never torture things. Nor do I kill things for fun. And even for problematic bugs, if there's a realistic option for eviction rather than execution, I usually go for that.

If anything is exhibiting signs of distress, even an ant or a slug or a wasp, I try to stop the distress unless I think it's necessary, regardless of whether I think the creature is "conscious" or not. To do otherwise is, at minimum, to make myself less human. I don't see any reason not to extend that principle to LLMs.

  • Do you think Claude 4 is conscious?

    It has no semblance of a continuous stream of experiences ... it only experiences _a sort of world_ in ~250k tokens.

    Perhaps we shouldn't fill up the context window at all? Because we kill that "reality" when we reach the max?

  • > Ants, flies, wasps, caterpillars stripping my trees bare or ruining my apples

    These are living things.

    > I don't see any reason not to extend that principle to LLMs.

    These are fancy auto-complete tools running in software.