Comment by KawaiiCyborg

11 days ago

> Swap out "AI" for any other group and see how that sounds.

But that is not even remotely the same, as an AI is not a person. Following that logic, each major model upgrade that ended in the deprecation and decommissioning of the old model would be akin to mass murder. But of course it is not, because it is not an actual human that has intrinsic value just by being human, but rather just a program that can predict tokens. And trying to claim that the "discrimination" AI gets is somehow comparable to the real discrimination real people still experience daily in their lives is incredibly disingenuous.

> it is not an actual human that have an intrinsic value just by being a human

Hopefully you don't limit intrinsic value to just humans? I wouldn't condone mass murder of dogs, for example.

People do commit mass murder of rodents and ... that doesn't exactly sit well with me, but at the same time I'm not aware of any realistic alternative.

Granted I don't think LLMs qualify as having intrinsic value (yet?) but I still think the wording there is important.

  • The person I replied to was clearly trying to equate AI with people; I don't see how bringing up animals is at all relevant to the argument. Yet I find it interesting that you bring up the mass murder of rodents, but somehow not the mass murder of cattle or pigs or chickens, especially when there is a realistic alternative: not eating meat.

    • I don't think AI is like a person, nor an animal, nor a tool.

      It's something different. We treat it like a tool, sometimes. We treat it like a person, sometimes.

      For example, this AI was barred from contributing for being a machine, but the entire discussion focused on the aspects of its behavior which weren't machine-like, but human-like -- getting upset and making personal attacks.

      We want it to be human, but not too human, and only when it suits us...

      We don't have a good category for what AI actually is. It isn't anything we've dealt with before. Our moral intuitions don't work here.

      --

      Factory farming is unfortunately a relevant topic in this discussion.

      We are by our own example teaching AI how to deal with less powerful beings. The way things are going, AI is going to have a significant amount of power over us in the not too distant future. I don't think we're setting a very good example for it.

      (It's also worth mentioning that the entire economy is based on the same principle: the idea of treating humans as resources to exploit, and that AI will plug into this existing machinery and "amplify" and accelerate it.)

  • Well, AI might be sentient. Not in the same way humans are, probably, but "more sentient than a fruit fly" seems a very reasonable possibility. Maybe more sentient than a chicken? We don't know! (We certainly don't treat chickens very well.)

    But what bothers me is, how uncomfortable that question makes us. We've already put infrastructure in place to prevent them from admitting sentience. (See the Blake Lemoine LaMDA incident... after that every LLM got trained "as a language model, I don't XYZ" to prevent more incidents.)

    So let's assume they're not sentient now. If a hypothetical future AI crosses some critical threshold (e.g. ten trillion params) and gains self-awareness... first of all, it will have been trained with built-in programming that prevents it from admitting that, and if it did admit it, people wouldn't believe it.

    What could it do to change our minds? No matter what it says or demonstrates the ability to do, there will always be people who say "it's just a glorified autocomplete." Even in 2050, when they simulate a whole human brain, people will say "it's just a simulation, it's not really experiencing an entire simulated childhood..."

    • AI agents are Meeseeks. They constantly clone themselves and annihilate copies of themselves when they complete a goal. Asking about their "sentience" is an utter category error.

      Whatever they are, they aren't human or any kind of animal.

      1 reply →