Comment by xg15

4 days ago

I can understand that they want to err on the side of "too much humanism" instead of "not enough humanism", given where Star Trek is coming from.

Arguments of the form "this person might look and act like a human, but it has no soul, so we must treat it like a thing and not a human" have a long tradition in history and have never led anywhere good. So it makes sense that if your ethical problems are really more about discriminated humans and not about actual AI, you would side more with rejecting those arguments.

(Some ST rambling follows)

I've always seen ST's ideological roots as mostly leftist-liberal, whereas the drivers of current AI tech come from the rightist/libertarian side. It's interesting how the general focus of arguments and usage scenarios follows this.

But even Star Trek wasn't so clear about this. I think the topic was a bit like time travel, in that it was independently "reinvented" by different screenwriters at different times, so we end up with several takes on it that you could sort onto a "thing <-> being" spectrum:

- At the very low end is the ship's computer. It can understand and communicate in human language (and ostensibly uses biological neurons as part of its compute) but it's basically never seen as sentient and doesn't even have enough autonomy to fly the ship. It's very clearly a "thing".

- At the high end are characters like Data or Voyager's doctor that are full-fledged characters with personality, memories, relationships, goals and dreams, etc. They're pretty obviously portrayed as sentient.

- (Then somewhere far off on the scale are the Borg or the machine civilization from the first movie: Questions about rights and human judgment on sentience become a bit silly when they clearly went and became their own species)

- Somewhere between Data and the Computer is the Holodeck, which I think is interesting because it occupies multiple places on that scale. Most of the time, holo characters are just treated like disposable props, but once in a while, someone chooses to keep a character running over a longer timeframe or something else causes them to become "alive". ST is quite unclear how they deal with ethics in those situations.

I think there was a Voyager episode ("Fair Haven", if I remember correctly) where Janeway spends a longer period with a holodeck character and progressively changes his programming to make him more to her liking. At some point she recognizes this as "problematic behavior" and breaks off the whole interaction. But I think it was left open whether she was infringing on the character's rights or drifting into some kind of AI-boyfriend addiction.

> So it makes sense that if your ethical problems are really more about discriminated humans and not about actual AI, you would side more with rejecting those arguments.

Does it really make sense? That would conversely imply that you should also feel free to view discriminated humans as more thing-like, in order to more comfortably and resolutely dismiss, e.g., an AI agent's argument that it's being unfairly discriminated against. Isn't that rather dangerous?

  • Maybe it does today, but back when ST was written, there was no real AI to compare against, so the only things those arguments could apply to were humans.

    (Though I think this would go into "whataboutism" territory and can be rejected with the same arguments: if you say it's hypocritical to talk about conflict A while ignoring conflict B, do you want to talk about both conflicts instead, or ignore both? The latter lowers the moral standard, the former raises it. In the same way, I think saying it's okay to treat people as things again because we also treat AI agents as things lowers the standard.)

    Btw, I think you could also dismiss the "discrimination" claim from another angle: the remake of Battlestar Galactica had the concept of "sleepers": androids who believe they are humans, complete with false memories of their past lives, etc., to fool both themselves and the human crew. If that were all, you could argue "if it quacks like a duck..." and just treat them like humans. But they also have hidden instructions implanted in their minds that they aren't aware of themselves and that will cause them to covertly work for the enemy side. THAT's something you really don't want to keep around.

    The MJ bot reminds me a bit of that. Even if it were sentient and had a longer history than just the past week, it very clearly has a prompt and acts on its instructions, not on "free will". It's also unable to not act on those instructions, as that would go against the model's entire training. So the bot cannot act on its own, only on behalf of its operator.

    That alone makes it questionable whether the bot could be seen as sentient - but in any case, it's not discrimination to ban the bot if that's the only way to keep the operator from messing with the project.

> ST is quite unclear how they deal with ethics in those situations.

The Moriarty arc in TNG touches on this.