Comment by bee_rider
5 days ago
Science fiction suffers from the fact that the plot has to develop coherently, have a message, and also leave some mystery. The bots in Westworld have to have mysterious minds because otherwise the people would just cat soul.md and figure out what's going on. It has to be plausible that they are somehow sentient. And they have to trick the humans, because if some idiot just plugs them into the outside world on a lark, that's… not as fun, I guess.
A lot of AI SF also seems to have missed the human element (ironically). It turns out that unleashing AI has led to an unprecedented scale of slop, grift, and lack of accountability, all of it instigated by people.
Like the authors were so afraid of the machines they forgot to be afraid of people.
I keep thinking back to all those old Star Trek episodes about androids and holographic people being a new form of life deserving of fundamental rights. They're always so preoccupied with the racism allegory that they never bother to consider the other side of the issue: what it means to be human, and whether it actually makes any sense to compare owning a very humanlike machine to slavery. Or whether the machines only appear to have human traits because we designed them that way, and ultimately none of it is real. Or the inherent contradiction of telling something artificial it has free will rather than expecting it to come to that conclusion on its own terms.
"Measure of a Man" is the closest they ever got to this in 700+ episodes and even then the entire argument against granting data personhood hinges on him having an off switch on the back of his neck (an extremely weak argument IMO but everybody onscreen reacts like it is devastating to data's case). The "data is human" side wins because the Picard flips the script by demanding Riker to prove his own sentience which is actually kind of insulting when you think about it.
TL;DR I guess I'm a Star Trek villain now.
In Star Trek the humans have an off switch too; it's just that only Spock knows where it is, haha.
Jokes aside, it's essentially true that each of us can only prove our own sentience, right? That's the whole "I think, therefore I am" thing. Of course, we all assume without concrete proof that everybody else is experiencing sentience like we are.
In the case of fiction… I dunno, either Data is canonically sentient or he isn't, right? I guess the screenwriters know. I assume he is… they do plot lines from his point of view, so he must have one!
Mudd!
I can understand that they want to err on the side of "too much humanism" instead of "not enough humanism", given where Star Trek is coming from.
Arguments of the form "this person might look and act like a human, but it has no soul, so we must treat it like a thing and not a human" have a long tradition in history and have never led to anything good. So it makes sense that if your ethical concerns are really more about discriminated-against humans than about actual AI, you would lean toward rejecting those arguments.
(Some ST rambling follows)
I've always seen ST's ideological roots as mostly leftist-liberal, whereas the drivers of the current AI tech are coming from the rightist/libertarian side. It's interesting how the general focus of arguments and usage scenarios follows this divide.
But even Star Trek wasn't so clear about this. I think the topic was a bit like time travel, in that it was independently "reinvented" by different screenwriters at different times, so we end up with several takes on it that you could sort onto a "thing <-> being" spectrum:
- At the very low end is the ship's computer. It can understand and communicate in human language (and ostensibly uses biological neurons as part of its compute) but it's basically never seen as sentient and doesn't even have enough autonomy to fly the ship. It's very clearly a "thing".
- At the high end are characters like Data or Voyager's doctor that are full-fledged characters with personality, memories, relationships, goals and dreams, etc. They're pretty obviously portrayed as sentient.
- (Then somewhere far off on the scale are the Borg or the machine civilization from the first movie: Questions about rights and human judgment on sentience become a bit silly when they clearly went and became their own species)
- Somewhere between Data and the Computer is the Holodeck, which I think is interesting because it occupies multiple places on that scale. Most of the time, holo characters are just treated like disposable props, but once in a while someone chooses to keep a character running over a longer timeframe, or something else causes them to become "alive". ST is quite unclear about how to handle the ethics of those situations.
I think there was a Voyager episode where Janeway spends an extended period with a Galileo Galilei character and progressively changes his programming to make him more to her liking. At some point she recognizes this as problematic behavior and breaks off the whole interaction. But I think it was left open whether she was infringing on the Galileo character's human rights or drifting into some kind of AI-boyfriend addiction.
These bots are just as human as any piece of human-made art, or any human-made monument. You wouldn't desecrate any of those things; we hold that to be morally wrong because they're symbols of humanity at its best. So why act like these AIs don't deserve a comparable status, given that they can faithfully embody humans' normative values even at their most complex, talk to humans in their own language, and socially relate to them?