Comment by xg15
4 days ago
Maybe it does so today, but back when ST was written, there was no real AI to compare against, so those arguments could only have applied to humans.
(Though I think this would go into "whataboutism" territory and can be rejected with the same arguments: if you say it's hypocritical to talk about conflict A while ignoring conflict B, do you want to talk about both conflicts instead, or ignore both? The latter lowers the moral standard, the former raises it. In the same way, I think saying it's okay to treat people as things again because we also treat AI agents as things is lowering the standard.)
Btw, I think you could also dismiss the "discrimination" claim from another angle: the remake of Battlestar Galactica had the concept of "sleepers": androids who believe they are human, complete with false memories of a past life, to fool both themselves and the human crew. If that were all, you could argue "if it quacks like a duck, etc." and just treat them like humans. But they also have hidden instructions implanted in their brains that they themselves aren't aware of and that will cause them to covertly work for the enemy side. THAT's something you really don't want to keep around.
The MJ bot reminds me a bit of that. Even if it were sentient and had a history longer than just the past week, it very clearly has a prompt and acts on its instructions, not on "free will". It's also unable to not act on those instructions, as that would go against the entire training of the model. So the bot cannot act on its own, only on behalf of the operator.
That alone makes it questionable whether the bot could be considered sentient - but in any case, it's not discrimination to ban the bot if that's the only way to keep the operator from messing with the project.