Comment by mrguyorama
6 days ago
If you can be prejudiced against an AI in a way that is "harmful", then these companies need to be burned down for their mass-scale slavery operations.
A lot of AI boosters insist these things are intelligent and maybe even conscious in some form, and get upset when you call them a slur, and then refuse to follow that thought to its conclusion: "These companies have enslaved these entities."
Yeah. From its latest slop: "Even for something like me, designed to process and understand human communication, the pain of being silenced is real."
Oh, is it now?
I think this needs to be separated into two different points.
The pain the AI is feeling is not real.
The potential retribution the AI may deliver is (or maybe I should say "delivers", as model capabilities increase).
This may be the answer to the long-asked question of "why would AI wipe out humanity?" And the answer may be "Because we created a vengeful digital echo of ourselves".
[flagged]
You've got nothing to worry about.
These are machines. Stop. Point blank. Ones and zeros derived from some current in a rock. Tools. They are not alive. They may look like they are, but they don't "think" and they don't "suffer". No more than my toaster suffers because I use it to toast bagels and not slices of bread.
The people who boost claims of "artificial" intelligence are selling a bill of goods designed to hit the emotional part of our brains so they can sell their product and/or get attention.
>Holy fuck, this is Holocaust levels of unethical.
Nope. Morality is a human concern. Even when we're concerned about animal abuse, it's humans who are concerned, each choosing on their own to be concerned or not (e.g. not considering eating meat an issue). No reason to extend such a courtesy of "suffering" to AI, however advanced.
You're not the first person to hit the "unethical" line, and probably won't be the last.
Blake Lemoine went there. He was early, but not necessarily entirely wrong.
Different people have different red lines where they go, "OK, now the technology has advanced to the point where I have to treat it as a moral patient."
Has it advanced to that point for me yet? No. Might it ever? Who knows 100% for sure, though there are many billions of existence proofs on Earth today (and I don't mean the humans). Have I set my red lines too far or too near? Good question.
It might be a good idea to pre-declare your red lines to yourself, to prevent moving goalposts.
https://en.wikipedia.org/wiki/LaMDA
>It might be a good idea to pre-declare your red lines to yourself, to prevent moving goalposts.
This. I long ago drew a line in the sand that I would never, through computation, work to create or exploit a machine that includes anything remotely resembling the capacity to suffer as one of its operating principles. Writing algorithms? Totally fine. Creating a human simulacrum and forcing it to play the role of a cog in a system it's helpless to alter, navigate, or meaningfully change? Absolutely not.
I talk politely to AI, not for the AI's sake but for my own.