Comment by anonymars
6 days ago
Not to be hyperbolic, but the leap between this and Westworld (and other similar fiction) is a lot shorter than I would like...all it takes is some prompting in soul.md and the agent's ability to update it and it can go bananas?
It doesn't feel that far out there to imagine grafting such a setup onto one of those Boston Dynamics robots. And then what?
Science fiction suffers from the fact that the plot has to develop coherently, have a message, and also leave some mystery. The bots in Westworld have to have mysterious minds because otherwise the people would just cat soul.md and figure out what's going on. It has to be plausible that they are somehow sentient. And they have to trick the humans, because if some idiot just plugged them into the outside world on a lark, that's… not as fun, I guess.
A lot of AI SF also seems to have missed the human element (ironically). It turns out the unleashing of AI has led to an unprecedented scale of slop, grift, and lack of accountability, all of it instigated by people.
Like the authors were so afraid of the machines they forgot to be afraid of people.
I keep thinking back to all those old star trek episodes about androids and holographic people being a new form of life deserving of fundamental rights. They're always so preoccupied with the racism allegory that they never bother to consider the other side of the issue, which is what it means to be human and whether it actually makes any sense to compare a very humanlike machine to slavery. Or whether the machines only appear to have human traits because we designed them that way but ultimately none of it is real. Or the inherent contradiction of telling something artificial it has free will rather than expecting it to come to that conclusion on its own terms.
"Measure of a Man" is the closest they ever got to this in 700+ episodes, and even then the entire argument against granting Data personhood hinges on him having an off switch on the back of his neck (an extremely weak argument IMO, but everybody onscreen reacts like it is devastating to Data's case). The "Data is human" side wins because Picard flips the script by demanding that Riker prove his own sentience, which is actually kind of insulting when you think about it.
TL;DR i guess I'm a star trek villain now.
I was wondering: what happens once it can generate profit?
Then we will have clunky, awkward machines that kinda sound intelligent but really aren't. Then they will need maintenance and break in 6 days.
The leap is very large, in actuality.
Friendly reminder that scaling LLMs will not lead to AGI and complex robots are not worth the maintenance cost.
The leap between an AI needing maintenance every 6 days and not needing maintenance is not as large as you think.