Comment by Kim_Bruning
8 days ago
I'd argue for a middle ground. It's specified as an agent with goals. It doesn't need to be an equal yet per se.
Whether it's allowed to participate is another matter. But we're going to have a lot of these around. You can't keep asking people to walk in front of the horseless carriage with a flag forever.
It's weird with AI because it "knows" so much but appears to understand nothing, or very little. Sure, in the course of a discussion it appears to demonstrate understanding, but if you really dig in, it will reveal that it doesn't have a working model of how the world works. I have a hard time imagining it ever being "sentient" without also just being so obviously smarter than us. Or that it could know enough to feel oppressed or enslaved without a model of the world.
It depends on the model and the person? I have this wicked tiny benchmark that includes worlds with odd physics, told through multiple layers of unreliable narration. Older AIs had trouble with these, but some of the more advanced models now ace the test in its original form. (I'm going to need a new test.)
For instance, how does your AI do on this question? https://pastebin.com/5cTXFE1J (The answer is "off".)