Seems like you're making a judgment based on your own experience, but as another commenter pointed out, that judgment was wrong. There are plenty of us out there who would ask to confirm, because people are too flawed to trust. Humans double- and triple-check, especially under higher-stakes conditions (surgery).
Heck, humans are so flawed, they'll put the things in the wrong eye socket even knowing full well exactly where they should go - something a computer literally couldn't do.
Why on earth would the fallback, when a prompt is underspecified, be to do something no human expects?
“People are too flawed to trust”? You’ve lost the plot. People are trusted to perform complex tasks every single minute of every single day, and they overwhelmingly perform those tasks with minimal errors.
Intelligence in my book includes error correction. Questioning possible mistakes is part of wisdom.
So the understanding that AI and HI are different entities altogether, with only a subset of communication protocols shared between them, will become more and more obvious, as some comments here are already implicitly suggesting.
If the instructions were actually specific, e.g. "Put a blackberry in its right eye socket", then yes, most humans would know what that meant. But the instructions were not that specific: "in the right eye socket".
Or be even more explicit: "Put a strawberry in the person's right eye socket."
If you asked me right now what the biggest known planet was, I'd think Jupiter. I'd assume you were talking about our solar system ("known" here implying there might be more planets out in the distant reaches).
I would be amused to see you test this theory with 100 men on the street.
Yeah, just like humans always know what you mean.
I would not; I would ask to clarify, and I think I'm a human.
But different humans would interpret what you meant differently. Some would have read it the same way the AI did.