Comment by CGMthrowaway
1 day ago
>This is a GREAT example of the (not so) subtle mistakes AI will make in image generation, or code creation, or your future knee surgery.
The mistake is in the prompting (not enough information). The AI did the best it could.
"What's the biggest known planet" "Jupiter" "NO I MEANT IN THE UNIVERSE!"
It doesn't affect your point, but since the IAU's definition is insane, exoplanets technically aren't planets, so Jupiter is the largest planet in the universe.
I suppose it was too much to hope that chatbots could be trained to avoid pointless pedantry.
They've been trained on every web forum on the Internet. How could it be possible for them to avoid that?
asking "x-most known y" and not expecting a global answer is odd
Every answer concerning planets is global.
Maybe! https://en.wikipedia.org/wiki/Toroidal_planet
No, this is squarely on the AI. A human would know what you mean without specific instructions.
Seems like you're making a judgment based on your own experience, but as another commenter pointed out, that judgment is wrong. There are plenty of us who would ask to confirm, because people are too flawed to trust. Humans double- and triple-check, especially under higher-stakes conditions (surgery).
Heck, humans are so flawed, they'll put the things in the wrong eye socket even knowing full well exactly where they should go - something a computer literally couldn't do.
Why on earth would the fallback when a prompt is underspecified be to do something no human expects?
“People are too flawed to trust”? You’ve lost the plot. People are trusted to perform complex tasks every single minute of every single day, and they overwhelmingly perform those tasks with minimal errors.
Intelligence in my book includes error correction. Questioning possible mistakes is part of wisdom.
So the understanding that AI and HI (human intelligence) are different kinds of entities, sharing only a subset of communication protocols, will become more and more obvious, as some comments here are already implicitly suggesting.
If the instructions were actually specific, e.g. "Put a blackberry in its right eye socket," then yes, most humans would know what that meant. But the instructions were not that specific: "in the right eye socket."
Or be even more explicit: "Put a strawberry in the person's right eye socket."
If you asked me right now what the biggest known planet was, I'd think Jupiter. I'd assume you were talking about our solar system ("known" here implying there might be more planets out in the distant reaches).
I would be amused to see you test this theory with 100 men on the street.
Yeah, just like humans always know what you mean.
I would not; I would ask for clarification. And I think I'm a human.
But different humans would interpret what you meant differently. Some would have read it the same way the AI did.