Comment by yannyu
2 days ago
There's no definition of thinking that isn't a purely internal phenomenon, which means there's no way to point a diagnostic device at someone and determine whether they're thinking. The only way to determine whether something is conscious/thinking is through some sort of inference, which is why Turing landed on the test that he did. Problem is, technology over the past 5 years has pretty easily passed variations of the Turing Test, and has exposed a lot of its limits as well.
So the next definition for detecting "thinking" will have to be externally observable and inferable like the Turing Test, but probe the other things that we consider part of consciousness/thinking.
Often this is some combination of introspection (understanding internal states), perception (understanding external objects), and synthesis of the two into testable hypotheses in some sort of feedback loop between the internal representation of the world and the external feedback from the world.
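That loop can be sketched as a toy predict/observe/update cycle. Everything here is illustrative (the class, method names, and update rule are my own invention, not any standard cognitive model or API); the point is just the shape of the feedback loop: introspect an internal state, perceive external feedback, and use the mismatch to revise the internal representation.

```python
class Agent:
    """Toy sketch of the introspect/perceive/synthesize loop.
    All names and the update rule are illustrative assumptions."""

    def __init__(self):
        # Internal representation of the world: here, a single estimate.
        self.belief = 0.0

    def introspect(self):
        # "Understanding internal states": report the current belief.
        return self.belief

    def perceive(self, observation):
        # "Understanding external objects": take in raw feedback.
        return observation

    def step(self, observation, rate=0.5):
        # Synthesis: treat the belief as a testable prediction, compare
        # it to the world's feedback, and correct the internal model.
        prediction = self.introspect()
        error = self.perceive(observation) - prediction
        self.belief += rate * error
        return error

agent = Agent()
for obs in [1.0, 1.0, 1.0]:
    err = agent.step(obs)
# The prediction error shrinks as the internal model converges
# toward the external signal.
```

Each pass through `step` closes the loop the comment describes: the internal representation generates an expectation, the world disagrees, and the disagreement drives the update.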
Right now, a chatbot can say all sorts of things about itself and about the world, but none of that is based on real-time, factual information. An animal, by contrast, can't speak, but it clearly processes information and considers it when determining its current and future actions.