Comment by bonplan23
9 hours ago
"Someone" literally did that (+/- 2 years): https://link.springer.com/book/10.1007/978-3-540-68677-4
I think it was supposed to be a more useful term than the earlier and more common "strong AI". With regard to strong AI, there was a widely accepted definition, namely passing the Turing test, and we are way past that point already (see https://arxiv.org/pdf/2503.23674).
I have to challenge the paper authors' understanding of the Turing test. For an AI system to pass the Turing test, its output must be indistinguishable from a human's. In other words, the rate at which judges pick the AI system as human should equal the rate at which they pick the human. If in an experiment the AI system is picked at a rate significantly above 50%, it does not pass the Turing test (as the authors seem to believe), because another human could use that knowledge to conclude that the system being picked is not really human.
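To make the statistical point concrete: if the AI were truly indistinguishable, judges' picks should split 50/50, so a pick rate far above 50% is itself evidence that judges can tell the two apart. A minimal sketch of an exact two-sided binomial test, with made-up trial counts (the function name and the 365-of-500 figures are illustrative, not taken from the paper):

```python
from math import comb

def two_sided_binomial_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: probability, under a null pick
    rate p, of an outcome at least as unlikely as k picks out of n."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    # Sum the probability of every outcome no more likely than the
    # observed one (the standard "method of small p-values").
    return sum(x for x in pmf if x <= pmf[k] + 1e-12)

# Hypothetical experiment: judges pick the AI as human in 365 of 500
# trials, a 73% rate. The null hypothesis (indistinguishable, 50%)
# is rejected overwhelmingly -- the high rate itself gives the AI away.
print(f"p = {two_sided_binomial_p(365, 500):.2e}")

# By contrast, 250 of 500 (exactly 50%) is perfectly consistent
# with indistinguishability.
print(f"p = {two_sided_binomial_p(250, 500):.2f}")
```

The asymmetry is the point of the objection: both "picked too rarely" and "picked too often" are deviations from 50% that an informed judge could exploit.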
Also, I would go one step further and claim that to pass the Turing test an AI system should be indistinguishable from a human when judged by people trained in making such a distinction. I doubt that they used such people in the experiment.
I doubt that any AI system available today, or in the foreseeable future, can pass the test as I qualify it above.
People are constantly being fooled by bots in forums like Reddit and this one. That's good enough for me to consider the Turing test passed.
It also makes me consider it an inadequate test to begin with, since all classes of humans, including domain experts, can be fooled and have been in the past. The Turing test has always said more about the human participants than about the machine.