Comment by falcor84

2 years ago

I'm having a really hard time following your argument. But absolutely agree we need to redefine the Turing test. Only problem is that I can no longer come up with any reasonable time-limited cognitive task that next year's AI would fail at, but a "typical human" would pass.

"Intelligence" is probably too nebulous a term for what it is we're trying to build. Like "pornography", its hard to rigidly define, but you know it when you see it.

I think "human level intelligence" is an emergent phenomenon arising from a variety of smaller cognitive subsystems working together to solve a problem. It does seem that ChatGPT and similar models have at least partially automated one of the subsystems in this model. Still, it can't reason, doesn't know it's wrong, and can't lie because it doesn't understand what a lie is. So it has a long way to go. But it's still real progress in the sense that it's allowing us to better see the dividing lines between the subsystems that make up general intelligence.

I think we'll need to build a better systems-level model of what general intelligence is and the pieces it's built out of. With a better-defined model, we can come up with better tests for each subsystem. These tests will replace the Turing test.