Comment by lordfrito

2 years ago

> And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different", and "this time it's the REAL THING".

This. To answer the OPs question, this is what I'm fatigued about.

I'm glad we're making progress. It's a hell of a parlor trick. But the hype around it is astounding considering how often its answers are completely wrong. People think computers are magic boxes, and so we must be just a few lever pulls away from making it correct all the time.

Or maybe my problem is that I've overestimated the average human's intelligence. If you can't tell ChatGPT apart from a good con-man, can we consider the Turing test passed? It's likely time for a redefinition of the Turing test.

Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?

I'm reminded of the old Rod Serling quote: We're developing a new citizenry. One that will be very selective about cereals and automobiles, but won't be able to think.

I'm having a really hard time following your argument. But absolutely agree we need to redefine the Turing test. Only problem is that I can no longer come up with any reasonable time-limited cognitive task that next year's AI would fail at, but a "typical human" would pass.

  • "Intelligence" is probably too nebulous a term for what it is we're trying to build. Like "pornography", it's hard to rigidly define, but you know it when you see it.

    I think "human level intelligence" is an emergent phenomenon arising from a variety of smaller cognitive subsystems working together to solve a problem. It does seem that ChatGPT and similar models have at least partially automated one of the subsystems in this model. Still, it can't reason, doesn't know it's wrong, and can't lie because it doesn't understand what a lie is. So it has a long way to go. But it's still real progress in the sense that it's allowing us to better see the dividing lines between the subsystems that make up general intelligence.

    I think that we'll need to build a better systems level model of what general intelligence is and the pieces it's built out of. With a better defined model, we can come up with better tests for each subsystem. These tests will replace the Turing test.

> Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?

I came here to make this comment. Thank you for doing it for me.

I remember feeling shocked when this article appeared in the Atlantic in 2008, "Is Google Making Us Stupid?": https://www.theatlantic.com/magazine/archive/2008/07/is-goog...

The existence of the article broke Betteridge's law for me. The fact that this phenomenon is not more widely discussed describes the limit of human intelligence. Which brings me back around to the other side... perhaps we were never as intelligent as we suspected?

  • > perhaps we were never as intelligent as we suspected?

    Yeah, I think you're right. Intelligence is just something our species has evolved as a strategy for survival. It isn't about intelligence, it's about survival.

    The cognitive skills needed to survive/navigate/thrive in the digital era are very different than the cognitive skills required to survive in the pre-digital era.

    We're biologically programmed through millions of years of evolution to survive in a world of scarcity. Intelligence used to be about tying together small bits of scarce information to find larger patterns so that we can better predict outcomes.

    Those skills are being rendered more and more irrelevant in a world of information abundance. Perhaps the "best fit" humans of the future are those that possess a new form of "intelligence", relying less on reason and more on the ability to quickly digest the firehose of data thrown at them 24/7.

    If so, then the AI we were trying to build in the 1950s would necessarily be different than the AI that our grandchildren would find helpful.

    • You're dead on. Isn't it wild that despite our seemingly impressive intelligence, such insights never seem to rise to the level of... second nature.

      I forgot to add something to my original post:

      > I remember feeling shocked when this article appeared in the Atlantic in 2008...

      At the time I was shocked that the question was even being asked!