Comment by auctoritas

2 years ago

I'm more fatigued by people denying the obvious: that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century, and none of them managed to predict the rampant denialism of the past few months.

I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

There's a fellow who kinda predicted it in 1950 [0]:

> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."

> [...]

> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.

Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical, _real_ intelligence is the goalpost".

[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...

  • > Every time "learning machines" are able to do a new thing, there's a "wait, that's just mechanical; _real_ intelligence is the goalpost".

    Just because people shift the goalposts doesn't mean that the new position of the goalposts isn't closer to being correct than the old position. You can criticise the people for being inconsistent or failing to anticipate certain developments, but that doesn't tell you anything about where the goalposts should be.

> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

It's important to note that this is your assumption, one which I believe to be wrong (for most people here).

> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹

It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.

¹ I’m not saying that’s your intention, but consider that this type of rhetoric may be counterproductive if you’re trying to make someone understand your point of view.

² I came across that specific example on Mastodon, but I’m not finding it now.

> ChatGPT and similar models are revolutionary

For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.

  • If you're the type of person who struggles to ramp up production of a knowledge product, but has great success improving one through an iterative review process, then these generative pre-trained transformers are fantastic tools in your toolbox.

    That's about the only purpose I've found so far, but it seems a big one? (A rough sketch of such a loop follows this thread.)

  • It seems to me that the tendency to be confidently wrong is entirely baked into intelligence of all kinds. In terms of actual philosophical rationality, human reasoning is also much closer to cargo cults than to cogito ergo sum, and I think we're better for it.

    I can't help but think that this approach of "Strong Opinions, Weakly Held" is a much stronger path forward towards AGI than what we had before.

  • If you work at a computer, it will increase your productivity. Revolutionary is not the word I'd use, but finding use cases isn't hard.

    • I can buy that it's a better/worse search engine (better in that it's easier to formulate a query and you get the response right there without having to parse the results; worse in that there's a decent chance the response is nonsense, and it's very confident even when it's wrong).

      I can't really imagine asking it a question about anything I cared about and not verifying via a second source, though, given its accuracy issues. This makes it feel a lot less useful.

    • How will it do that?

      One of the major problems of modern computer-based work is that there are too many people already in those roles, doing work that isn't needed. Case in point: the culling of tens of thousands of software engineers, people who would consider themselves to be doing 'bullshit jobs'.
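
A rough sketch of the iterative review loop mentioned earlier in this thread. This is a hedged illustration, not anyone's actual workflow: it assumes the OpenAI Python SDK (v1-style client) with an API key in the environment, and the model name and prompt wording are placeholders.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

def ask_llm(prompt: str) -> str:
    """Send a single prompt to a chat model and return the reply text."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder: any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def refine(draft: str, rounds: int = 3) -> str:
    """Alternate critique and revision passes over a draft."""
    for _ in range(rounds):
        critique = ask_llm(
            "List the three biggest weaknesses of this text:\n\n" + draft
        )
        draft = ask_llm(
            "Rewrite the text to address the critique below.\n\n"
            f"Critique:\n{critique}\n\nText:\n{draft}"
        )
    return draft

print(refine("First draft of a design doc..."))
```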

So, to you, ChatGPT is approaching AGI?

  • I do believe that if we are going to get AGI without some random revolutionary breakthrough, achieving it iteratively instead, it's going to come through language models.

    Think about it.

    What's the most expressive medium we have which is also absolutely inundated with data?

    To broadly be able to predict human speech you need to broadly be able to predict the human mind. Broadly predicting a human mind requires building a model of it, and to have a model of a human mind? Welcome to general intelligence.

    We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.

    • > I do believe that if we are going to get AGI without some random revolutionary breakthrough, achieving it iteratively instead, it's going to come through language models.

      Language is way, way removed from intelligence. This is well known in cognitive psychology. You'll find plenty of examples of stroke victims who are still intelligent but have lost the ability to produce coherent sentences, and (though much rarer) examples of people who can produce clear, eloquent prose yet are so cognitively impaired that they can't tell the difference between fantasy and reality.

      1 reply →

    • > To broadly be able to predict human speech you need to broadly be able to predict the human mind

      This is a non sequitur. The human mind does a whole lot more than string words together. Being able to predict which word would logically follow another does not require the ability to predict anything other than just that. (The toy next-word predictor after this thread illustrates the point.)

      4 replies →

    • "The ability to speak does not make you intelligent." — Qui-Gon Jinn, The Phantom Menace.

  • Perhaps a more interesting question is "how much better do we understand what characteristics AGI will have due to ChatGPT?"

    We don't really understand what intelligence means -- in humans or our creations -- but ChatGPT gives us a little more insight (just like ELIZA, and the psychological research behind it, did).

    At the very least, ChatGPT helps us build increasingly better Turing tests.

  • Yes. It is obviously already weak AGI (it would have been obvious to anyone who saw it 20 years ago).

    It is also obvious that we are in the middle of a shift of some kind. It's very hard to see from within, but clearly we will look back at 2022 as the beginning of something.
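
A toy illustration of the "non sequitur" point above, as a minimal sketch with a made-up corpus: a bigram model predicts next words from raw co-occurrence counts alone, with no model of a mind anywhere.

```python
import random
from collections import Counter, defaultdict

# A made-up toy corpus; the point is the mechanism, not the data.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog around the mat"
).split()

# Count which word follows which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed successor of `word`."""
    candidates = following.get(word)
    if not candidates:
        return random.choice(corpus)  # unseen word: fall back to anything
    return candidates.most_common(1)[0][0]

# Generate eight words of "speech" with zero understanding behind it.
word = "the"
for _ in range(8):
    print(word, end=" ")
    word = predict_next(word)
print()
```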

The problem is that ChatGPT is about as useful as all the other dilettantes claiming to be polymaths. Shallow, unreliable knowledge on lots of things only gets you so far. Might be impressive at parties, but once there's real, hard work to do, these things fall apart.

  • Even if ChatGPT could only make us 10% better at solving the "easy" things, at a global scale that is already a colossal benefit to society. (A back-of-envelope illustration follows.)
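
A back-of-envelope sketch of that claim. Every figure below is an illustrative assumption, not data from the thread:

```python
# Illustrative assumptions only; none of these figures are sourced.
knowledge_workers = 1_000_000_000  # suppose ~1 billion people work at a computer
hours_per_year = 2_000             # suppose a full-time working year
productivity_gain = 0.10           # the 10% figure from the comment above

hours_saved = knowledge_workers * hours_per_year * productivity_gain
print(f"{hours_saved:.2e} hours freed up per year")  # -> 2.00e+11
```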