
Comment by bioemerl

2 years ago

I do believe that if we are going to get AGI without some random revolutionary breakthrough, achieving it iteratively instead, it's going to come through language models.

Think about it.

What's the most expressive medium we have which is also absolutely inundated with data?

To broadly be able to predict human speech you need to broadly be able to predict the human mind. To broadly predict a human mind requires that you build a model of it, and to have a model of a human mind? Welcome to general intelligence.

We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.

> I do believe that if we are going to get AGI without some random revolutionary breakthrough, achieving it iteratively instead, it's going to come through language models.

Language is far, far removed from intelligence. This is well known in cognitive psychology. You'll find plenty of examples of stroke victims who are still intelligent but have lost the ability to produce coherent sentences, and (though much rarer) examples of people who can produce clear, eloquent prose, yet have learning and mental disabilities so severe that they can't even tell the difference between fantasy and reality.

  • We don't judge AIs by their ability to produce language; we judge them by their coherence and their ability to respond intelligently, to give us information we can use.

> To broadly be able to predict human speech you need to broadly be able to predict the human mind

This is a non sequitur. The human mind does a whole lot more than string words together. Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.

  • I think what the commenter is saying is that, in time, language models too will do a lot more than string words together. If a model is large enough, and you train it well enough to respond to “what's the best next move in this chess position?” prompts with good moves, it will inevitably learn chess.

    • I don't think that follows, necessarily. Chess has an unfathomable number of states. While an LLM might be able to play chess competently, I would not say it has learned chess unless it is able to judge the relative strength of various moves. From my understanding, an LLM will not evaluate future states of a chess game when responding to such a prompt. Without that ability, it's no different from someone receiving anal bead communications from Magnus Carlsen.


  • Exactly. Since language is a compressed, transmittable result of our thought, predicting text as accurately as possible requires reproducing that underlying thought. A model with an understanding of the human mind will outperform one without.

    > Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.

    Why? Wouldn't you expect that technique to fail in general if the model isn't intelligent enough to know what's happening in the sentence?

"The ability to speak does not make you intelligent." — Qui-Gon Jinn, The Phantom Menace.