
Comment by Sohcahtoa82

4 years ago

I saw a presentation once (it was not recorded, so I can't link it) that said the difference between real intelligence and artificial intelligence is the ability to learn quickly.

As an example, he said to imagine he had invented a word: poditon. A poditon, he told us, is any object that can fit on a podium. Instantly, you know whether any given object can be classified as a poditon. A laptop is a poditon, but a car is not.

We are not at the stage where we can just tell a program "Any object that can fit on a podium is a poditon" and then ask "Is X a poditon?" and get a correct answer. And we probably won't be there for another couple decades.

So I tried this out with GPT-3 in the OpenAI Playground (the model text-davinci-002). My prompt looked like this:

    Definition: A blorple is anything that is red and more or less round.
    Question: Is a tennis ball a blorple?
    Answer: No, because although tennis balls are round they aren't red.
    Question: Is a cherry a blorple?
    Answer: Yes, because cherries are red and approximately round.

    Definition: A poditon is anything that can fit on top of a podium.
    Question: Is a laptop computer a poditon?
    Answer:

GPT-3 says: "Yes, because laptop computers are small enough to fit on top of a podium."

Is a normal-sized automobile a poditon? "No, an automobile is too large to fit on top of a podium."

Is the sun a poditon? "No, because the sun is too large to fit on top of a podium."

Is a human being a poditon? "Yes, because human beings are small enough to fit on top of a podium."

Is a house a poditon? "No, because a house is too large to fit on top of a podium."

While generating those answers, it also spontaneously answered the question for tennis balls (yes) and books (yes).
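For anyone who wants to reproduce this, the only moving part is the prompt itself: you keep the few-shot examples fixed and swap in a new question each time. Here's a minimal sketch of that prompt assembly (the `poditon_prompt` helper is my own invention, not anything from OpenAI; the actual completion call to the API is omitted):

```python
# Few-shot preamble, verbatim from the Playground experiment above.
FEW_SHOT = """Definition: A blorple is anything that is red and more or less round.
Question: Is a tennis ball a blorple?
Answer: No, because although tennis balls are round they aren't red.
Question: Is a cherry a blorple?
Answer: Yes, because cherries are red and approximately round.

Definition: A poditon is anything that can fit on top of a podium.
"""

def poditon_prompt(obj: str) -> str:
    """Build the full prompt asking whether `obj` is a poditon.

    The trailing "Answer:" cues the model to complete with a
    yes/no judgment plus a reason, matching the example answers.
    """
    return FEW_SHOT + f"Question: Is {obj} a poditon?\nAnswer:"

# Each query above is just this string with a different object
# substituted in, sent to the completions endpoint.
print(poditon_prompt("a laptop computer"))
```

Every follow-up question (automobile, sun, human being, house) is the same template with a different object string.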

Decades sure do go by quickly, these days.

> We are not at the stage where we can just tell a program "Any object that can fit on a podium is a poditon" and then ask "Is X a poditon?" and get a correct answer. And we probably won't be there for another couple decades.

If that presenter actually said that, they need to take a look at "few-shot learning in language models" (just Google the term and start reading the papers).