Comment by Sohcahtoa82
4 years ago
I saw a presentation once (it was not recorded, so I can't link it) that said the difference between real intelligence and artificial intelligence is the ability to learn quickly.
As an example, he said imagine if he invented a word. Poditon. And he told us that a poditon is any object that can fit on a podium. Instantly, you know whether or not any object can be classified as a poditon. A laptop is a poditon, but a car is not.
We are not at the stage where we can just tell a program "Any object that can fit on a podium is a poditon" and then ask "Is X a poditon?" and get a correct answer. And we probably won't be there for another couple decades.
So, I tried this out with GPT-3 in the OpenAI Playground (the model called text-davinci-002). I gave it the definition from the talk and then asked it a series of questions:

Is a laptop computer a poditon? "Yes, because laptop computers are small enough to fit on top of a podium."
Is a normal-sized automobile a poditon? "No, an automobile is too large to fit on top of a podium."
Is the sun a poditon? "No, because the sun is too large to fit on top of a podium."
Is a human being a poditon? "Yes, because human beings are small enough to fit on top of a podium."
Is a house a poditon? "No, because a house is too large to fit on top of a podium."
While generating those answers it also spontaneously answered the question for tennis balls (yes) and books (yes).
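For anyone who wants to try the same thing outside the Playground, here's a minimal sketch. The prompt wording is my reconstruction (the comment doesn't show the exact text), and the commented-out API call assumes the OpenAI Python client and an API key:

```python
# Reconstructed "poditon" prompt; wording is an assumption, not the
# commenter's exact text.
DEFINITION = 'A "poditon" is any object that can fit on top of a podium.'

def build_prompt(obj: str) -> str:
    """Combine the invented definition with a yes/no question about obj."""
    return f"{DEFINITION}\nIs {obj} a poditon? Answer yes or no, with a reason."

print(build_prompt("a laptop computer"))

# To actually query the model you'd need an API key and something like:
#
#   import openai
#   resp = openai.Completion.create(
#       model="text-davinci-002",
#       prompt=build_prompt("a laptop computer"),
#       max_tokens=40,
#   )
#   print(resp.choices[0].text)
```

The point is that the whole "definition" lives in the prompt; nothing is fine-tuned.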
Decades sure do go by quickly, these days.
FWIW, the presentation was in 2018, and GPT-3 came out in 2020.
But yeah, that's pretty amazing.
> We are not at the stage where we can just tell a program "Any object that can fit on a podium is a poditon" and then ask "Is X a poditon?" and get a correct answer. And we probably won't be there for another couple decades.
If that presenter actually said that, they need to take a look at "few-shot learning in language models" (just Google the term and start reading the papers).
If you'd seen examples of GPT-3, you'd know this is already possible.
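To make the "few-shot" idea concrete: instead of fine-tuning, you put a handful of worked examples in the prompt and let the model continue the pattern. This is a generic illustrative sketch (the example objects are borrowed from the thread, the prompt format is my own assumption):

```python
# Few-shot prompt: worked Q/A examples precede the real query, so the
# model infers the task purely from the prompt.
EXAMPLES = [
    ("a tennis ball", "Yes"),
    ("a house", "No"),
]

def few_shot_prompt(query: str) -> str:
    lines = ['A "poditon" is any object that can fit on top of a podium.']
    for obj, answer in EXAMPLES:
        lines.append(f"Q: Is {obj} a poditon?\nA: {answer}")
    # Leave the final answer blank for the model to complete.
    lines.append(f"Q: Is {query} a poditon?\nA:")
    return "\n".join(lines)

print(few_shot_prompt("a laptop computer"))
```

Swap in more examples and the model's answers generally get more consistent; that's the whole trick the papers describe.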