Comment by mike_hearn
5 days ago
Are you sure? I wrote an essay at the end of 2016 about the state of AI research, and at the time researchers were demolishing benchmarks like FAIR's bAbI, which involved generating answers to questions. I wrote back then about story comprehension and about programming robots by giving them stories (what we'd now call prompts).
https://blog.plan99.net/the-science-of-westworld-ec624585e47
bAbI paper: https://arxiv.org/abs/1502.05698
Abstract: One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human.
So at least FAIR was thinking about making AI that you could ask questions of in natural language. Then they went and beat their own benchmark with the Memory Networks paper:
https://arxiv.org/pdf/1410.3916
Fred went to the kitchen. Fred picked up the milk. Fred travelled to the office.
Where is the milk ? A: office
Where does milk come from ? A: milk come from cow
What is a cow a type of ? A: cow be female of cattle
Where are cattle found ? A: cattle farm become widespread in brazil
What does milk taste like ? A: milk taste like milk
What does milk go well with ? A: milk go with coffee
Where was Fred before the office ? A: kitchen
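The location questions above ("Where is the milk?", "Where was Fred before the office?") are answered by chaining facts: track where each actor is, and propagate that location to whatever the actor is carrying. A minimal hand-rolled sketch of that chaining logic (not the Memory Networks model itself; function and variable names are my own invention):

```python
def answer_where(story, obj):
    """Chain facts from a bAbI-style story to locate `obj`.

    Tracks each actor's last known place; when an actor who is
    holding `obj` moves, the object's location moves with them.
    """
    actor_location = {}   # actor -> last known place
    holder = None         # actor currently carrying obj, if any
    obj_location = None   # last known place of obj
    for sentence in story:
        words = sentence.rstrip(".").split()
        actor, verb = words[0], words[1]
        if verb in ("went", "travelled", "journeyed", "moved"):
            actor_location[actor] = words[-1]
            if actor == holder:
                obj_location = words[-1]   # object moves with its holder
        elif verb == "picked" and obj in words:
            holder = actor
            obj_location = actor_location.get(actor)
        elif verb in ("dropped", "discarded") and obj in words:
            holder = None                  # object stays where it was left
    return obj_location

story = [
    "Fred went to the kitchen.",
    "Fred picked up the milk.",
    "Fred travelled to the office.",
]
print(answer_where(story, "milk"))  # → office
```

The point of Memory Networks was that the model learned this kind of fact-chaining from supervision rather than having it hard-coded like this, but the toy version shows what the benchmark is testing.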
That was published in 2015. So we can see ChatGPT-like capabilities quite early, even if still primitive.