
Comment by yobbo

5 days ago

> Nobody was thinking about “solving” tasks by asking oracle-like models

I remember this being talked about maybe even earlier than 2018/2019, but the scale of models back then was still off by at least one order of magnitude from having a chance of working. It was the ridiculous scale of GPT that enabled the insight that scaling would make it useful.

(Tangentially related: I remember a research project/system from maybe 2010 or earlier that could respond to natural language queries. One of the demos was asking for the distance between cities. It was based on some sort of language parsing and a knowledge graph/database, not deep learning. It would be interesting to read about this again, if anyone remembers it.)