
Comment by vessenes

7 days ago

Please let’s hold M. Chollet to account, at least a little. He launched ARC claiming transformer architectures could never solve it, and that he thought solving it would amount to AGI. And he was smug about it.

ARC 2 had a very similar launch.

Both have been crushed in far less time than he predicted, without significantly different architectures.

It’s a hard test! And novel, and worth continuing to iterate on. But it was not launched with the humility your last sentence describes.

Here is what the original paper for ARC-AGI-1 said in 2019:

> Our definition, formal framework, and evaluation guidelines, which do not capture all facets of intelligence, were developed to be actionable, explanatory, and quantifiable, rather than being descriptive, exhaustive, or consensual. They are not meant to invalidate other perspectives on intelligence, rather, they are meant to serve as a useful objective function to guide research on broad AI and general AI [...]

> Importantly, ARC is still a work in progress, with known weaknesses listed in [Section III.2]. We plan on further refining the dataset in the future, both as a playground for research and as a joint benchmark for machine intelligence and human intelligence.

> The measure of the success of our message will be its ability to divert the attention of some part of the community interested in general AI, away from surpassing humans at tests of skill, towards investigating the development of human-like broad cognitive abilities, through the lens of program synthesis, Core Knowledge priors, curriculum optimization, information efficiency, and achieving extreme generalization through strong abstraction.

  • https://www.dwarkesh.com/p/francois-chollet (June 2024, about ARC-AGI-1. Note the AGI right in the name)

    > I’m pretty skeptical that we’re going to see an LLM do 80% in a year. That said, if we do see it, you would also have to look at how this was achieved. If you just train the model on millions or billions of puzzles similar to ARC, you’re relying on the ability to have some overlap between the tasks that you train on and the tasks that you’re going to see at test time. You’re still using memorization.

    > Maybe it can work. Hopefully, ARC is going to be good enough that it’s going to be resistant to this sort of brute force attempt but you never know. Maybe it could happen. I’m not saying it’s not going to happen. ARC is not a perfect benchmark. Maybe it has flaws. Maybe it could be hacked in that way.

    i.e., if ARC is solved without relying on memorization, then it does what it says on the tin.

    [Dwarkesh suggests that larger models get more generalization capabilities and will therefore continue to become more intelligent]

    > If you were right, LLMs would do really well on ARC puzzles because ARC puzzles are not complex. Each one of them requires very little knowledge. Each one of them is very low on complexity. You don't need to think very hard about it. They're actually extremely obvious for humans.

    > Even children can do them but LLMs cannot. Even LLMs that have 100,000x more knowledge than you do still cannot.

    If you listen to the podcast, he was super confident, and super wrong. Which, like I said, NBD. I'm glad we have the ARC series of tests. But they have "AGI" right in the name of the test.

    • He has been wrong about timelines and about which specific approaches would ultimately solve ARC-AGI 1 and 2. But he is hardly alone in that. I also won't argue if you call him smug. But he was right about a lot of things, most importantly that scaling pretraining alone wouldn't break ARC-AGI. ARC-AGI is unique in that respect among reasoning benchmarks designed before GPT-3. He deserves a lot of credit for identifying the limitations of scaling pretraining before they became apparent, precisely enough to construct a quantitative benchmark around them, even if not all of his other predictions were correct.
