Comment by daveguy

9 days ago

o3's progress on ARC was not zero-shot. It was based on fine-tuning on the particular data set. A major point of ARC is that humans don't need fine-tuning beyond having the problem explained to them. And a few humans working on it together, after a minimal explanation, can achieve 100%.

o3 doing well on ARC after domain-specific training is not a great argument. Something significant is still missing before LLMs can be called intelligent.

I'm not sure if you watched the entire video, but there were insightful observations in it. I don't think anyone disputes that LLMs are a significant breakthrough in HCI and language modelling. But they are many layers, and possibly many winters, away from AGI.

Also, understanding human and machine intelligence isn't about taking sides. And CoT is not symbolic reasoning.