Comment by aprilthird2021

13 days ago

> As LLMs do things thought to be impossible before

Like what?

Your timeline doesn't sound crazy outlandish. It sounds pretty normal and lines up with my own thinking as AI has advanced over the past few years. Maybe it's more conservative than others in the field, but that's not a reason to dismiss him entirely, any more than the hypesters should be dismissed entirely for over-promising and under-delivering.

> Now reasoning models can solve problems they never saw

That's not the same as answering a genuinely novel question, though.

> o3 did huge progresses on ARC

Is this a benchmark? o3 might be great, but I think the average person's experience with LLMs matches what he's saying: it seems like there is a peak and we're hitting it. It also matches what Ilya said about the training data being mostly exhausted and new architectures (not improvements to existing ones) being the way forward.

> LeCun is directing an AI lab that as the same point has the following huge issues

Your second point has nothing to do with the lab; it's about Meta as a whole. Your last point has nothing to do with the lab at all. Meta also said they would have an agent that codes like a junior engineer by the end of the year, and they are clearly going to miss that prediction; so does that extra hype put them back in your good books?