Moravec's Paradox and the Robot Olympics

4 days ago (physicalintelligence.company)

I genuinely did not expect to see a robot handling clothing like this within the next ten years at least. Insanely impressive

I do find it interesting that they state that each task is done with a fine-tuned model. I wonder if that's a limitation of the current dataset their foundation model is trained on (which is what I think they're suggesting in the post) or if it reflects something more fundamental about robotics tasks. It reminds me of a few years ago in LLMs, when fine-tuning was more prevalent. I don't follow LLM training methodology closely, but my impression is that the bulk of recent improvements have come from better RL post-training and inference-time reasoning.

Obviously they're pursuing RL, and I'm not sure spending more tokens at inference would even help for fine manipulation like this, to say nothing of the latency problems it would introduce.

So, maybe the need for fine-tuning goes away with a better foundation model, as they're suggesting? I hope this doesn't point toward more fundamental limitations on robot learning with current VLA foundation model architectures.

  • There are a lot of indications that robotics AI is in a data-starved regime - which means that future models are likely to attain better zero-shot performance, solve more issues in-context, generalize better, require less task-specific training, and be more robust.

    But it seems like a degree of "RL in real life" is nigh-inevitable - imitation learning only gets you so far. Kind of like how RLVR is nigh-inevitable for high LLM performance on agentic tasks, and for many of the same reasons.

Those videos are very impressive. This is real progress on tasks at which robotics has been failing for fifty years.

Here are some of the same tasks being attempted as part of the DARPA ARM program in 2012.[1] Compare key-in-lock and door opening with the 2025 videos linked above. Huge improvement.

We just might be over the hump on manipulation.

[1] https://www.youtube.com/watch?v=jeABMoYJGEU

  > The gold-medal task is to hang an inside-out dress shirt, after turning it right-side-in, which we do not believe our current robot can do physically, because the gripper is too wide to fit inside the sleeve

You don't need to fit inside the sleeve to turn it inside out...

Think about a sock (the same principle applies, but it's easier to visualize). You scrunch up the sock so it's like a disk. Then you pull to invert it.

This can be done with any piece of clothing. It's something I do frequently because it's often easier (I turn all my clothes inside out before washing).

  • With those grippers, though? Even scrunching up a sock would be difficult, and a sock does fit. Doing a long sleeve completely unanchored is probably physically possible with extreme care, but I see why they mark the robot down as physically unable.

Sergey Levine, one of the co-founders, sat for an excellent episode of the Dwarkesh podcast this year, which I thoroughly recommend.