
Comment by godelski

8 hours ago

I thought it was funny that a few weeks ago Karpathy shared a sample of Nano Banana solving some physics problems, but despite producing the right output it doesn't get the right answers.

I think it's quite illustrative of the problem even with coding LLMs. Code and math proofs aren't so different: what matters is the steps used to generate the output, and those steps matter far more than the output itself. The output is meaningless if the steps to get there aren't correct. You can't just jump to the last line of a proof to determine its correctness, and similarly you can't just look at a program's output to determine its correctness.

Checking outputs is a great way to invalidate them, but it does nothing to validate them.
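A toy illustration of that asymmetry (not anything from Karpathy's post; the `largest` function and its tests are made up for the example):

```python
def largest(xs):
    # Buggy "max": just returns the last element.
    # The steps are wrong even when the output happens to be right.
    return xs[-1]

# Output checks that happen to pass, because every test list is sorted:
assert largest([1, 2, 3]) == 3
assert largest([0, 5, 9]) == 9

# One counterexample invalidates it, but no number of passing checks
# would ever have validated it:
print(largest([9, 5, 0]))  # prints 0, should be 9
```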

Maybe what surprised me most is that the mistakes Nano Banana made are simple enough that I'm absolutely positive Karpathy could have caught them, even if his physics is very rusty. I'm often left wondering whether people really are true believers who have become blind to the mistakes, or whether they just don't care. It's fine to make mistakes, but I rarely see corrections, and let's be honest here, these are mistakes that people of this caliber should not be making.

I expect most people here can find multiple mistakes in the physics problem. One can be found if you know what the derivative of e^x is, and another if you can count how many i's there are.
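For reference, the first hint only requires a standard calculus fact, not anything specific to the post:

$$\frac{d}{dx} e^{x} = e^{x}, \qquad \frac{d}{dx} e^{ax} = a\,e^{ax}$$

Any derivation step that differentiates it some other way is wrong no matter how clean the final line looks.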

The AI cheats because it's focused on the output, not the answer. We won't solve this problem until we recognize that the output and the answer aren't synonymous.

https://xcancel.com/karpathy/status/1992655330002817095