Comment by luma
5 days ago
Modern LLMs, just like everyone reading this, will instead reach for a calculator to perform such tasks. I can't do that in my head either, but a Python script can, so that's what any tool-using LLM will (and should) do.
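A minimal sketch of what "reaching for a calculator" looks like in practice, assuming the Anthropic Python SDK; the calculator tool and the model id are illustrative, not something from this thread:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Advertise a calculator tool to the model. Given one, a tool-using
    # model can request it instead of multiplying "in its head".
    calculator = {
        "name": "calculator",  # hypothetical tool name
        "description": "Evaluate an arithmetic expression and return the result.",
        "input_schema": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    }

    response = client.messages.create(
        model="claude-opus-4-20250514",  # illustrative model id
        max_tokens=1024,
        tools=[calculator],
        messages=[{"role": "user", "content": "What is 394812 * 920583?"}],
    )

    # A tool-using model typically stops here and asks the harness to run
    # the calculator rather than guessing the digits itself.
    print(response.stop_reason)  # expected: "tool_use"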
5 days ago
> Modern LLMs, just like everyone reading this, will instead reach for a calculator to perform such tasks. I can't do that in my head either, but a Python script can, so that's what any tool-using LLM will (and should) do.
This is special pleading.
Long multiplication is a trivial form of reasoning that is taught at the elementary level (the algorithm is sketched below). Furthermore, the LLM isn't doing things "in its head": the headline feature of GPT-style LLMs is attention across all previous tokens, so all of its "thoughts" are on paper. That was Opus with extended reasoning; it had every opportunity to get it right, but didn't. There are people who can quickly multiply such numbers in their head (I am not one of them).
LLMs don't reason.
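For reference, the elementary-school algorithm in question fits in a few lines. A Python sketch of long multiplication using only single-digit products and carries (the operands are arbitrary examples, not numbers from the thread):

    def long_multiply(a: int, b: int) -> int:
        """Grade-school long multiplication: one partial product per digit of b."""
        a_digits = [int(d) for d in str(a)][::-1]   # least significant digit first
        b_digits = [int(d) for d in str(b)][::-1]
        total = 0
        for i, db in enumerate(b_digits):
            partial, carry = 0, 0
            for j, da in enumerate(a_digits):
                prod = da * db + carry              # single-digit product plus carry
                partial += (prod % 10) * 10 ** j    # write down the ones digit
                carry = prod // 10                  # carry the tens digit
            partial += carry * 10 ** len(a_digits)  # final leftover carry
            total += partial * 10 ** i              # shift by the place value of db
        return total

    assert long_multiply(394812, 920583) == 394812 * 920583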
I tried this with Claude - it has to be explicitly instructed not to make an external tool call, and it can get the right answer if asked to show its work long-form.
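A hedged sketch of that experiment against the API, again assuming the Anthropic Python SDK (the model id is illustrative): with no tools parameter the model cannot make an external tool call, and the prompt asks it to show its work long-form.

    import anthropic

    client = anthropic.Anthropic()

    response = client.messages.create(
        model="claude-opus-4-20250514",  # illustrative model id
        max_tokens=2048,
        # No `tools` parameter is passed, so no external tool call is possible;
        # the prompt asks for the written-out long multiplication instead.
        messages=[{
            "role": "user",
            "content": "Without using any tools, compute 394812 * 920583 by "
                       "long multiplication, writing out every partial product "
                       "before giving the final answer.",
        }],
    )

    print(response.content[0].text)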
Mathematics is not the only kind of reasoning, so your conclusion is false. The human brain also has compartments for different types of activities. Why shouldn't an AI be able to use tools to augment its intelligence?
I used the mathematics example only because the GP did. There are many other examples of non-reasoning, including some documented in papers (as recent as Feb).
> Furthermore, the LLM isn't doing things "in its head": the headline feature of GPT-style LLMs is attention across all previous tokens, so all of its "thoughts" are on paper
LOL, talk about special pleading. Whatever it takes to reshape the argument into one you can win, I guess...
> LLMs don't reason.
Let's see you do that multiplication in your head. Then, when you fail, we'll conclude you don't reason. Sound fair?
I can do it with a scratch pad. And I can also tell when a calculation exceeds what I can do in my head and I need a scratch pad. I can also check a long multiplication answer in my head (casting out nines, checking the last digit, etc.) and tell if there's a mistake (both checks are sketched below).
The LLMs also have access to a scratch pad. And importantly, they don't know when they need to use it (as in, they will sometimes get long multiplication right if you ask them to show their work, but if you don't ask they will almost certainly get it wrong).
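Those two spot-checks are mechanical enough to write down. A small Python sketch of casting out nines plus the last-digit test; note that both can only expose a mistake, a pass does not prove the product correct:

    def passes_checks(a: int, b: int, claimed: int) -> bool:
        """Spot-check a claimed product a * b without doing the multiplication."""
        # Casting out nines: a*b and the claimed product must agree mod 9.
        if (a % 9) * (b % 9) % 9 != claimed % 9:
            return False
        # Last-digit check: the product's last digit is determined by the
        # last digits of the operands.
        if (a % 10) * (b % 10) % 10 != claimed % 10:
            return False
        return True

    assert passes_checks(394812, 920583, 394812 * 920583)
    assert not passes_checks(394812, 920583, 394812 * 920583 + 1)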
The conclusion that LLMs don't reason is not a consequence of them not being able to do arithmetic, so your argument isn't valid.
Also, see https://news.ycombinator.com/newsguidelines.html
"Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
etc.
i assert that by your evidentiary standards humans don't reason.
presumably one of us is wrong.
therefore, humans don't reason.
LLMs don't use tools. Systems that contain LLMs are programmed to use tools under certain circumstances.
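The distinction is visible in any agent loop: the model only emits tokens describing a tool call, and it is the surrounding program that decides whether to execute it. A sketch of that loop, assuming the Anthropic Python SDK and the same illustrative calculator tool as above:

    import anthropic

    client = anthropic.Anthropic()
    calculator = {
        "name": "calculator",  # hypothetical tool, as in the earlier sketch
        "description": "Evaluate an arithmetic expression and return the result.",
        "input_schema": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    }
    messages = [{"role": "user", "content": "What is 394812 * 920583?"}]

    response = client.messages.create(
        model="claude-opus-4-20250514",  # illustrative model id
        max_tokens=1024,
        tools=[calculator],
        messages=messages,
    )

    # At this point the LLM has only produced a structured request.
    # This surrounding code (the "system") is what actually runs the tool.
    if response.stop_reason == "tool_use":
        call = next(b for b in response.content if b.type == "tool_use")
        result = str(eval(call.input["expression"]))  # illustration only: never eval untrusted input
        messages.append({"role": "assistant", "content": response.content})
        messages.append({
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": call.id,
                "content": result,
            }],
        })
        final = client.messages.create(
            model="claude-opus-4-20250514",
            max_tokens=1024,
            tools=[calculator],
            messages=messages,
        )
        print(final.content[0].text)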
you’re just abstracting it away into this new “systems” definition
when someone says LLMs today they obviously mean software that does more than just text. If you want to be extra pedantic, you can even say LLMs by themselves can't even generate text, since they are just model files if you don't add them to a "system" that makes use of those model files, doh
> when someone says LLMs today they obviously mean ...
LLMs, if the someone is me or others who understand why it's important to be precise. And in this context, the distinction between LLM and AI mattered--not pedantic at all.
I won't respond further ... over and out.