
Comment by AIorNot

15 days ago

LLMs are, to a large extent, neuronal analogs of human neural architecture

- of course they reason

The claim of the “stochastic parrot” needs to go away

Eg see: https://www.anthropic.com/news/golden-gate-claude

I think the rub is that people think you need consciousness to do reasoning; I’m NOT claiming LLMs have consciousness or awareness

They are really not neuronal analogs, and reasoning is far from what they do. If they reasoned, they'd stick to their guns more readily, but try to contradict an LLM and it will make any logic leap you ask it to.

If you debate with me, I'll keep reasoning from the same premises, and usually the difference between two humans is not in reasoning but in the choice of premises.

For instance, you want to assert here that LLMs are close to humans, and I want to assert they're not; the truth is probably in between, but we chose two camps. We'll then reason from these premises, reach antagonistic conclusions, and slowly try to attack each other's points.

An LLM cannot do that, it cannot attack your point very well, it doesn't know how to say you're wrong, because it doesn't care anyway. It just completes your sentences, so if you say "now you're wrong, change your mind" it will, which sounds far from reasoning to me, and quite unreasonable in fact.

  • > An LLM cannot do that, it cannot attack your point very well, it doesn't know how to say you're wrong, because it doesn't care anyway. It just completes your sentences, so if you say "now you're wrong, change your mind" it will, which sounds far from reasoning to me, and quite unreasonable in fact.

    That is absolute bullshit. Go try any frontier reasoning model such as Gemini 2.5 Pro or OpenAI o3 and see how that goes. They will inform you that you are full of shit.

    Do you understand that they are deep learning models with hundreds of layers and trillions of parameters? They have learned patterns of reasoning, and can emulate human reasoning well enough to call you out on that nonsense.

> LLMs are, to a large extent, neuronal analogs of human neural architecture

They are absolutely not. Despite the disingenuous name, computer neural nets are nothing like biological brains.

(Neural nets are a generalization of logistic regression.)
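To make that last point concrete: a feed-forward network with zero hidden layers and a sigmoid output unit is exactly logistic regression. A minimal sketch in NumPy (the weights and input below are arbitrary, chosen just for illustration):

```python
import numpy as np

def sigmoid(z):
    # The logistic function: maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w, b):
    # A "neural net" with no hidden layers: a single linear unit
    # followed by a sigmoid. This is precisely the logistic
    # regression model p(y=1|x) = sigmoid(w . x + b).
    return sigmoid(np.dot(x, w) + b)

# Arbitrary illustrative parameters and input
w = np.array([0.5, -1.2])
b = 0.3
x = np.array([2.0, 1.0])

p = forward(x, w, b)
print(p)  # a probability strictly between 0 and 1
```

Stacking more such units into hidden layers (and swapping in other nonlinearities) is what generalizes this into a deep network; the base case really is plain logistic regression.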