Comment by coldtea

18 hours ago

That is not as strong an argument as it seems, because we too might very well be "a series of weights for probable next tokens".

The main difference is the training part and that it's always-on.

If you claim something might "very well" be something, you need some better proof for that statement. Otherwise we might also "very well" be living in the Matrix.

That is a silly point. We very clearly are not "a series of weights for probable next tokens", as we can reason based on prior data points. LLMs cannot.

  • Unless you're using some mystical conception of "reason", nothing about being able to "reason based on prior data points" translates to "we very clearly are not a series of weights for probable next tokens".

    And in fact LLMs can very well "reason based on prior data points". That's what a chat session is. It's just that this is transient for cost reasons.

People always say this kind of thing. Human minds are not Turing machines or able to be simulated by Turing machines. When you go about your day doing your tasks, do you require terajoules of energy? I believe it is pretty clear human thinking is not at all like computers as we know them.

  • >People always say this kind of thing. Human minds are not Turing machines or able to be simulated by Turing machines

    That's just a claim. Why so? Who said that's the case?

    >When you go about your day doing your tasks, do you require terajoules of energy?

    That's the definition of irrelevant. ENIAC needed 150 kW to do about 5,000 additions per second. A modern high-end GPU uses about 450 W to do around 80 trillion floating-point operations per second. That’s roughly 16 billion times the operation rate at about 1/333 the power, or around 5 trillion times better energy efficiency per operation.
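
    A quick back-of-the-envelope check of those figures in Python (the ENIAC and GPU numbers are the rough ballpark specs quoted above, not precise measurements):

    ```python
    # Rough sanity check of the ENIAC-vs-modern-GPU comparison above.
    eniac_power_w = 150_000        # ~150 kW
    eniac_ops_per_s = 5_000        # ~5,000 additions per second

    gpu_power_w = 450              # ~450 W for a modern high-end GPU
    gpu_ops_per_s = 80e12          # ~80 trillion floating-point operations per second

    rate_ratio = gpu_ops_per_s / eniac_ops_per_s      # ~1.6e10 -> ~16 billion times the rate
    power_ratio = gpu_power_w / eniac_power_w         # ~0.003  -> ~1/333 of the power
    j_per_op_eniac = eniac_power_w / eniac_ops_per_s  # ~30 J per operation
    j_per_op_gpu = gpu_power_w / gpu_ops_per_s        # ~5.6e-12 J per operation
    efficiency_gain = j_per_op_eniac / j_per_op_gpu   # ~5.3e12 -> ~5 trillion times more efficient

    print(f"{rate_ratio:.1e}x rate, {power_ratio:.1e}x power, {efficiency_gain:.1e}x efficiency per op")
    ```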

    Given that such an increase was possible, one can expect a future computer to be able to run calculations at the level of our mental tasks, with similar or better efficiency than our own.

    Furthermore, "turing machine" is an abstraction. Modern CPUs/GPUs aren't turing machines either, in a pragmatic sense, they have a totally different architecture. And our brains have yet another architecture (more efficient at the kind of calculations they need).

    What's important is computational expressiveness, and nothing you wrote proves that the brain's architecture can't be modelled algorithmically and run on an equally efficient machine.

    Even "equally efficient" is a red herring. If it were 10,000 times less efficient, would it matter for whether the brain can be modelled or not? No, it would just speak to the effectiveness of our architecture.

We are much more than weights which output probable next tokens.

You are a fool if you think otherwise. Are we conscious beings? Who knows, but we’re more than a neural network outputting tokens.

Firstly, and most obviously, we aren’t LLMs, for Pete’s sake.

There are parts of our brains which are understood (kinda) and there are parts which aren’t. Some parts are neural networks, yes. Are all? I don’t know, but the training humans get is coupled with the pain and embarrassment of mistakes, the ability to learn while training (since we never stop training, really), and our own desires to reach our own goals for our own reasons.

I’m not spiritual in any way, and I view all living beings as biological machines, so don’t assume that I am coming from some “higher purpose” point of view.

  • >We are much more than weights which output probable next tokens. You are a fool if you think otherwise. Are we conscious beings? Who knows, but we’re more than a neural network outputting tokens.

    That's just stating a claim though. Why is that so?

    Mine is referring to the established "brain as prediction machine" theory, plus all we know about the brain's operation (neurons, connections, firings, etc.).

    >There are parts of our brains which are understood (kinda) and there are parts which aren’t. Some parts are neural networks, yes. Are all?

    What parts aren't? Can those parts still be algorithmically described and modelled as some information exchange/processing?

    >but the training humans get is coupled with the pain and embarrassment of mistakes

    Those are versions of negative feedback. We can do similar things to neural networks (including human preference feedback, penalties, and low scores).

    >the ability to learn while training (since we never stop training, really)

    I already covered that: "The main difference is the training part and that it's always-on."

    We do have NNs that are continuously training and updating weights (even in production).

    For big LLMs it's impractical because of the cost, otherwise totally doable. In fact, a chat session kind of does that too, but it's transient.
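
    To make the always-on-training point concrete, here is a minimal sketch (a toy logistic-regression model in NumPy, purely illustrative and nothing like how production LLMs are actually updated) of a model whose weights keep changing from an error signal while it is in use:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(3)  # weights of a tiny logistic-regression "model"

    def predict(x):
        return 1.0 / (1.0 + np.exp(-x @ w))  # probability the model assigns to "yes"

    lr = 0.1
    for step in range(1000):
        x = rng.normal(size=3)            # a new "experience" arrives
        y = float(x[0] + 0.5 * x[1] > 0)  # hidden ground truth the model must learn
        p = predict(x)                    # the model acts (inference)...
        w -= lr * (p - y) * x             # ...and immediately updates its weights from the
                                          # error signal (the "penalty"); training never stops
    ```

    The point is only that inference and weight updates can be interleaved; scale and cost are what make this impractical for today's big LLMs.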

  • They're not artificial intelligence neural networks.

    They're biological neural networks. Brains are made of neurons (which Do The Thing... mysteriously, somehow. Papers are inconclusive!), glial cells (which support the neurons), and also several other tissues for (obvious?) things like blood vessels, which you need to power the whole thing, and other such management hardware.

    Bioneurons are a bit more powerful than what artificial intelligence folks call 'neurons' these days. They have built-in computation and learning capabilities. For some of them, you need hundreds of AI neurons to simulate their function even partially. And there are still bits people don't quite get about them.

    But weights and prediction? That's the next emergence level up, we're not talking about hardware there. That said, the biological mechanisms aren't fully elucidated, so I bet there's still some surprises there.

We very obviously are not just a series of weights for probable next tokens. Like seriously, you can even ask an LLM and it will tell you our brains work differently to it, and that’s not even including the possibility that we have a soul or any other spiritual substrate.

  • >We very obviously are not just a series of weights for probable next tokens.

    How exactly? Except via handwaving? I refer to the "brain as prediction machine" theory, which is the dominant one atm.

    >you can even ask an LLM and it will tell you our brains work differently to it

    It will just tell me platitudes based on weights of the millions of books and articles and such on its training. Kind of like what a human would tell me.

    >and that’s not even including the possibility that we have a soul or any other spiritual substrate.

    That's good, because I wasn't including it either.

    • "brain as prediction machine theory" is dominant among whom, exactly? Is it for the same reason that the "watchmaker analogy" was 'dominant' when clockwork was the most advanced technology commonly available?

  • It's really just a matter of degrees. There are 1 million, 1 billion, 1 trillion parameter LLMs... and you keep scaling those parameters and you eventually get to humans. But it's still probable next tokens (decisions) based on previous tokens (experience).

    • > It's really just a matter of degrees. There are 1 million, 1 billion, 1 trillion parameter LLMs... and you keep scaling those parameters and you eventually get to humans.

      It isn’t, because humans and current LLMs have radically different architectures:

      LLMs: training and inference are two separate processes; weights are modifiable during training, static/fixed/read-only at runtime

      Humans: training and inference are integrated and run together; weights are dynamic, continuously updated in response to new experiences

      You can scale current LLM architectures as far as you want; it will never compete with humans, because it architecturally lacks their dynamism.

      Actually scaling to humans is going to require fundamentally new architectures, which some people are working on, but it isn’t clear if any of them have succeeded yet.

    • They’re both neural networks, but the architectures built using those neural connections, and the way they are trained and operate are completely different. There are many different artificial neural network architectures. They’re not all LLMs.

      AlphaZero isn’t an LLM. There are feed-forward networks, recurrent networks, convolutional networks, transformer networks, generative adversarial networks.

      Brains have many different regions each with different architectures. None of them work like LLMs. Not even our language centres are structured or trained anything like LLMs.

    • LOL. Oook... No, I don't think so. The human experience and the mechanisms behind it have a lot of unknowns, and I'm pretty sure that trying to confine the human experience to the number of parameters there are is short-sighted.

  • Our brains work differently, yes. What evidence do you have that our brains are not functionally equivalent to a series of weights being used to predict the next token?

    I'm not claiming that to be the case, merely pointing out that you don't appear to have a reasonable claim to the contrary.

    > not even including the possibility that we have a soul or any other spiritual substrate.

    If we're going to veer off into mysticism then the LLM discussion is also going to get a lot weirder. Perhaps we ought to stick to a materialist scientific approach?

    • You are setting the bar in a way that makes “functional equivalence” unfalsifiable.

      If by “functionally equivalent” you mean “can produce similar linguistic outputs in some domains,” then sure we’re already there in some narrow cases. But that’s a very thin slice of what brains do, and thus not functionally equivalent at all.

      There are a few non-mystical, testable differences that matter:

      - Online learning vs. frozen inference: brains update continuously from tiny amounts of data; LLMs do not

      - Grounding: human cognition is tied to perception, action, and feedback from the world. LLMs operate over symbol sequences divorced from direct experience.

      - Memory: humans have persistent, multi-scale memory (episodic, procedural, etc.) that integrates over a lifetime. LLM “memory” is either weights (static) or context (ephemeral).

      - Agency: brains are part of systems that generate their own goals and act on the world. LLMs optimize a fixed objective (next-token prediction) and don’t have endogenous drives.
