
Comment by bloomingkales

3 months ago

This thing that people are calling “reasoning” is more like rendering to me, really, or multi-pass rendering. We’re just refining the render; there’s no reasoning involved.

That was succinct and beautifully stated. Thank you for the "Aha!" moment.

  • Hah. You should check out my other comment on how I think we’re obviously in a simulation (remember, we just need to see a good enough render).

    LLMs are changing how I see reality.

How are you defining "reasoning"?

Because I see these sorts of gnostic assertions about LLMs all the time: claims that they "definitely aren't doing <thing we normally apply to meat-brains>", backed only by gesturing at the technical things the model is doing, with no attempt to actually justify the negative assertion.

It often comes across as privileged reasoning, working backwards to justify that of course the machine isn't doing some ineffable thing only meat-brains do.

  • From my other ridiculous comment, as I do entertain simulation theory in my understanding of God:

    Reasoning as we know it could just be a mechanism to fill in gaps in obviously sparse data (we absolutely do not have all the data to render reality accurately, you are seeing an illusion). Go reason about it all you want.

    The LLM doesn’t know anything. We determine what output is right, even if the LLM swears the output is right. We “reason” about it, I guess? Well, in this case the whole “reasoning” process is simply to get an output that looks right, so what is reasoning in our case?

    Let me just go one ridiculous level lower. If I take every frame the Hubble telescope captures and measure, with a simple ruler, the distances between things frame by frame, I can “reason” out some rules of the universe (planetary orbits). In this “reasoning” process, the very basic question of “well, why, and who made this?” immediately arises, so reasoning always leads to the fundamental question of God.

    So, yeah. We reason to see God, because that’s all we’re seeing, everything else is an illusion. Reasoning is inextricably linked to God, so we have to be very open minded when we ask what is this machine doing.

    • Honestly, I was going to nitpick, but this definition scratches an itch in my brain so nicely that I'll just compliment it as beautiful. "We reason to see God", I love it.

      (Also, if I might give a recommendation, you might be the type of person to enjoy Unsong by Scott Alexander https://unsongbook.com/)


Yes.

Before LLMs we had n-gram language models. Many tasks, like speech recognition, worked as beam search over the graph defined by the n-gram language model. You could easily get huge accuracy gains simply by pruning your beam less.

s1 reminds me of this. You can always trade off latency for accuracy. Given that these LLMs are much more complex than good old n-grams, we're just discovering how to make this trade.
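
Roughly, the knob looks like this (a toy sketch, not anyone's actual decoder; expand and score stand in for the n-gram model's successor function and cumulative log-probability):

    import heapq

    def beam_search(start, expand, score, steps, beam_width):
        # Keep the beam_width best partial hypotheses at each step.
        # expand(state) -> iterable of extended hypotheses
        # score(state)  -> cumulative log-probability under the LM
        beam = [start]
        for _ in range(steps):
            candidates = [nxt for state in beam for nxt in expand(state)]
            # Pruning less (a larger beam_width) keeps more hypotheses
            # alive: more latency per step, fewer search errors.
            beam = heapq.nlargest(beam_width, candidates, key=score)
        return max(beam, key=score)

Turning beam_width up is exactly that latency-for-accuracy trade: more work per step, fewer premature prunes.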

  • Let me carry that concept, “learning to do this trade”: it’s a new trade.

    I don’t believe computer science has the algorithms to handle this new paradigm. Everything was about sequential, deterministic outputs and clever ways to compute them fast; those tools are of little use here. We need new thinkers on how not to think sequentially, or how not to think about the universe in such a small way.

    Verifying input/output pairs is the old way. We need to understand differently going forward.

We could see it the other way around: what we call "reasoning" may actually be some kind of multipass rendering, whether it is performed by computers or by human brains.

That is related to multistage/hierarchical/coarse-to-fine optimization, which is a pretty good way to find the global optimum in many problem domains.
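
As a toy illustration (my own sketch, with a made-up objective): scan a coarse grid, then repeatedly zoom in around the best point, each level refining the previous pass's rough "render":

    import math

    def coarse_to_fine_min(f, lo, hi, levels=4, points=11):
        # Minimize f on [lo, hi] by successively finer grid scans.
        for _ in range(levels):
            step = (hi - lo) / (points - 1)
            xs = [lo + i * step for i in range(points)]
            best = min(xs, key=f)
            # Zoom in around the current best point for the next,
            # finer pass.
            lo, hi = best - step, best + step
        return best

    # A bumpy objective where a single fine scan of the whole range
    # would waste most of its evaluations.
    f = lambda x: (x - 2.0) ** 2 + 0.1 * math.sin(25 * x)
    print(coarse_to_fine_min(f, -10.0, 10.0))

Each coarse pass can of course lock into the wrong basin; the bet is that the coarse structure points at the right one.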

"...there’s no reasoning involved...wait, could I just be succumbing to my heuristic intuitions of what is (seems to be) true....let's reconsider using System 2 thinking..."

  • Or there is no objective reality (well, there isn’t; check out the study), and reality is just a rendering of the few state variables that keep track of your simple life.

    A little context about you:

    - person

    - has hands, reads HN

    These few state variables are enough to generate a believable enough frame in your rendering.

    If the rendering doesn’t look believable to you, you modify state variables to make the render more believable, e.g.:

    Context:

    - person

    - with hands

    - incredulous demeanor

    - reading HN

    Now I can render you more accurately based on your “reasoning”, but truly I never needed all that data to see you.

    Reasoning as we know it could just be a mechanism to fill in gaps in obviously sparse data (we absolutely do not have all the data to render reality accurately, you are seeing an illusion). Go reason about it all you want.

    • Is this a clever rhetorical trick to make it appear that your prior claim was correct?

      If not: what am I intended to take away from this? What is its relevance to my comment?
