Comment by Gazoche

1 month ago

> they don't feel intelligence but rather an attempt at mimicking it

Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".

There is no higher thinking. They were literally built as a mimicry of intelligence.
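
To make the "optimization function" point concrete, here is a toy sketch (my own illustration, not anything from a real LLM) where "return the most probable next word" is reduced to plain counting. Real LLMs use a neural network instead of a count table, but the objective has the same shape:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; "training" is just counting which word
# follows which. A real LLM fits billions of parameters, but the
# objective is the same shape: pick the most probable continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count occurrences of each (prev -> next) pair

def predict_next(word):
    # Return the most frequently observed next word for this context.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Nothing in this process understands anything; it only tabulates and maximizes, which is the mechanical core the parent comment is pointing at.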

> Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".

> There is no higher thinking. They were literally built as a mimicry of intelligence.

Maybe real intelligence is also a big optimization function? The brain isn't magical; there are rules that govern our intelligence, and I wouldn't be terribly surprised if our intelligence in fact turned out to be a kind of returning-the-most-plausible-thoughts. It might well be something else, of course - my point is that "it's not intelligence, it's just predicting the next token" doesn't make sense to me - it could be both!

I don't understand why this point is NOT getting across to so many on HN.

LLMs do not think, understand, reason, reflect, or comprehend, and they never shall. I have commented elsewhere, but this bears repeating.

If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through training the same model. Then, once the model was trained, you could use even more pen and paper to step through the correct prompts and arrive at the answer. All of this would be a completely mechanical process. That really does bear thinking about. It's amazing what results LLMs are able to achieve, but let's not kid ourselves and start throwing around terms like AGI or emergence just yet. Doing so makes a mechanical process seem magical (as do computers in general).
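
To illustrate what "completely mechanical" means here, this is a hedged sketch of my own (a one-parameter model, not anything resembling a real LLM) showing that a training step is nothing but arithmetic you could, in principle, do by hand:

```python
# One training example for a hypothetical one-parameter model y = w * x.
# Every step below is plain multiplication and subtraction; scaling this
# up to billions of parameters changes the bookkeeping, not the kind.
w = 0.0                 # the single model parameter, initialized to zero
x, target = 2.0, 6.0    # we want w * x to approach the target
lr = 0.1                # learning rate

for step in range(100):
    pred = w * x                    # forward pass: one multiplication
    grad = 2 * (pred - target) * x  # derivative of the squared error
    w = w - lr * grad               # update: one subtraction

print(round(w, 3))  # converges to 3.0, since 3.0 * 2.0 == 6.0
```

Gradient-descent training on an LLM is this same loop, repeated over vastly more parameters and examples, with nothing added that isn't arithmetic.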

I should add that it also makes sense why it would work: just look at the volume of human knowledge in the training data. It's the training data - quite literally the mass of mankind's knowledge, genius, logic, inference, language, and intellect - that does the heavy lifting.

  • > If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model.

    But you could make the exact same argument for a human mind? (You could just simulate all those neural interactions with pen and paper.)

    The only way to get out of it is to basically admit magic (or some other metaphysical construct with a different name).

    • > But you could make the exact same argument for a human mind?

      It would be an argument, and you are free to make it. What the human mind is remains an open scientific and philosophical problem that many are working on.

      The point is that LLMs are NOT the same, because we DO know what LLMs are. Please see the myriad of 'write an LLM from scratch' tutorials.

      1 reply →

    • I'm not so sure "a human mind" is the kind of Newtonian clockwork thingamabob you "could just simulate" within the same degree of complexity as the thing you're simulating, at least not without some sacrifices.

  • Can you give examples of how "LLMs do not think, understand, reason, reflect, comprehend and they never shall", or the "completely mechanical process" framing, helps you understand better when LLMs work and when they don't?

    Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well. The "they don't reason" people tend to, in my opinion/experience, underestimate them by a lot, often claiming that they will never be able to do <thing that LLMs have been able to do for a year>.

    To be fair, the "they reason/are conscious" people tend to, in my opinion/experience, overestimate how much an LLM being able to "act" a certain way in a certain situation says about the LLM, or LLMs as a whole ("act" is not a perfect word here; another way of looking at it is that they visit only the coast of a country and conclude that everyone in the country must be a sailor and have a sailing culture).

    • We know what an LLM is; in fact, you can build one from scratch if you like, e.g. https://www.manning.com/books/build-a-large-language-model-f...

      It's an algorithm and a completely mechanical process, one you can quite literally copy time and time again. Unless, of course, you think 'physical' computers have magical powers that a pen-and-paper Turing machine doesn't?

      > Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well.

      My digital thermometer doesn't think. Imbuing LLMs with thought will start leading to some absurd conclusions.

      A cursory read of basic philosophy would help elucidate why casually saying LLMs think, reason, etc. is not good enough.

      What is thinking? What is intelligence? What is consciousness? These questions are difficult to answer; there is NO clear definition. Some things are so hard to define (and people have tried for centuries) that they constitute a problem set unto themselves - consciousness being the prime example; please see the hard problem of consciousness.

      https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

      2 replies →