
Comment by slightwinder

4 days ago

> It won't solve an original problem for which it has no prior context to "complete" an approximated solution with.

Neither can humans. We also just brute force "autocompletion" with our learned knowledge and combine it into new parts, which we then add to our learned knowledge to deepen the process. We are just much, much better at this than AI, after some decades of training.

And I'm not saying that AI is fully there yet and has solved "thinking". IMHO it's more "pre-thinking" or proto-intelligence. The dots are there, but they are not yet merging to form the real picture.

> It does not actually add 1+2 when you ask it to do so. it does not distinguish 1 from 2 as discrete units in an addition operation.

Neither can a toddler nor an animal. The level of ability is irrelevant for evaluating its foundation.

> Neither can humans. We also just brute force "autocompletion"

I have to disagree here. When you are tasked with dividing 2 big numbers, you most certainly don't "autocomplete" (in the sense of finding the most probable next tokens, which is what an LLM does); rather, you go through a set of steps you have learned. Same with the strawberry example: you're not throwing guesses until something statistically likely to be correct sticks.
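
To make the contrast concrete, here is a rough sketch of the kind of learned, step-by-step procedure I mean (schoolbook long division written out as code, purely for illustration):

    # Each quotient digit is derived by an explicit rule, not guessed by likelihood.
    def long_division(dividend: int, divisor: int) -> tuple[int, int]:
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):                       # work left to right
            remainder = remainder * 10 + int(digit)       # "bring down" the next digit
            quotient_digits.append(remainder // divisor)  # how many times the divisor fits
            remainder %= divisor                          # carry the rest forward
        return int("".join(map(str, quotient_digits))), remainder

    print(long_division(987654, 321))  # (3076, 258)

A token predictor can often land on the right answer too, but it isn't executing these steps; it's completing text that looks like the answer.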

  • Humans first recognize the problem, then search through their list of abilities to find the best skill for solving it, thus "autocompleting" their inner shell's command line before they start execution, to stay with that picture. Common AIs today are not much different from this, especially with reasoning modes.

    > you're not throwing guesses until something statistically likely to be correct sticks.

    What do you mean? That's exactly how many humans operate in unknown situations/topics. If you don't know, just throw punches and see what works. Of course, not everyone is ignorant enough to be vocal about this in every situation.

  • > I have to disagree here. When you are tasked with dividing 2 big numbers, you most certainly don't "autocomplete" (in the sense of finding the most probable next tokens, which is what an LLM does); rather, you go through a set of steps you have learned.

    Why do you think this is the part that requires intelligence, rather than a more intuitive process? After all, we have had machines that can do this mechanically for well over a hundred years.

    There is a whole category of critiques of AI of this type: "Humans don't think this way, they mechanically follow an algorithm/logic", but computers have been able to mechanically follow algorithms and perform logic from the beginning! That isn't thinking!

    • Good points - mechanically just following algorithms isn't thinking, and neither is "predicting the next tokens".

      But would a combination of the two be close to what we define as thinking, though?

Humans, and even animals, track different "variables" or "entities", distinct things with meaning and logical properties, and then apply some logical system to those properties to compute various outputs. LLMs see everything as one thing: in the case of chat-completion models, they're completing text; in the case of image generation, they're completing an image.

Look at it this way: two students get 100% on an exam. One learned which multiple-choice options are most likely to be correct based on how the question is worded; they have no understanding of the topics at hand, and they're not performing any sort of topic-specific reasoning. They're just good at guessing the right option. The second student actually understood the topics, reasoned, and calculated, and that's how they aced the exam.

I recently read about a 3-4 year old who impressed their teacher by reading a storybook perfectly, like an adult. It turns out their parent had read it to them so many times that they could predict, based on page turns and timing, the exact words that needed to be spoken. The child didn't know what an alphabet, a word, etc. was; they had just gotten that good at predicting the next sequence.

That's the difference here.

  • I'd say they are all doing the same thing, just in different domains and at different levels of quality. "Understanding the topic" only means they have specialized, deeper contextualized information. But in the end, that student also just autocompletes their memorized data, with the exception that some of that knowledge might trigger a program they execute to insert the result into their completion.

    The actual work is in gaining the knowledge and programs, not in accessing and executing them. And how they operate, and on which data, variables, objects, worldview, or whatever you call it, might make a difference in quality and building speed, but not to the process in general.

    • > only means they have specialized, deeper contextualized information

      No, LLMs can have that contextualized information. Understanding in a reasoning sense means classifying the thing and developing a deterministic algorithm to process it. If you don't have a deterministic algorithm to process it, it isn't understanding. LLMs learn to approximate; we do that too, but then we develop algorithms to process input and generate output using a predefined logical process.

      A sorting algorithm is a good example when you compare it with an LLM sorting a list: both may produce the correct outcome, but the sorting algorithm "understood" the logic, will follow that specific logic, and will have consistent performance.
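
      For instance, a rough sketch of what I mean by a predefined logical process (illustrative only):

          # Deterministic sorting: the same explicit comparison steps run on every
          # input, so the outcome and the behavior are consistent by construction.
          def insertion_sort(items: list[int]) -> list[int]:
              result = list(items)
              for i in range(1, len(result)):
                  current = result[i]
                  j = i - 1
                  while j >= 0 and result[j] > current:  # shift larger items right
                      result[j + 1] = result[j]
                      j -= 1
                  result[j + 1] = current                # place current in its slot
              return result

          print(insertion_sort([5, 2, 9, 1]))  # [1, 2, 5, 9], by the same steps every time

      An LLM asked to sort that list may well print the same output, but it arrives there by statistical completion rather than by committing to these steps.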


>>> We also just brute force "autocompletion"

Wouldn't be an A.I. discussion without a bizarre, untrue claim that the human brain works identically.

  • There are no true and untrue claims about how the brain works, because we have no idea how it works.

    The reason people give for humans not being auto-complete is "Obviously I am not an autocomplete."

    Meanwhile, people are just a black-box process that outputs words into their head, which they then take credit for and call cognition. We have no idea how that black box, the one that serves up a word when I say "Think of a car brand", works.

    • > because we have no idea how it works

      Flagrantly, ridiculously untrue. We don't know the precise nuts and bolts of the emergence of consciousness and the ability to reason, that's fair, but different structures of the brain have been directly linked to different functions and have been observed in operation: patients are stimulated in various ways while attached machinery reads levels of neuro-activity in the brain, region by region. We know which parts handle our visual acuity and sense of hearing, and, even cooler, we can watch those same regions light up when we use our "mind's eye" to imagine things or engage in self-talk, completely silent speech that nevertheless engages our verbal center, which is also engaged by the act of handwriting and typing.

      In short: no, we don't have the WHOLE answer. But to say that we have no idea is categorically ridiculous.

      As to the notion of LLMs doing similarly: no. They are trained on millions of texts from various sources of humans thinking aloud, and that is what you're seeing: a probabilistic read of millions if not billions of documents, written by humans, selected by the machine to "minimize error." And crucially, it can't minimize it 100%. Whatever philosophical points you'd like to raise about intelligence or thinking, I don't think we would ever be willing to call someone intelligent if they just made something up in response to your query because they think you really want it to be real, even when it isn't. Which points to the overall charade: it wants to LOOK intelligent, while not BEING intelligent, because that's what the engineers who built it wanted it to do.

    • Accepting as true "We don't know how the brain works in a precise way" does not mean that obviously untrue statements about the human brain cannot still be made. Your brain specifically, however, is an electric rat that pulls on levers of flesh while yearning for a taste of God's holiest cheddar. You might reply, "no! that cannot be!", but my statement isn't untrue, so it goes.

    • >>> There are no true and untrue claims about how the brain works, because we have no idea how it works.

      Which is why, if you pick up a neuroscience textbook, it's 400 blank white pages, correct?

      There are different levels of understanding.

      I don't need to know how a TV works to know there aren't little men and women acting out the TV shows when I put them on.

      I don't need to know how the brain works in detail to know that claims that humans are doing the same thing as LLMs are similarly silly.


  • Our output is quite literally the sum of our hardware (genetics) and input (immediate environment and history). For anyone who agrees that free will is nonsense, the debate is already over: we're nothing more than output-generating biological machines.

  • Similar, not identical. Like a bicycle and a car are both vehicles with tires, but are still not identical vessels.

> We also just brute force "autocompletion" with our learned knowledge and combine it into new parts, which we then add to our learned knowledge to deepen the process

You know this because you're a cognitive scientist, right? Or because this is the consensus in the field?

> Neither can a toddler nor an animal. The level of ability is irrelevant for evaluating its foundation.

Its foundation of rational, logical thought that can't process basic math? Even a toddler understands that 2 is more than 1.