
Comment by starbugs

1 year ago

Something in the mind.

(Didn't make that up. It's one of the definitions in Merriam-Webster: https://www.merriam-webster.com/dictionary/thought)

Way to cherry-pick the one definition that fits your argument and ignore all the other definitions which kinda fit what this model is doing.

  • > Way to cherry-pick the one definition that fits your argument and ignore all the other definitions which kinda fit what this model is doing.

    There is no definition on the linked page that isn't either self-referential or even less applicable in the context of transformer-based LLMs.

    * "an individual act or product of thinking": This is self-referential.

    * "a developed intention or plan": The model doesn't have an intention or plan - it predicts the probability of text based on a user prompt mixed with corporate-inflicted bias.

    * "the intellectual product or the organized views and principles of a period, place, group, or individual": GPTs are not periods, places, groups or individuals. I don't doubt that the input to the model is based on thought, but this is not the same as the model thinking. Maybe this is where your confusion originates from?

    Bonus: I asked GPT-4o "Can you think?" (sorry, I won't pay for o1). It was trained to say of itself that it can't think:

    "I can process information, analyze patterns, and provide responses based on data and logical reasoning. However, I don't "think" in the human sense—I don't have consciousness, emotions, or subjective experiences. My abilities are limited to generating text, solving problems, and assisting with tasks based on the vast amount of information I've been trained on.

    In essence, I simulate reasoning, but I don't have awareness or independent thought."
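    (For what it's worth, here's a hedged sketch of how one might reproduce that question programmatically with the OpenAI Python client, assuming an API key is configured in the environment; the exact wording of the reply will vary between runs.)

    ```python
    # Reproduce the "Can you think?" question via the OpenAI API.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Can you think?"}],
    )
    print(response.choices[0].message.content)
    ```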

    I would add that I strongly doubt that it is capable of even simulating "reasoning" as is claimed by the model "authors" (not sure if we can say they are authors since most of the model isn't their IP). And I can prove that the models up to 4o aren't generally able to solve problems.

    The question really is whether a group of people is attempting to anthropomorphize a clever matrix processor to maximize hype and sales. You'll have to answer that one for yourself.

    • What does being self-referential have to do with anything? These LLMs have proven they can "talk about themselves".

      > an individual act or product of thinking

      Emphasis on "product of thinking". Though you'll probably get all upset by the use of the word "thinking". However, people have applied the word "thinking" to computers for decades. When a computer is busy or loading, they might say "it's thinking."

      > a developed intention or plan

      You could certainly ask this model to write up a plan for something.

      > reasoning power

      Whether you like it or not, these LLMs do have some limited ability to reason. It's far from human-level reasoning, and they VERY frequently make mistakes, hallucinate, and misunderstand, but these models have proven they can reason about things they weren't specifically trained on. For example, I remember seeing someone who made up a new programming language that had never existed before, and they were able to discuss it with an LLM.

      No, they're not conscious. No, they don't have minds. But we need to rethink what it means for something to be "intelligent", or what it means for something to "reason", in a way that doesn't require a conscious mind.

      For the record, I find LLM technology fascinating, but I also see how flawed it is, how overhyped it is, that it is mostly a stochastic parrot, and that currently its greatest use is as a grand-scale bullshit misinformation generator. I use ChatGPT sparingly, only when I'm confident it may actually give me an accurate answer. I'm not here to praise chatbots or anything, but I also don't have a blind hatred for the technology, nor do I immediately reject everything labeled as "AI".
