Comment by cmenge

6 months ago

I kinda agree with both of you. It might be a required abstraction, but it's a leaky one.

Long before LLMs, I would talk about classes / functions / modules like "it then does this, decides the epsilon is too low, chops it up and adds it to the list".
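
Concretely, it was never more than something like this toy sketch (the function and every name in it are made up purely to illustrate; the "decision" is just a branch, the "chopping" a slice, the "adding" an append):

    def process(segment, epsilon, results, chunk_size=8):
        if epsilon < 1e-3:                           # "decides the epsilon is too low"
            chunks = [segment[i:i + chunk_size]      # "chops it up"
                      for i in range(0, len(segment), chunk_size)]
            results.extend(chunks)                   # "adds it to the list"
        return results

    parts = process(list(range(20)), epsilon=1e-4, results=[])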

The difference, I guess, is that it was only to a technical crowd, and nobody would mistake this for anything it wasn't. Everybody knew that "it" didn't "decide" anything.

With AI being so mainstream and the math being much more elusive than a simple if..then, I guess it's just too easy to take this simple speaking convention at face value.

EDIT: some clarifications / wording

Agreeing with you, this is a "can a submarine swim" problem IMO. We need a new word for what LLMs are doing. Calling it "thinking" is stretching the word to breaking point, but "selecting the next word based on a complex statistical model" doesn't begin to capture what they're capable of.

Maybe it's cog-nition (emphasis on the cog).

  • What does a submarine do? Submarine? I suppose you "drive" a submarine, which gets at the idea: submarines don't swim because ultimately they are "driven"? I guess the issue is that we don't make up a new word for what submarines do; we just don't use human words.

    I think the above poster gets a little distracted by suggesting the models are creative, which is itself disputed. Perhaps a better term, like above, would be to just use "model". They are models, after all. We don't make up a new portmanteau for submarines. They float, or drive, or submarine around.

    So maybe an LLM doesn't "write" a poem, but instead "models a poem", which might indeed take away a little of the sketchy magic and fake humanness they tend to be imbued with.

    • Depends on whether you are talking about an LLM or to the LLM. Talking to the LLM, it would not understand that "model a poem" means to write a poem. Well, it will probably guess right in this case, but if you go too far out of band it won't understand you. The hard problem today is rewriting out-of-band tasks to be in band, and that requires anthropomorphizing.

    • A submarine is propelled by a propeller and helmed by a controller (usually a human).

      It would be swimming if it were propelled by drag (well, technically a propeller also uses drag via thrust, but you get the point). Imagine a submarine with a fish tail.

      Likewise we can probably find an apt description in our current vocabulary to fittingly describe what LLMs do.

    • I really like that, I think it has the right amount of distance. They don't write, they model writing.

      We're very used to "all models are wrong, some are useful", "the map is not the territory", etc.

  • > this is a "can a submarine swim" problem IMO. We need a new word for what LLMs are doing.

    Why?

    A plane is not a fly and does not stay aloft like a fly, yet we describe what it does as flying despite the fact that it does not flap its wings. What are the downsides we encounter that are caused by using the word “fly” to describe a plane travelling through the air?

    • For what it's worth, in my language the motion of birds and the motion of aircraft _are_ two different words.

    • > A plane is not a fly and does not stay aloft like a fly, yet we describe what it does as flying despite the fact that it does not flap its wings.

      Flying doesn't mean flapping, and the word has a long history of being used to describe inanimate objects moving through the air.

      "A rock flies through the window, shattering it and spilling shards everywhere" - see?

      OTOH, we have never used the word "swim" in the same way - "The rock hit the surface and swam to the bottom" is wrong!

  • A machine that can imitate the products of thought is not the same as a machine that thinks.

    All imitations require analogous mechanisms, but that is the extent of their similarity: syntax. Thinking requires networks of billions of neurons, and beyond that, words can never exist on a plane because they do not belong to a plane. Words can only be stored on a plane; they are not useful on a plane.

    Because of this, LLMs have the potential to discover new aspects and implications of language that will rarely be useful to us, because language is not useful within a computer; it is useful in the world.

    It's like seeing loosely related patterns in a picture and then continuing to derive from those patterns, which are real, but only loosely related.

    LLMs are not intelligence, but it's fine that we use that word to describe them.

  • It will help significantly to realize that the only thinking happening is when the human looks at the output and attempts to verify whether it is congruent with reality.

    The rest of the time it’s generating content.

  • It's more like muscle memory than cognition. So maybe "procedural memory", but that isn't catchy.

    • They certainly do act like a thing that has a very strong "System 1" but no "System 2" (per Thinking, Fast and Slow).

  • This is a total non-problem that has been invented by people so they have something new and exciting to be pedantic about.

    When we need to speak precisely about a model and how it works, we have a formal language (mathematics) which allows us to be absolutely specific. When we need to empirically observe how the model behaves, we have a completely precise method of doing this (running an eval).

    Any other time, we use language in a purposefully intuitive and imprecise way, and that is a deliberate tradeoff which sacrifices precision for expressiveness.

  • > "selecting the next word based on a complex statistical model" doesn't begin to capture what they're capable of.

    I personally find that description perfect. If you want it shorter, you could say that an LLM generates.
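
    For what it's worth, stripped of scale, that description reduces to something like the toy sketch below. (This is not a real LLM implementation; "model" here is a stand-in that just has to return a score for every word in the vocabulary.)

      import math, random

      def next_word(context, model):
          scores = model(context)                      # {word: score} over the vocabulary
          total = sum(math.exp(s) for s in scores.values())
          probs = {w: math.exp(s) / total for w, s in scores.items()}   # softmax
          return random.choices(list(probs), weights=list(probs.values()))[0]

    Repeated word after word, that's the whole generation loop, which is why "an LLM generates" is a fair one-word summary.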

We can argue all day about what "think" means and whether an LLM thinks (probably not, IMO), but at least in my head the threshold for "decide" is much lower, so I can perfectly accept that an LLM (or even a class) "decides". I don't have a conflict about that. Yeah, it might not be a decision in the human sense, but it's a decision in the mathematical sense, so I have always meant "decide" literally when I was talking about a piece of code.

It's much more interesting when we are talking about... say... an ant. Does it "decide"? That I have no idea, as it's probably somewhere in between: neither a sentient decision nor a mathematical one.

  • Well, it outputs a chain of thoughts that is later used to produce a better prediction. It produces a chain of thoughts similar to how one would think about a problem out loud. It's more verbose than what you would do, but you always have some ambient context that the LLM lacks.

I mean, you can boil anything down to its building blocks and make it seem like it didn't 'decide' anything. When you as a human decide something, your brain and its neurons just made some connections, with an output signal sent to other parts, resulting in your body 'doing' something.

I don't think LLMs are sentient or any bullshit like that, but I do think people are too quick to write them off before really thinking about how an NN 'knows things' in a way similar to how a human 'knows' things: it is trained and reacts to inputs and outputs. The body is just far more complex.

  • I wasn't talking about knowing (they clearly encode knowledge); I was talking about thinking/reasoning, which is something LLMs do not in fact do, IMO.

    These are very different and knowledge is not intelligence.

    • To me all of those are so vaguely defined that arguing whether an LLM is "really really" doing something is kind of a waste of time.

      It's like we're clinging on to things that make us feel like human cognition is special, so we're saying LLMs aren't "really" doing it, while not defining what it actually is.

> EDIT: some clarifications / wording

This made me think: when will we see LLMs do the same, rereading what they just sent and then editing and correcting their output again? :P