
Comment by yusina

9 months ago

Um, moving the goalposts?

The claim was LLMs understand things.

The counter was, nope, they don't. They can fake it well though.

Your argument now is: well, humans also often fake it. Kinda implying that makes it OK to claim that LLMs have understanding?

They may outclass people in a bunch of things. That's great! My pocket calculator 20 years ago did too, and that's also great. Neither understands what it's doing, though.

It's fun to talk about, but personally I think the whole "understanding" debate is a red herring. IMO, what we actually care about when we talk about intelligence is the capacity and depth of second-order thinking, regardless of the underlying mechanism. The key question isn't "do LLMs understand?" but "can LLMs engage in second-order thinking?" The answer seems to be yes: they can reason about reasoning, plan their approaches, critique their own outputs, and adapt their strategies. o1 has shown that with RL and reasoning tokens you can get this in a single system.

Our brains, though, have multiple systems we can control and combine in different ways at any given moment: emotions, feelings, and thoughts layered into "user space" on top of three core systems (input, memory, output). The nuance is that, for various reasons (nature + nurture), different humans appear to have varying levels of meta-control over these reasoning systems.

Why are you pretending to be participating in a debate? You mention things like "moving the goalposts", "counter[arguments]", and "arguments", as if you did anything more than just assert your opinion in the first place.

This is what you wrote:

> LLMs don't understand.

That's it. An assertion of opinion with nothing else included. I understand it sucks when people feel otherwise, but that's just kinda how this goes. And before you bring up how there were more sentences in your comment, I'd say they are squarely irrelevant, but sure, let's review those too:

> It's mind-boggling to me that large parts of the tech industry think that.

This is just a personal report of your own feelings. Zero argumentative value.

> Don't ascribe to them what they don't have.

A call to action, combined with the same assertion of opinion as before, just rehashed. Again, zero argumentative value.

> They are fantastic at faking understanding.

Opinion, layered on top of the previous assertion of opinion. No value added.

> Don't get me wrong, for many tasks, that's good enough.

More opinion. Still no arguments or verifiable facts presented or referenced. Also a call to action.

> But there is a fundamental limit to what all this can do.

Opinion, and a vague one at that. Still nothing.

> Don't get fooled into believing there isn't.

Call to action + assertion of opinion again. Nope, still nothing.

It's pretty much the type of comment I wish would just get magically filtered out before it ever reached me. Zero substance, maximum emotion, and plenty of opportunities for people to misread your opinions as anything more than that.

Even within your own system of opinions, you provide zero additional clarification of why you think what you think. There's literally nothing to counter, as strictly speaking you never actually claimed anything. You just asserted your opinion, all on its lonesome.

This is no way to discuss anything, let alone something you or others likely feel strongly about. I've had more engaging, higher quality, and generally more fruitful debates with the models you say don't understand, than anyone here so far could have possibly had with you. Please reconsider.

  • > higher quality, and generally more fruitful debates with the models you say don't understand

    My favorite thing about LLMs is that they can convincingly tell me why I'm wrong or how I could think about things differently, not for ideas on the order of sentences and paragraphs, but on the order of pages.

    My second favorite thing is that they are amazingly good at deconstructing manipulative language and power tactics. They are scary good at developing manipulation strategies and inferring believable processes for achieving complex goals.

    • Had some success with that myself as well. Also found out about Claimify [0] recently; I should really get myself together and get a browser extension going one of these days. I think the quantized gemma3 models should be good enough for this, so it could remain all local too (rough sketch of the local half below).

      [0] https://youtu.be/WTs-Ipt0k-M
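
      Not Claimify itself, but a minimal sketch of what the local half could look like, assuming the ollama Python client and a quantized Gemma 3 pulled locally (the model tag, prompt, and function name are my own stand-ins, not anything from the video):

          # Claim-extraction sketch; assumes `pip install ollama`, a running
          # Ollama server, and `ollama pull gemma3:4b` (any quantized
          # Gemma 3 tag would do; this one is just an example).
          import ollama

          PROMPT = """Extract every verifiable factual claim from the text below.
          Return one claim per line, rewritten to stand alone without context.

          Text:
          {text}"""

          def extract_claims(text: str) -> list[str]:
              resp = ollama.chat(
                  model="gemma3:4b",
                  messages=[{"role": "user", "content": PROMPT.format(text=text)}],
              )
              # One claim per non-empty output line.
              return [ln.strip() for ln in resp.message.content.splitlines() if ln.strip()]

          print(extract_claims("The Eiffel Tower opened in 1889 and is 330 m tall."))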

  • So, it is your opinion that the mere expression of opinion "without anything else" is not allowed in a discussion?

    And if that is so, didn't you also "just" express an opinion? Would your own contribution to the discussion pass your own test?

    You might have overlooked that I provided extensive arguments all around in this thread. Please reconsider.

    • > So, it is your opinion that the mere expression of opinion "without anything else" is not allowed in a discussion?

      This is not what I said, no: I said that asserting your opinion over others' and then suddenly pretending to be in a debate is "not allowed" (read: is no way to have a proper discussion).

      A mere expression of opinion would have been like this:

      > [I believe] LLMs don't understand.

      And sure, having to stick an explicit "I think / I believe" everywhere is annoying. But it became necessary when everything else you had to say kept omitting this magic phrase, and its absence became clearly intentional when you started talking as if you had made actual arguments of your own. Merely expressing your opinion is not what you did, even on a charitable reading. That's my problem.

      > Would your own contribution to the discussion pass your own test?

      And so yes, I believe it does.

      > You might have overlooked that I provided extensive arguments all around in this thread. Please reconsider.

      I did consider this. It cannot be established that the person whose comment you took a whole lot of issue with also considered those though, so why would I do so? And so, I didn't, and will not either. Should I change my mind, you'll see me in those subthreads later.
