Comment by ezst

4 hours ago

> intelligence. Human or LLM doesn't matter.

Being enthusiastic about a technology isn't incompatible with objective scrutiny. Throwing up an ill-defined "intelligence" into the air certainly doesn't help with that.

Where I stand is where measured, fact-driven people (a.k.a. scientists) stand, operating with the knowledge (derived from practical evidence¹) that LLMs have no inherent ability to reason, while making a convincing illusion of it as long as the training data contains the answer.

> Sorry, but I just get the picture that you have no clue of what you're talking about- though most probably you're just in denial.

This isn't a rebuttal. So, what is it? An insult? Surely that won't help make your case stronger.

You call me clueless, but at least I don't have to live with the same cognitive dissonances as you do. To cite just a few:

- "LLMs are intelligent, but when given a trivially impossible task, they happily make stuff up instead of using their `intelligence` to tell you it's impossible"

- "LLMs are intelligent because they can solve complex highly-specific tasks from their training data alone, but when provided with the algorithm extending their reach to generic answers, they are incapable of using their `intelligence` and the supplemented knowledge to generate new answers"

¹: https://arstechnica.com/ai/2025/06/new-apple-study-challenge...

> This isn't a rebuttal.

I don't really think it's possible to convince you. Basically everyone I talk to is using LLMs for work, and in some cases, like mine, I know for a fact that they do produce enormous amounts of value, to the point that I would pay quite some money to keep using them if my company stopped paying for them.

Yes, LLMs have well-known limitations, but they're still a brand-new technology in its very early stages. ChatGPT appeared little more than three years ago, and in that time it went from barely useful autocomplete to autonomously writing whole features. There's already plenty of software that has been 100% coded by LLMs.

"Intelligence", "understanding", "reasoning".. nobody has clear definitions for these terms, but it's a fact that LLMs in many situations act as if they understood questions, problems and context, and provide excellent answers (better than the average human). The most obvious is when you ask an LLM to analyse some original artwork or poem (or some very recent online comic, why not?)- something that can't be in its training data- and they come up with perfectly relevant and insightful analyses and remarks. We don't have an algorithm for that, we don't even begin to understand how those questions can be answered in any "mechanical" sense, and yet it works. This is intelligence.

  • You know what this reminds me of? Language X comes out (e.g., Lisp or Haskell), and people try it, and it's this wonderful, magical experience, and something just "clicks", and they tell everyone how wonderful it is.

    And other people try it - really sincerely try it - and they don't "get it". It doesn't work for them. And those who "get it" tell those who don't that they just need to really try it, and keep trying until they get it. And some people never get it, and are told that they didn't try hard enough (with the implication that they're stupid if they really can't get it).

    But I think that at least part of it is in how people's brains work. People think in different ways. Some languages just work for some people, and really don't work very well for other people. If a language doesn't work for you, it doesn't mean either that it's a bad language or that you're stupid (or just haven't tried). It can just be a bad fit. And that's fine. Find a language that fits you better.

    Well, I wonder if that applies to LLMs, and especially to LLMs doing coding. It's a tool. It has capabilities, and it has limitations. If it works for you, it can really work for you. And if it doesn't, then it doesn't, and that doesn't mean that it's a bad tool, or that you are stupid, or that you haven't tried. It can just be a bad fit for how you think or for what you're trying to do.

    • > You know what this reminds me of? Language X comes out (e.g., Lisp or Haskell), and people try it, and it's this wonderful, magical experience, and something just "clicks", and they tell everyone how wonderful it is.

      I can relate to this. And I can understand that, depending on how and what you code, LLMs might have different value, or even none. Totally understand.

      At the same time... well, let's put it this way. I've been fascinated with programming and computers for decades, and "intelligence", whatever it is, has always been for me the holy grail of what computers could do. I've spent a stupid amount of time thinking about how intelligence works, how a computer program could unpack language, resolve its ambiguities, understand context and nuance, notice patterns that nobody told it were there, etc. Until ten years ago these problems were all essentially unsolved, despite more than half a century of attempts, large human-curated efforts, funny chatbots that produced word salads with vague hints of meaning, and infuriating ones that could pass for stupid teenagers for a couple of minutes provided they selected sufficiently vague answers from a small database... I've seen them all. In 1968's "2001: A Space Odyssey" there's a computer that talks (even if "experts prefer to say that it mimics human intelligence"), and in 2013's "Her" there's another one. In between, in terms of actual results, there's nothing. "Her" is as much science fiction as "2001", with the aggravating factor that in "Her" the AI is presented as a novel consumer product: absurd, as if anything like that were possible without a complete societal disruption.

      All this to say: I can't for the life of me understand people who act blasé when they can just talk to a machine and the machine appears to understand what they mean, doesn't fall for trivial language ambiguities, and will even make some meta-jokes about them if you test it with a well-known example; a machine that can read a never-seen-before comic strip, see what happens in it, read the shaky lettering and correctly explain where the humour lies. You can repeat to yourself a billion times "transformers something-something", but that doesn't change the fact that what you are seeing is intelligence; that's exactly what we have always called intelligence: the ability to make sense of messy inputs, see patterns, see the meanings behind the surface, and communicate back in clear language. Ah, and this technology is only a few years old, little more than three if we count from ChatGPT. These are the first baby steps.

      So it's not working for you right now? Fine. You don't see the step change, the value in general and in perspective? Then we have a problem.