
Comment by resident423

10 hours ago

I wonder if the rationalizations people come up with for why this isn't real intelligence will be as creative as ChatGPT's solution.

Remember when people thought multiplying numbers, remembering a large number of facts, and being good at rote calculations was intelligence?

Some people think that multiplying numbers, remembering a large number of facts, and being good at calculations is intelligence.

Most intelligent people do not think that.

Eventually, we will arrive at the same conclusion for what LLMs are doing now.

  • I've had a similar notion that Time() is a necessary test function. Maybe it's because of the limitations of human cognition. (We have biases and blind-spots and human intelligence itself is erratic.)

    I find it's helpful to avoid conflating the following three topics:

    /1/ Is the tool useful?

    /2/ At scale, what is the economic opportunity and social/environmental impact?

    /3/ Is the tool intelligent?

    Casual observation suggests that most people agree on /1/. An LLM can be a useful tool. (Present case: someone found a novel approach to a proof.) So are pocket calculators, personal computers, and portable telephones. None of these tools confers intelligence, although these tools may be used adeptly and intelligently.

    For /2/, any level of observation suggests that LLMs offer a notable opportunity and have a social/environmental impact. (Present case: students benefitted in their studies.) A better understanding comes with Time() ... our species is just not good at preparing for risks at scale. The other challenge is that competing interests may see economic opportunities that don't align with the social/environmental good.

    Topic /3/ is of course the source of energetic, contentious debate. Any claim of intelligence for a tool has always had a limited application. Even a complex tool like a computer, a modern aircraft, or a guided missile is not "intelligent". These tools are meant to be operated by educated/trained personnel. IBM's Deep Blue and Watson made headlines -- but was defeating humans at games proof of Intelligence?

    On this particular point, we should worry seriously about conferring trust and confidence on stochastic software in any context where we expect humans to act responsibly and be fully accountable. No tool, no software system, no corporation has ever provided a guarantee that harm won't ensue. Instead, they hire very smart lawyers.

  • Remember when people thought solving Erdos problems required intelligence? Is there anything an LLM could ever do that would count as intelligence? Surely the trend has to break at some point; if so, what would be the thing that crosses the line into real intelligence?

    • > Remember when people thought solving Erdos problems required intelligence? Is there anything an LLM could ever do that would count as intelligence?

      Hah. It reminds me of this great quote, from the '80s:

      > There is a related “Theorem” about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of “real thinking”. The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed. This “Theorem” was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: “AI is whatever hasn’t been done yet.”

      We are seeing this right now in the comments. 50 years later, people are still doing this! Oh, this was solved, but it was trivial, of course this isn't real intelligence.


    • Well, the famous Turing test was evidently insufficient. All that happened is that the test is dead and nobody ever mentions it anymore. I'm not sure that any other test would fare any better once solved.

    • I've spent a good chunk of time formalising mathematics.

      Doing formalized mathematics is as intelligent as multiplying numbers together.

      The only reason why it's so hard now is that the standard notation is the equivalent of Roman numerals.

      When you start using a sane metalanguage, and not just augmented English, to do proofs, you gain the same increase in capabilities as going from word equations to algebra.

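For readers who haven't seen the formalised mathematics described in the comment above, here is what a machine-checked proof looks like in Lean 4. This is a toy sketch; `Nat.add_comm` and `Nat.add_succ` are existing library lemmas, restated purely for illustration.

```lean
-- A machine-checked fact: addition of natural numbers is commutative.
-- The kernel verifies the proof term mechanically, like long multiplication.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Proofs can also be built by exhaustive case analysis ("crunching"):
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```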

    • When will LLM folks realize that automated theorem provers have existed for decades, and that non-ML theorem provers have solved non-trivial math problems tougher than this Erdos problem?

      Proposing and proving something like Gödel's theorems definitely requires intelligence.

      Solving an already proposed problem is just crunching through a large search space.

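To make the "crunching through a large search space" point above concrete, here is the simplest possible non-ML prover: a brute-force truth-table checker. This is a toy sketch in Python, not any particular production system.

```python
from itertools import product

def is_tautology(formula, variables):
    """Brute-force 'proof': evaluate the formula under every truth assignment."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False  # found a counterexample
    return True  # exhausted the entire search space

def implies(a, b):
    return (not a) or b

# Peirce's law ((p -> q) -> p) -> p is a classical tautology
peirce = lambda e: implies(implies(implies(e["p"], e["q"]), e["p"]), e["p"])
print(is_tautology(peirce, ["p", "q"]))  # True

# p -> q alone is not a tautology (fails at p=True, q=False)
print(is_tautology(lambda e: implies(e["p"], e["q"]), ["p", "q"]))  # False
```

Real provers (resolution, SMT, tableaux) are vastly cleverer about pruning the space, but the principle of exhaustive search is the same.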

None of it is really from logical thought. The rationalizations don't make any sense, but they haven't for a while. It's an emotional response. Honestly, it's to be expected.

  • It's because HN is not really full of smart people. It's full of people who think they're smart and take pride in the idea that they're pretty intelligent.

    ChatGPT equalizes intelligence. And that is an attack on their identity. It also exposes their ACTUAL intelligence which is to say most of HN is not too smart.

    • > ChatGPT equalizes intelligence

      Yes, I love living in communism too. Imagine if you had to pay money for it or something. The wealthiest people would get unrestricted access to intelligence while the poor none. And the people in the middle would eventually find themselves unable to function without a product they can no longer afford. Chilling, huh? Good thing humans are known for sharing in the benefits of technological progress equally. /s


Proving a negative is a pretty high bar. You also have the problem of defining "real intelligence", which I suspect you can't.

  • Intelligence is Intelligence. It's intelligent because it does intelligent things. If someone feels the need to add a 'real' and 'fake' moniker to it so they can exclude the machine and make themselves feel better (or for whatever reason) then they are the one meant to be doing the defining, and to tell us how it can be tested for. If they can't, then there's no reason to pay attention to any of it. It's the equivalent of nonsensical rambling. At the end of the day, the semantic quibbling won't change anything.

    • > It's intelligent because it does intelligent things.

      Most people would consider someone who can calculate 56863*2446 instantly in their head to be intelligent. Does that mean pocket calculators are intelligent? The result is the same.

      > then they are the one meant to be doing the defining, and to tell us how it can be tested for. If they can't, then there's no reason to pay attention to any of it.

      That is the equivalent of responding to criticism with “can you do better?”. One does not need to be a chef (or even know how to cook) to know when food tastes foul. Similarly, one does not need to have a tight definition of “life” to say a dog is alive but a rock isn’t. Definitions evolve all the time when new information arises, and some (like “art”) we haven’t been able to pin down despite centuries of thinking about it.
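For what it's worth, the multiplication in the pocket-calculator comparison above is something any interpreter reproduces with no understanding at all:

```python
# The product from the comment above; a calculator "does" this
# instantly with zero understanding.
print(56863 * 2446)  # 139086898
```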

LLMs are definitely intelligent - just not general like humans, and very, very jagged (succeeding and failing in head-scratching ways).

Well, it still gets easy problems wrong.

With real general intelligence, you'd expect it to solve problems above a certain difficulty at a good clip.

For one, everything its 'intelligence' knows about solving the problem is contained within the finite context window (the memory buffer) for the particular model and session. Unless the contents of the context window are saved to storage and reloaded later, then, unlike a human, it won't "remember" that it solved the problem or save its work somewhere to be easily referenced later.

  • For one, everything humans' "intelligence" knows about solving the problem is contained within the finite brain size for the particular person and life. Unless the memory contents of the brain are being saved to storage and reloaded later, it won't "remember" that it solved the problem and save its work somewhere to be easily referenced in a later life.

  • There are humans who have memory issues, or full-blown anterograde amnesia.

    • There are humans who can’t read. That doesn’t mean Grammarly is “intelligent”. These things are tools - nothing more, nothing less.

  • What you're describing sounds more like the model lacking awareness than lacking intelligence. Why does it need to know it solved the problem to be intelligent?

    • We say African elephants are intelligent for a number of reasons, one of which is that they remember where sources of water are in very dry conditions and can successfully navigate back to them across relatively large distances. An intelligent being that can't remember its own past is at a significant disadvantage compared to others that can, which is exactly one of the reasons why Alzheimer's patients often require full-time caregivers.


  • As another commenter pointed out, these models are being trained to save and read context from files, so denying them the use of an ability they actually have just makes your claim tautological.
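The save-and-reload pattern referred to above can be sketched in a few lines. Everything here is hypothetical (the file name, the helper functions); it only illustrates persisting a fact outside the finite context window and recalling it in a later session.

```python
import json
from pathlib import Path

# Hypothetical scratch file standing in for the model's external memory.
MEMORY_FILE = Path("session_memory.json")

def save_note(key: str, value: str) -> None:
    """Persist a fact so a later session can recall it."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    notes[key] = value
    MEMORY_FILE.write_text(json.dumps(notes))

def recall(key: str):
    """Reload a fact saved by an earlier session (None if absent)."""
    if not MEMORY_FILE.exists():
        return None
    return json.loads(MEMORY_FILE.read_text()).get(key)

save_note("proof_found", "Erdos problem solved; writeup saved")
print(recall("proof_found"))  # Erdos problem solved; writeup saved
```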

I think one day the VCs will have given the monkeys on typewriters enough money that these kinds of comments can be generated without human intervention.

And how about the creative rationalizations about how statistical text generation is actual intelligence? As if there were any intent or motive behind the words that are generated, or the ability to learn literally anything new after it has been trained on human output.

  • 2022 called, wants this argument back. When you're "statistically generating text" to find zero-day vulnerabilities in hard targets, building Linux kernel modules, assembly-optimizing elliptic curve signature algorithms, and solving arbitrary undergraduate math problems instantaneously --- not to mention apparently solving Erdos problems --- the "statistical text" stuff has stopped being a useful description of what's happening, something closer to "it's made of atoms and obeys the laws of thermodynamics" than it is to "a real boundary condition of what it can accomplish".

    I don't doubt that there are many very real and meaningful limitations of these systems that deserve to be called out. But "text generation" isn't doing that work.

    • But the systems that do that impressive work are no longer just LLMs. Look at the Claude Code leak - it’s a sprawling, redundant maze relying on tools and tests to approximate useful output. The actual LLM is a small portion of the total system. It’s a useful tool, but it’s obviously not truly intelligent - it was hacked together using the near-trillions of dollars AI labs have received for this explicit purpose.


  • Solving open math problems is strong evidence of intelligence, so there's not really any need for rationalization? I don't understand why intelligence would require intent or motive. Isn't intent just the behaviour of making a specific thing happen rather than other things?