Comment by thesmtsolver2

9 hours ago

Remember when people thought multiplying numbers, remembering a large number of facts, and being good at rote calculations was intelligence?

Some people think that multiplying numbers, remembering a large number of facts, and being good at calculations is intelligence.

Most intelligent people do not think that.

Eventually, we will arrive at the same conclusion for what LLMs are doing now.

I've had a similar notion that Time() is a necessary test function. Maybe it's because of the limitations of human cognition. (We have biases and blind spots, and human intelligence itself is erratic.)

I find it's helpful to avoid conflating the following three topics:

/1/ Is the tool useful?

/2/ At scale, what is the economic opportunity and social/environmental impact?

/3/ Is the tool intelligent?

Casual observation suggests that most people agree on /1/. An LLM can be a useful tool. (Present case: someone found a novel approach to a proof.) So are pocket calculators, personal computers, and portable telephones. None of these tools confers intelligence, although these tools may be used adeptly and intelligently.

For /2/, any level of observation suggests that LLMs offer a notable opportunity and have a social/environmental impact. (Present case: students benefitted in their studies.) A better understanding comes with Time() ... our species is just not good at preparing for risks at scale. The other challenge is that competing interests may see economic opportunities that don't align with the social/environmental Good.

Topic /3/ is of course the source of energetic, contentious debate. Any claim of intelligence for a tool has always had a limited application. Even a complex tool like a computer, a modern aircraft, or a guided missile is not "intelligent". These tools are meant to be operated by educated/trained personnel. IBM's Deep Blue and Watson made headlines -- but was defeating humans at games proof of Intelligence?

On this particular point, we should worry seriously about conferring trust and confidence on stochastic software in any context where we expect humans to act responsibly and be fully accountable. No tool, no software system, no corporation has ever provided a guarantee that harm won't ensue. Instead, they hire very smart lawyers.

Remember when people thought solving Erdos problems required intelligence? Is there anything an LLM could ever do that would count as intelligence? Surely the trend has to break at some point; if so, what would be the thing that crosses the line into real intelligence?

  • > Remember when people thought solving Erdos problems required intelligence? Is there anything an LLM could ever do that would count as intelligence?

    Hah. It reminds me of this great quote, from the '80s:

    > There is a related “Theorem” about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of “real thinking”. The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed. This “Theorem” was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: “AI is whatever hasn’t been done yet.”

    We are seeing this right now in the comments. 50 years later, people are still doing this! Oh, this was solved, but it was trivial, of course this isn't real intelligence.

    • That is a “gotcha” born of either ignorance (nothing wrong with that, we’re all ignorant of something) or bad faith. Definitions shift as we learn more. Darwin’s definition of life is not the same as Descartes’ or Plato’s or anyone in between or since because we learn and evolve our thinking.

      Are you also going to argue that definitions of life from before we even learned of microscopic or single-cell organisms are correct, and that the definitions we use today are wrong? That they are shifting goal posts? That “centuries later, people are still doing this”? No, that would be absurd.


  • Well, the famous Turing test was evidently insufficient. All that happened is that the test is dead and nobody ever mentions it anymore. I'm not sure that any other test would fare any better once solved.

  • I've spent a good chunk of time formalising mathematics.

    Doing formalized mathematics is as intelligent as multiplying numbers together.

    The only reason why it's so hard now is that the standard notation is the equivalent of Roman numerals.

    When you start using a sane metalanguage, and not just augmented English, to do proofs, you gain the same increase in capabilities as going from word equations to algebra.
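
    To make that concrete, here's a minimal Lean 4 sketch (a hypothetical example, assuming a recent toolchain where the `omega` tactic is available). Once a statement is written in a formal metalanguage, checking it becomes as rote as long multiplication:

      -- The statement is formal notation; the proof is a linear-arithmetic
      -- decision procedure doing mechanical work, not insight.
      example (n : Nat) : n + n = 2 * n := by
        omega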

    • >the standard notation is the equivalent of Roman numerals.

      But Roman numerals are easy. I was able to use them before 1st grade, and I can't touch any "standard notation" to this day.

  • When will LLM folks realize that automated theorem provers have existed for decades, and that non-ML theorem provers have solved non-trivial Math problems tougher than this Erdos problem?

    Proposing and proving something like Gödel's theorems definitely requires intelligence.

    Solving an already proposed problem is just crunching through a large search space.
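
    A hedged illustration of that last point in Lean 4 (using the standard `decide` tactic): a decidable claim over a finite domain is "proved" by exhaustively evaluating every case -- pure search, no insight required.

      -- A square is never 3 mod 4; `decide` just checks all 100 cases.
      example : ∀ n : Fin 100, n.val * n.val % 4 ≠ 3 := by decide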

    • Automated theorem provers can't prove this problem. Which non-trivial Math problems do you think are tougher than this Erdos problem?

    • So the only intelligent people in history are those who invent new fields of mathematics, got it.

      You can just about make out those goalposts on the surface of the moon with a good telescope at this point.

    • "Hi ChatGPT, propose and prove something radically new in the genre of Gödel's theorem."

      How is this not just another proposed problem (albeit with a search space much larger than an Erdos problem's)?
