
Comment by ecshafer

9 hours ago

I wonder how long we have until we start solving some truly hard problems with AI. How long until we throw AI at "connect general relativity and quantum physics", give the AI 6 months and a few data centers, and have it pop out a solution?

I think a very long time because part of our limit is experiment.

We need enough experimental results to resolve these theoretical mismatches, and we don't have them; at present we can't even explore that frontier.

Once we have more results at that frontier, we'd build a theory out from there with QFT and GR as two nearly independent limits.

What we'd be asking of the AI is something that we can't expect a human to solve today, even with a lifetime of effort.

It'll take something on par with Newton realising that the heavens and apples are under the same rules. But at least Newton got to hold the apple and only had to imagine he could hold a star.

  • > I think a very long time because part of our limit is experiment.

    Yes, maybe. But if you are smarter, you can think up better experiments that you can actually do. Or re-use data from earlier experiments in novel and clever ways.

  • What prevents us from giving this system access to other real systems that live in physical labs? I don't see much difference between parameterizing and executing a particle accelerator run and invoking some SQL against a provider. It's just JSON on the wire at some level. (A toy sketch of that framing follows this sub-thread.)

    • Nothing, we can give it all the data we have and have it lead experiments.

      But we can not yet experiment at the GR/QFT frontier.

      To do so with a particle accelerator, it would need to be the size of the Milky Way.
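    To make the "just JSON on the wire" framing concrete, here is a minimal sketch. Every endpoint, parameter, and payload below is hypothetical, invented purely for illustration; no real accelerator or database API is being described.

    ```python
    import json
    import urllib.request


    def post_json(url: str, payload: dict) -> dict:
        """POST a JSON payload and return the decoded JSON response."""
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)


    # Hypothetical: schedule a beam run by parameterizing it as JSON.
    run = post_json("https://accelerator.example/api/runs", {
        "beam_energy_gev": 6800,
        "bunches": 2748,
        "detector": "example-detector",
    })

    # Hypothetical: query a data provider -- the same shape of interaction.
    rows = post_json("https://warehouse.example/api/query", {
        "sql": "SELECT run_id, n_events FROM runs WHERE run_id = :id",
        "params": {"id": run.get("run_id")},
    })
    ```

    From the caller's point of view both calls are the same kind of object: a JSON document going out and a JSON document coming back.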

  • The question is, if you trained an LLM on everything up until 1904, could it come up with E=mc² or not?

    • In 1900 Henri Poincaré wrote that radiation (light) has an effective mass given by E/c^2.

      So it really isn't far-fetched. What intrigues me more is: if it were capable of it, would our conservative-minded Victorian scientists have RLHF'd that kind of thing out of it?
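      For what it's worth, the step from Poincaré's observation to the famous formula is just a rearrangement; the real work Einstein supplied in 1905 was the physical argument for why it holds in general. A minimal sketch of the algebra:

      ```latex
      % Poincaré (1900): radiation of energy E behaves as if it carries mass
      m_{\mathrm{eff}} = \frac{E}{c^{2}}
      \quad\Longleftrightarrow\quad
      E = m_{\mathrm{eff}}\, c^{2}
      ```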

Hold your horses, that’s a long way off. The best math AI tool we currently have, Aletheia, was only able to solve 13 of the 700 open Erdős problems it attempted, and only 4 of those were solved autonomously: https://arxiv.org/html/2601.22401v3

Clearly, these models still struggle with novel problems.

  • > Clearly, these models still struggle with novel problems.

    Do they struggle with novel problems more or less than humans?

If AGI ever comes, then maybe. Currently, AI is only a statistical machine, and solutions like this are based purely on distribution, with no logic or actual intelligence.

  • I swear that AI could independently develop a cure for cancer and people would still say that it's not actually intelligent, just matrix multiplications giving a statistically probable answer!

    LLMs are at least designed to be intelligent. Our monkey brains have much less reason to be intelligent, since we only evolved to survive nature, not to understand it.

    We are at this moment extremely deep into what most people would have considered actual artificial intelligence a mere 15 years ago. We're not quite at human levels of intelligence, but it's close.

    • >AI could independently develop a cure for cancer

      All the answers to all your questions are contained in randomness. If you have a random sentence generator, there is a chance, every time it is invoked, that it will output the answer to this question.

      But that does not actually make it intelligent, does it? (A back-of-the-envelope sketch after this comment shows just how small that chance is.)

      4 replies →
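      As a back-of-the-envelope illustration of the point above; the alphabet size and answer length here are arbitrary assumptions chosen only to show the order of magnitude involved.

      ```python
      import math

      # Chance that a uniform random character generator emits one specific
      # N-character answer in a single invocation.
      ALPHABET_SIZE = 27     # 26 lowercase letters plus space, for simplicity
      ANSWER_LENGTH = 100    # a short, 100-character "answer"

      p = (1 / ALPHABET_SIZE) ** ANSWER_LENGTH
      print(f"probability per invocation ~ 10^{math.log10(p):.0f}")
      # prints: probability per invocation ~ 10^-143
      ```

      The answer is "in there" in principle, but the chance of ever drawing it is negligible; that is the sense in which randomness containing the answer does not amount to intelligence.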

    • Last week I put "was val kilmer in heat" into the search box on my browser. The AI answer came back with "No, Val Kilmer was not in heat. Val Kilmer played Chris Shiherlis in the movie Heat but the film did not indicate that he was pregnant or in heat. His performance was nuanced and skilled and represents a high point of the film." I was not curious about whether he was pregnant.

      We are not only not close to human levels of intelligence, we are not even at dog, cat, or mouse levels of intelligence. We are not actually at any level of intelligence. Devices that produce text, images, or code do not demonstrate intelligence any more than a printer producing pages of beautiful art demonstrates intelligence.

      6 replies →

    • That's wrong. Humans evolved big brains so they could better understand their environment and use it to their advantage.

      I still see AI making silly mistakes. I'd rather think for myself than waste time on something that only remembers data and doesn't even understand it.

      Reasoning in AI is only about finding contradictions between its "thoughts", not actually understanding them.

      2 replies →

  • It only took 4 years, but it appears that this view is finally dying out on HN. I would advise everyone who found this viewpoint compelling to think about how those same blinders might be affecting how you imagine the future will look.

  • I don't even think that's the issue.

    The issue to my mind is a lack of data at the meeting of QFT/GR.

    After all, few humans historically have been capable of the initial true leap between ontologies. But humans are pretty smart, so we can't say it is a requirement for AGI.

    • When it comes to revolutionary/unsolved subjects, there will never be enough data. That's why it's revolutionary/unsolved.

    • Maybe.

      “The laws of nature should be expressed in beautiful equations.”

      - Paul Dirac

      “It is, indeed, an incredible fact that what the human mind, at its deepest and most profound, perceives as beautiful finds its realisation in external nature. What is intelligible is also beautiful. We may well ask: how does it happen that beauty in the exact sciences becomes recognizable even before it is understood in detail and before it can be rationally demonstrated? In what does this power of illumination consist?”

      - Subrahmanyan Chandrasekhar

      “I often follow Plato’s strategy, proposing objects of mathematical beauty as models for Nature.”

      “It was beauty and symmetry that guided Maxwell and his followers.”

      - Frank Wilczek

      “Beauty is bound up with symmetry.”

      - Hermann Weyl

      "Still twice in the history of exact natural science has this shining-up of the great interconnection become the decisive signal for significant progress. I am thinking here of two events in the physics of our century: the rise of the theory of relativity and that of the quantum theory. In both cases, after yearlong unsuccessful striving for understanding, a bewildering abundance of details was almost suddenly ordered. This took place when an interconnection emerged which, thought largely unvisualizable, was finally simple in its substance. It convinced through its compactness and abstract beauty – it convinced all those who can understand and speak such an abstract language."

      - Werner Heisenberg

      Maybe (just maybe) these things (whatever you want to call them) will (somehow) gain access to some "compact", beautiful, "largely unvisualizable" "interconnection" which will be the self-evident solution. And if they do, many will be sure to label it a statistical accident from a stochastic parrot. And they'll be right, for some definitions of "statistical", "accident", "stochastic", and "parrot".

  • Did you read the linked paper? Claude out-reasoned humans on a challenging (or at least, unsolved) math problem.

    • "humans"

      Donald Knuth is an extremal outlier human and the problem is squarely in his field of expertise.

      Claude, guided by Filip Stappers, a friend of Knuth, solved a problem that Knuth and Stappers had been working on for several weeks. Unfortunately, it doesn't seem (from my quick scan) to have been stated how long (or how many tokens or $) it took for Claude + Stappers to complete the proof.

      In response, Knuth said: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."

      Seems like good advice. From reading elsewhere in this comment section, the goalposts seem to be approaching the infrared and will soon disappear entirely, given the rate at which they are receding with each new achievement.

      4 replies →

Connecting them is easy: one is the math of the exchange, the other the math of the state machine.

A better question might be why no one is paying more attention to Barandes at Harvard. He's been publishing an answer to that question for a while: if you stop trying to smuggle a Markovian embedding into a non-Markovian process, you stop getting weird things like infinities at boundaries that can't be worked out from the current position alone.

But you could just dump a prompt into an LLM, pull the handle a few dozen times, and see what pops out too. Maybe whip up a Claw skill or two.

Unconstrained solution space exploration is surely the way to solve the hard problems

Ask those Millennium Prize guys how well that’s working out :)

Constraint engineering is all software development has ever been, or did we forget how entropy works? Someone should remind the folks chasing P=NP that the observer might need a pen to write down his answers, or are we smuggling more things in for free that change the entire game? As soon as the witness's locations cost something, our poor little guy can't keep walking that hypercube forever. Can he?

Maybe 6 months and a few data centers will do it ;)