AGI Is Still 30 Years Away – Ege Erdil and Tamay Besiroglu

9 months ago (dwarkesh.com)

Might as well be 10 - 1000 years. Reality is no one knows how long it'll take to get to AGI, because:

1) No one knows what exactly makes humans "intelligent" and therefore 2) No one knows what it would take to achieve AGI

Go back through history and AI / AGI has been a couple of decades away for several decades now.

  • I'm reminded of the old adage: you don't have to be faster than the bear, just faster than the hiker next to you.

    To me, the Ashley Madison hack in 2015 was 'good enough' for AGI.

    No really.

    You somehow managed to get real people to chat with bots and pay to do so. Yes, caveats about cheaters apply here, and yes, those bots are incredibly primitive compared to today.

    But, really, what else do you want out of the bots? Flying cars, cancer cures, frozen irradiated Mars bunkers? We were mostly getting there already. It'll speed things up a bit, sure, but mostly just because we can't be arsed to actually fund research anymore. The bots are just making things cheaper, maybe.

    No, be real. We wanted cold hard cash out of them. And even those crummy catfish bots back in 2015 were doing the job well enough.

    We can debate 'intelligence' until the sun dies out and will still never be satisfied.

    But the reality is that we want money, and if you take that low, terrible, and venal standard as the passing bar, then we've been here for a decade.

    (oh man, just read that back, I think I need to take a day off here, youch!)

    • > You somehow managed to get real people to chat with bots and pay to do so.

      He's_Outta_Line_But_He's_Right.gif

      Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic. They can converse with them, "learn" new things, talk to a computer like they'd talk to a person and get a response back. Then again, these are also people who rely on me for basic technology troubleshooting stuff, so I know that most of this stuff is magic to their eyes.

      That's the problem, as you point out. We're debating a nebulous concept ("intelligence") that's been co-opted by marketers to pump and dump the latest fad tech that's yet to really display significant ROI to anyone except the hypesters and boosters, and isn't rooted in medical, psychological, or societal understanding of the term anymore. A plurality of people are ascribing "intelligence" to spicy autocorrect, worshiping stochastic parrots vomiting markov chains but now with larger context windows and GPUs to crunch larger matrices, powered by fossil fuels and cooled by dwindling freshwater supplies, and trained on the sum total output of humanity but without compensation to anyone who actually made the shit in the first place.

      So yeah. You're dead-on. It's just about bilking folks out of more money they already don't have.

      And Ashley Madison could already do that for pennies on the dollar compared to LLMs. They just couldn't "write code" well enough to "replace" software devs.

    • I think that's another issue with "AGI is 30 years away": the definition of AGI is a bit subjective. Not sure how we can measure how long it'll take to get somewhere when we don't know exactly where that somewhere even is.

    • > But the reality is that we want money

      Only in a symbolic way. Money is just debt. It doesn't mean anything if you can't call the loan and get back what you are owed. On the surface, that means stuff like food, shelter, cars, vacations, etc. But beyond the surface, what we really want is other people who will do anything we please. Power, as we often call it. AGI is, to some, seen as the way to give them "power".

      But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.

  • There are a lot of other things that follow this pattern. 10-30 year predictions are a way to sound confident about something that probably has very low confidence. Not a lot of people will care let alone remember to come back and check.

    On the other hand, there is a clear mandate for people introducing some different way of doing something to overstate the progress and potential importance. It creates FOMO, so it is simply good marketing which interests potential customers, fans, employees, investors, pundits, and even critics (which is more buzz). And growth companies are immense debt vehicles, so creating a sense of FOMO for an increasing pyramid of investors is also valuable for each successive earlier layer. Wish in one hand..

  • That we don't have a single unified explanation doesn't mean that we don't have very good hints, or that we don't have very good understandings of specific components.

    Aside from that the measure really, to me, has to be power efficiency. If you're boiling oceans to make all this work then you've not achieved anything worth having.

    From my calculations the human brain runs on about 400 calories a day. That's an absurdly small amount of energy. This hints at the direction these technologies must move in to be truly competitive with humans.

    • We'll be experiencing extreme social disruption well before we have to worry about the cost-efficiency of strong AI. We don't even need full "AGI" to experience socially momentous change. We might even be on the verge of self driving cars spreading to more cities.

      We don't need very powerful AI to do very powerful things.

    • Note that those are kilocalories, and that is ignoring the energy needed for the circulatory and immune systems, which are somewhat necessary for proper function. Using 2000 kcal per day over 10 hours of thinking gives a consumption of ~200 W.

    • We are very good at generating energy. Even if AI is an order of magnitude less energy efficient, an AI person-equivalent would use ~4 kilowatt-hours/day. At current rates that's about $1. Hardly the limiting factor here, I think. (Rough numbers are sketched at the end of this sub-thread.)

    • Energy efficiency is not really a good target since you can brute force it by distilling classical ANNs to spiking neural networks.
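
      Rough numbers for the energy comparison in this sub-thread (a quick sketch; the electricity price below is an assumption, not something stated above):

        KCAL_TO_JOULE = 4184
        SECONDS_PER_DAY = 86_400

        brain_watts = 400 * KCAL_TO_JOULE / SECONDS_PER_DAY       # ~19 W for the brain alone
        body_watts = 2000 * KCAL_TO_JOULE / SECONDS_PER_DAY       # ~97 W whole-body average
        thinking_watts = 2000 * KCAL_TO_JOULE / (10 * 3600)       # ~232 W if billed against 10 "thinking" hours

        # the "order of magnitude less efficient" AI person-equivalent, running around the clock
        ai_kwh_per_day = 200 * 24 / 1000                          # 4.8 kWh/day
        print(ai_kwh_per_day * 0.15)                              # ~$0.72/day at an assumed $0.15/kWh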

  • Generalized, as a rule I believe is usually true: any prediction made for an event happening more than ten years out is code for that person saying "definitely not in the next few years; beyond that I have no idea", whether they realize it or not.

  • If you look back at predictions of the future in the past in general, then so many of them have just been wrong. Especially during a "hype phase". Perhaps the best example is what people were predicting in 1969 after we landed on the moon: this is just the first step in the colonisation of the moon, Mars, and beyond. etc. etc. We just have to have our tech a bit better.

    It's all very easy to see how that can happen in principle. But turns out actually doing it is a lot harder, and we hit some real hard physical limits. So here we are, still stuck on good ol' earth. Maybe that will change at some point once someone invents an Epstein drive or Warp drive or whatever, but you can't really predict when inventions happen, if ever, so ... who knows.

    Similarly, it's not my impression that AGI is simply a matter of "the current tech, but a bit better". But who knows what will happen or what new thing someone may or may not invent.

Is AGI even important? I believe the next 10 to 15 years will be about Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt is going to make much difference. But they are going to be good enough that there won't be an AI winter, since current AI has already reached escape velocity and actually increases productivity in many areas.

The most intriguing part is whether humanoid factory-worker programming will be made 1,000 to 10,000x more cost-effective with LLMs, effectively ending all human involvement in production. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in its sights. (Likely not.)

  • I think having a real life JARVIS would be super cool and useful, especially if it's plugged into various things and can take action. Yes, also potentially dangerous, but I want to feel like Ironman.

  • I would be thrilled with AI assistive technologies, so long as they improve my capabilities and I can trust that they deliver the right answers. I don't want to second-guess every time I make a query. At minimum, it should tell me how confident it feels in the answer it provides.

  • Depends on what you mean by "important". It's not like it will be a huge loss if we never invent AGI. I suspect we can reach a technological singularity even with limited AI derived from today's LLMs.

    But AGI is important in the sense that it will have a huge impact on the path humanity takes, hopefully for the better.

    • > But AGI is important in the sense that it will have a huge impact on the path humanity takes

      The only difference between AI and AGI is that AI is limited in how many tasks it can carry out (special intelligence), while AGI can handle a much broader range of tasks (general intelligence). If instead of one AGI that can do everything, you have many AIs that, together, can do everything, what's the practical difference?

      AGI is important only in that we are of the belief that it will be easier to implement than many AIs, which appeals to the lazy human.

  • AI winter is relative, and it's more about outlook and point of view than actual state of the field.

  • AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.

    It would suck if AGI were to be developed in the current economic landscape. They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.

    So AGI isn't about tools, it's not about assistants, they would be beings with their own existence.

    But this is not even our discussion to have, that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.

    • > All this talk about "alignment", when applied to actual sentient beings, is just slavery.

      I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.

    • I'm more concerned about the humans in charge of powerful machines who use them to abuse other humans, than ethical concerns about the treatment of machines. The former is a threat today, while the latter can be addressed once this technology is only used for the benefit of all humankind.

    • > AGI is important for the future of humanity.

      says who?

      > Maybe they will have legal personhood some day. Maybe they will be our heirs.

      Hopefully that will never come to pass. It would mean the total failure of humans as a species.

      > They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.

      Good? That's what it's for? There is no point in creating a new sentient life form if you're not going to utilize it. Just burn the whole thing down at that point.

    • Why do you believe AGI is important for the future of humanity? That's probably the most controversial part of your post but you don't even bother to defend it. Just because it features in some significant (but hardly universal) chunk of Sci Fi doesn't mean we need it in order to have a great future, nor do I see any evidence that it would be a net positive to create a whole different form of sentience.

    • Why does AGI necessitate having feelings or consciousness, or the ability to suffer? It seems a bit far to be giving future ultra-advanced calculators legal personhood?

  • I am thinking of designing machines to be used in a flexible manufacturing system and none of them will be humanoid robots. Humanoid robots suck for manufacturing. They're walking on a flat floor so what the heck do they need legs for? To fall over?

    The entire point of the original assembly line was to keep humans standing in the same spot instead of wasting time walking.

My pet peeve: talking about AGI without defining it. There’s no consistent, universally accepted definition. Without that, the discussion may be intellectually entertaining—but ultimately moot.

And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).

There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...)—that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: "When will it outperform 90% of software engineers at writing code?" or "When will all AI development be in the hands of AI?".

  • I like Chollet's definition: something that can quickly learn any skill without any innate prior knowledge or training.

    • I like Chollet's line of thinking.

      Yet, if you take "any" literally, the answer is simple - there will never be one. Not even for practical reasons, but closer to why there isn't "a set of all sets".

      Picking a sensible benchmark is the hard part.

  • >There’s no consistent, universally accepted definition.

    That's because of the "I" part. There is no actual, complete description of intelligence that is accepted across the different practices in the scientific community.

    "Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions"

  • > There’s no consistent, universally accepted definition

    What word or term does?

I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).

It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations but is nothing by comparison to our minds.

Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
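
A minimal illustration of that determinism, using nothing but the standard library:

    import random

    random.seed(12345)
    first = [random.random() for _ in range(3)]

    random.seed(12345)
    second = [random.random() for _ in range(3)]

    assert first == second  # same seed in, same "random" numbers out, in the same order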

Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).

An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.

  • > And from my brief experience on this planet I don't believe that premise.

    A lot of things that humans believed were true due to their brief experience on this planet ended up being false: earth is the center of the universe, heavier objects fall faster than lighter ones, time ticked the same everywhere, species are fixed and unchanging.

    So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.

  • > why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism

    Then you've missed the point of software.

    Software isn't computer science, it's not always about code. It's about solving problems in a way we can control and manufacture.

    If we needed truly random numbers, we could easily use hardware that exploits some physical property, or pull in an observation from an API like the weather. We don't do these things because pseudo-random is good enough, and the other solutions have drawbacks (like requiring an internet connection for API calls). But that doesn't mean software can't solve these problems (a minimal example at the end of this sub-thread).

    • It's not about the random numbers it's about the tree of possibilities having to be defined up front (in software or hardware). That all inputs should be defined and mapped to some output and that this process is predictable and reproducible.

      This makes computers incredibly good at what people are not good at -- predictably doing math correctly, following a procedure, etc.

      But because all of the possibilities of the computer had to be written up as circuitry or software beforehand, its variability of outputs is constrained to what we put into it in the first place (whether that's a seed for randomness or model weights).

      You can get random numbers and feed it into the computer but we call that "fuzzing" which is a search for crashes indicating unhandled input cases and possible bugs or security issues.

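    The minimal example promised above: in practice, "hardware that exploits some physical property" is already one call away, since the OS entropy pool mixes in hardware and environmental noise:

      import os
      import secrets

      key = secrets.token_bytes(32)   # cryptographically secure bytes drawn from the OS entropy pool
      raw = os.urandom(16)            # same pool, lower-level interface
      print(key.hex(), raw.hex())
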
  • > It is science fiction to think that a system like a computer can behave at all like a brain

    It is science fiction to think that a plane could act at all like a bird. Although... it doesn't need to in order to fly

    Intelligence doesn't mean we need to recreate the brain in a computer system. Sentience, maybe; general intelligence, no.

    • BTW, planes were heavily inspired by birds and mimic the core principles of bird flight.

      Mechanically it's different, since human engineering isn't as advanced as nature's, but of course comparing whole-brain function to simple flight is a bit silly.

  • Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?

    Also does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?

    • 1. Computers cannot self-rewire like neurons, which means humans can adapt to pretty much any specific mental task (an unknown, new task) without explicit retraining, which current computers need in order to learn something new

      2. Computers can't do continuous and unsupervised learning, which means computers require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in the environment

  • The universe we know is fundamentally probabilistic, so by extension everything including stars, planets and computers are inherently non-deterministic. But confining our discussion outside of quantum effects and absolute determinism, we do not have a reason to believe that the mind should be anything but deterministic, scientifically at least.

    We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, we know how they are connected, what chemicals flow through them, how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models are essentially composed of interconnected units, where each unit has weights that convert its incoming signals to outgoing signals (a minimal sketch of such a unit follows this sub-thread). The predominant difference is in how these units adjust their weights: where computational models use back propagation and gradient descent, biological models use timing information from voltage changes.

    But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will behave. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes. Even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes which behave like any real earthquake would. If it looks like a duck and quacks like a duck, then what is a duck?

    • Seems to me you are a bit overconfident that "we" (who is "we"?) understand how the brain works. F.ex. how does a neuron actively stretching a tentacle trying to reach other neurons work in your model? Genuine question, I am not looking to make fun of you, it's just that your confidence seems a bit much.

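    As a minimal sketch of the "interconnected units with weights" abstraction mentioned above (an artificial unit only; no claim about biological fidelity):

      import math

      def unit(inputs, weights, bias):
          # one artificial "neuron": weighted sum of incoming signals -> nonlinear outgoing signal
          z = sum(x * w for x, w in zip(inputs, weights)) + bias
          return 1 / (1 + math.exp(-z))  # sigmoid activation

      print(unit([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], 0.1))  # a single output in (0, 1)
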
  • This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.

    It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.

    • Brains are low-frequency, energy-efficient, organic, self-reproducing, asynchronous, self-repairing, and extremely highly connected (thousands of synapses). If AGI is defined as "approximate humans", I think its gonna be a while.

      That said, I don't think computers need to be human to have an emergent intelligence. It can be different in kind if not in degree.

    • I tried to keep my long post short so I cut things. I gestured at it -- there is nothing in a computer we didn't put there.

      Take the same model weights give it the same inputs, get the same outputs. Same with the pseudo-random number generator. And the "same inputs" is especially limited versus what humans are used to.

      What's the machine code of an AGI gonna look like? It makes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest, but really think about the metal. The inside of modern computers is tightly controlled, with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;) machine is going to cut it. Honestly I don't know what will: controlled but not controlled, artificially designed but not deterministic.

      In fact, that we've made a computer as unreliable as a human at reproducing data (ala hallucinating/making s** up) is an achievement itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint from an image, sure. Write my thesis, not so much)

  • If the physics underlying the brain's behavior are deterministic, they can be simulated by software, and so can the brain.

    (and if we assume that non-determinism is randomness, non-deterministic brain could be simulated by software plus an entropy source)

  • What you're mentioning is like the difference between digital vs analog music.

    For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.

    In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.

    You can approximate reality, but it'll never quite be reality.

    I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.

    • Are you familiar with the Nyquist–Shannon sampling theorem?

      If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192kHz, a rate at which many high-resolution files are available for purchase?

      How about the same question but at a sampling rate of 44.1kHz, or the way a normal "Red Book" music CD is encoded? (The criterion itself is sketched just below.)

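      The criterion the question is pointing at, with round numbers (~20 kHz as the usual upper bound of human hearing):

        # Nyquist-Shannon: a band-limited signal with maximum frequency f can be
        # exactly reconstructed from samples taken at a rate greater than 2*f.
        hearing_limit_hz = 20_000
        for fs in (44_100, 192_000):
            print(fs, "Hz is above", 2 * hearing_limit_hz, "Hz:", fs > 2 * hearing_limit_hz)  # True for both
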
  • > Ask yourself, why is it so hard to get a cryptographically secure random number?

    I mean, humans aren't exactly good at generating random numbers either.

    And of course, every Intel and AMD CPU these days has a hardware random number generator in it.

  • Computers can't have unique experiences. I think it's going to replace search, but becoming sentient? Not in my lifetime, granted I'm getting up there.

The thing is, AGI is not needed to enable incredible business/societal value, and there is good reason to believe that actual AGI would damage both our society, our economy, and if many experts in the field are to be believed, humanity's survival as well.

So I feel happy that models keep improving, and not worried at all that they're reaching an asymptote.

  • Really the only people for whom this is bad news is OpenAI and their investors. If there is no AGI race to win then OpenAI is just a wildly overvalued vendor of a hot commodity in a crowded market, not the best current shot at building a money printing machine.

I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought around the best way to build this.
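
For a sense of the scale in that number (just the arithmetic, not a comment on the design itself):

    schedules_per_day = 1_000_000
    print(schedules_per_day / 86_400)  # ~11.6 schedule executions per second, on average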

  • You just asked it to design or implement?

    If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?

      Why would I do that kind of research if it can identify the problem I am trying to solve and spit out the exact solution? Also, it was a rough implementation adapted to my exact tech stack.

    • yeah unless you have very specific requirements I think the baseline here is not building/designing it yourself but setting up an off-the-shelf commercial or OSS solution, which I doubt would take two weeks...

  • While impressive, I'm not convinced that improved performance on tasks of this nature is indicative of progress toward AGI. Building a scheduler is a well-studied problem space. Something like the ARC benchmark is much more indicative of progress toward true AGI, but probably still insufficient.

    • The point is that AGI is the wrong bar to be aiming for. LLMs are sufficiently useful in their current state that even if it does take us 30 years to get to AGI, with even just incremental improvements from now until then they'll still be useful enough to provide value to users/customers for some companies to win big. VC funding will run out and some companies won't make it, but some of them will, to the delight of their investors. "AGI when?" is an interesting question, but might just be academic. We have self-driving cars, weight-loss drugs that work, reusable rockets, and useful computer AI. We're living in the future, man, and robot maids are just around the corner.

    • the other models failed at this miserably. There were also specific technical requirements I gave it related to my tech stack

  • “It does something well” ≠ “it will become AGI”.

    Your anecdotal example isn't more convincing than "This machine cracked Enigma's messages in less time than an army of cryptanalysts over a month, surely we're gonna reach AGI by the end of the decade" would have been.

  • I find now I quickly bucket people into "have not/have barely used the latest AI models" or "trolls" when they express a belief that current LLMs aren't intelligent.

    • You can put me in that bucket then. It's not true: I've been working with AI almost daily for 18 months, and I KNOW it's nowhere close to being intelligent, but it doesn't look like your buckets are based on truth so much as appeal. I disagree with your assessment, so you think I don't know what I'm talking about. I hope you can understand that other people who know just as much as you (or even more) can disagree without being wrong or uninformed. LLMs are amazing, but they're nowhere close to intelligent.

  • I’ve had similar things over the last couple days with o3. It was one-shotting whole features into my Rust codebase. Very impressive.

    I remember before ChatGPT, smart people would come on podcasts and say we were 100 or 300 years away from AGI.

    Then we saw GPT shock them. The reality is these people have no idea, it’s just catchy to talk this way.

    With the amount of money going into the problem and the linear increases we see over time, it’s much more likely we see AGI sooner than later.

Can someone throw some light on this Dwarkesh character? He landed a Zucc podcast pretty early on... how connected is he? Is he an industry plant?

  • He's awesome.

    I listened to Lex Fridman for a long time, and there were a lot of critiques of him (Lex) as an interviewer, but since the guests were amazing, I never really cared.

    But after listening to Dwarkesh, my eyes are opened (or maybe my soul). It doesn't matter that I haven't heard of many of his guests, because he knows exactly the right questions to ask. He seems to have genuine curiosity for what the guest is saying, and will push back if something doesn't make sense to him. Very much recommend.

  • He is one of the most prepared podcasters I’ve ever come across. He puts all other mainstream podcasts to deep shame.

    He spends weeks reading everything by his guests prior to the interview, asks excellent questions, pushes back, etc.

    He certainly has blind spots and biases, just like anyone else. For example, he is very AI scale-pilled. However, he will have on people like today's guests, who contradict his biases. This is something a host like Lex apparently could never do.

    Dwarkesh is up there with Sean Carroll's podcast as the most interesting and most intellectually honest in my view.

Most people talking about AI and economic growth have a vested interest in claiming it will increase economic growth, but they don't mention that under the world's current economic system, most if not all of that growth will go to the top 0.0001% of the population.

And in 30 years it will be another 30 years away.

LLMs are so incredibly useful and powerful but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All that these AI companies see are the $$$. When the biggest "AI Research Labs" like OpenAI shifted to product-izing their LLM offerings I think the writing was on the wall that they don't actually care about finding AGI.

  • Got it. So this is now a competition between...

    1. Fusion power plants
    2. AGI
    3. Quantum computers
    4. Commercially viable cultured meat

    May the best "imminent" fantasy tech win!

  • People will keep improving LLMs, and by the time they are AGI (less than 30 years), you will say, "Well, these are no longer LLMs."

    • Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.

      LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science. What is "intelligence?"

      If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself - then I guess we can call that intelligence.

    • What the hell is general intelligence anyway? People seem to think it means human-like intelligence, but I can't imagine we have any good reason to believe that our kinds of intelligence constitute all possible kinds of intelligence--which, from the words, must be what "general" intelligence means.

      It seems like even if it's possible to achieve GI, artificial or otherwise, you'd never be able to know for sure that that's what you've done. It's not exactly "useful benchmark" material.

    • Looking back at the CUDA, deep learning, and now LLM hypes, I would bet it'll be cycles of giant groundbreaking leaps followed by complete stagnations, rather than LLMs improving 3% per year for the coming 30 years.

    • They'll get cheaper and less hardware-demanding, but the quality improvements get smaller and smaller, sometimes hardly noticeable outside benchmarks.

    • What was the point of this comment? It's confrontational and doesn't add anything to the conversation. If you disagree, you could have just said that, or not commented at all.

Doesn't even matter. The capabilities of the AI that's out NOW will take a decade or more to digest.

  • I feel like it's already been pretty well digested and excreted for the most part, now we're into the re-ingestion phase until the bubble bursts.

    • I am a tech founder who spends most of my day in my own startup deploying LLM-based tools into my own operations, and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today.

    • Not even close. Software can now understand human language... this is going to mean computers can be in a lot more places than they ever could before. Furthermore, software can now understand the content of images... eventually this will have a wild impact on nearly everything.

    • To push this metaphor, I'm very curious to see what happens as new organic training material becomes increasingly rare, and AI is fed nothing but its own excrement. What happens as hallucinations become actual training data? Will Google start citing sources for their AI overviews that were in turn AI-generated? Is this already happening?

      I figure this problem is why the billionaires are chasing social media dominance, but even on social media I don't know how they'll differentiate organic content from AI content.

    • maybe silicon valley and the world move at basically different rates

      idk AI is just a speck outside of the HN and SV info-bubbles

      still early to mass adoption like the smartphone or the internet, mostly nerds playing w it

  • Agreed. A hot take I have is that I think AI is over-hyped in its long-term capabilities, but under-hyped in its short-term ones. We're at the point, today or in the next twelve months, where all the frontier labs could stop investing any money into research and they'd still see revenue growth via usage of what they've built, and humanity would still be significantly more productive every year, year over year, for quite a while because of it.

    The real driver of productivity growth from AI systems over the next few years isn't going to be model advancements; it'll be the more traditional software engineering, electrical engineering, robotics, etc systems that get built around the models. Phrased another way: If you're an AI researcher thinking you're safe but the software engineers are going to lose their jobs, I'd bet every dollar on reality being the reverse of that.

Apparently Dwarkesh's podcast is a big hit in SV -- it was covered by the Economist just recently. I thought the "All In" podcast was the voice of tech, but their content has been going political with MAGA lately and their episodes are basically shouting matches with their guests.

And for folks who want to read rather than listen to a podcast, why not create an article (they are using Gemini) rather than just posting the whole transcript? Who is going to read a 60 min long transcript?

I'll take the "under" on 30 years. Demis Hassabis (who has more credibility than whoever these 3 people are combined) says 5-10 years: https://time.com/7277608/demis-hassabis-interview-time100-20...

  • That's in line with Ray Kurzweil sticking to his long-held predictions: 2029 for AGI and 2045 for the singularity.

    • A lot of Kurzweil's predictions are nowhere close to coming true though.

      For example, he thought by 2019 we'd have millions of nanorobots in our blood, fighting disease and improving cognition. As near as I can tell we are not tangibly closer to that than we were when he wrote about it 25 years ago. By 2030, he expected humans to be immortal.

You can’t put a date on AGI until the required technology is invented and that hasn’t happened yet.

This "AGI" definition is extremely loose depending on who you talk to. Ask "what does AGI mean to you" and sometimes the answer is:

1. Millions of layoffs across industries due to AI with some form of questionable UBI (not sure if this works)

2. 100BN in profits. (Microsoft / OpenAI definition)

3. Abundance in slopware. (VC's definition)

4. Raise more money to reach AGI / ASI.

5. Any job that a human can do which is economically significant.

6. Safe AI (Researchers definition).

7. All the above that AI could possibly do better.

I am sure there must be an industry-aligned, concrete definition that everyone can agree on, rather than these goalpost-moving definitions.

30 years away seems rather unlikely to me, if you define AGI as being able to do the stuff humans do. I mean, as Dwarkesh says:

>We’ve gone from Chat GPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college.

Also, we've recently reached the point where relatively reasonable hardware can do as much compute as the human brain, so we just need some algorithms.
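
A back-of-envelope version of that claim; both figures below are rough assumptions (published estimates of brain compute span several orders of magnitude, and GPU throughput depends heavily on precision):

    brain_flops_estimate = 1e15  # a commonly cited mid-range estimate, ~10^15 FLOP/s
    gpu_flops = 1e15             # roughly the dense low-precision throughput of a current datacenter GPU
    print(gpu_flops / brain_flops_estimate)  # ~1x: comparable, under these assumptions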

1. LLM interactions can feel real. Projection and psychological mirroring are very real.

2. I believe that AI researchers will require some level of embodiment to demonstrate:

a. ability to understand the physical world.

b. make changes to the physical world.

c. predict the outcome to changes in the physical world.

d. learn from the success or failure of those predictions and update their internal model of the external world.

---

I cannot quickly find proposed tests in this discussion.

Huh, so it should be ready around the same time as practical fusion reactors then. I'll warm up the car.

Fusion power will arrive first. And, it will be needed to power the Cambrian explosion of datacenters just for weak AI.

I could be wrong, but AGI may be a cold-fusion or flying-car boondoggle: chasing a dream that no one needs, that costs too much, or that is best left unrealized.

One thing in the podcast I found really interesting from a personal pov was:

> I remember talking to a very senior person who’s now at Anthropic, in 2017. And then he told various people that they shouldn’t do a PhD because by the time they completed it everyone will be automated.

Don’t tell young people things like this. Predicting the future is hard, and it is the height of hubris to think otherwise.

I remember as a teen, I had thought I was supposed to be a pilot all my life. I was ready to enroll in a school with a two-year program.

However, I was also into computers. One person who I looked up to in that world said to me “don't be a pilot, it will all be automated soon and you will just be bus drivers, at best.” This entirely took the wind out of my piloting sails.

This was in the early 90’s, and 30 years later, it is still wrong.

LLMs are basically a library that can talk.

That’s not artificial intelligence.

  • There’s increasing evidence that LLMs are more than that. Especially work by Anthropic has been showing how to trace the internal logic of an LLM as it answers a question. They can in fact reason over facts contained in the model, not just repeat already seen information.

    A simple example is how LLMs do math. They are not calculators and have not memorized every sum in existence. Instead they deploy a whole set of mental math techniques that were discovered at training time. For example, Claude uses a special trick for adding 2 digit numbers ending in 6 and 9.

    Many more examples are in this recent research report, including evidence of future planning while writing rhyming poetry.

    https://www.anthropic.com/research/tracing-thoughts-language...

    • I don’t think that is the core of this paper. If anything the paper shows that LLMs have no internal reasoning for math at all. The example they demonstrate is that it triggers the same tokens in randomly unrelated numbers. They kind of just “vibe” there way to a solution

    • > sometimes this "chain of thought" ends up being misleading; Claude sometimes makes up plausible-sounding steps to get where it wants to go. From a reliability perspective, the problem is that Claude’s "faked" reasoning can be very convincing.

      If you ask the LLM to explain how it got the answer the response it gives you won't necessarily be the steps it used to figure out the answer.

    • Oy vey not this paper again.

      "Our methods study the model indirectly using a more interpretable “replacement model,” which incompletely and imperfectly captures the original."

      "(...) we build a replacement model that approximately reproduces the activations of the original model using more interpretable components. Our replacement model is based on a cross-layer transcoder (CLT) architecture (...)"

      https://transformer-circuits.pub/2025/attribution-graphs/bio...

      "Remarkably, we can substitute our learned CLT features for the model's MLPs while matching the underlying model's outputs in ~50% of cases."

      "Our cross-layer transcoder is trained to mimic the activations of the underlying model at each layer. However, even when it accurately reconstructs the model’s activations, there is no guarantee that it does so via the same mechanisms."

      https://transformer-circuits.pub/2025/attribution-graphs/met...

      These two papers were designed to be used as the sort of argument that you're making. You point to a blog post that glosses over it. You have to click through "Read the paper" to find a ~100 page paper, referencing another ~100 page paper, to find any of these caveats. The blog post you linked doesn't even feature the words "replacement (model)" or any discussion of the reliability of this approach.

      Yet it is happy to make bold claims such as "we look inside Claude 3.5 Haiku, performing deep studies of simple tasks representative of ten crucial model behaviors" which is simply not true.

      Sure, they added to the blog post: "the mechanisms we do see may have some artifacts based on our tools which don't reflect what is going on in the underlying model" but that seems like a lot of indirection when the fact is that all observations commented in the papers and the blog posts are about nothing but such artifacts.

  • Grammar engines. Or value matrix engines.

    Every time I try to work with them I lose more time than I gain. Net loss every time. Immensely frustrating. If I focus it on a small subtask I can gain some time (rough draft of a test). Anything more advanced and it's a monumental waste of time.

    They are not even good librarians. They fail miserably at cross referencing and contextualizing without constant leading.

    • I've only really been experimenting with them for a few days, but I'm kind of torn on it. On the one hand, I can see a lot of things it could be useful for, like indexing all the cluttered files I've saved over the years and looking things up for me faster than I could find|grep. Heck, yesterday I asked one a relationship question, and it gave me pretty good advice. Nothing I couldn't have gotten out of a thousand books and magazines, but it was a lot faster and more focused than doing that.

      On the other hand, the prompt/answer interface really limits what you can do with it. I can't just say, like I could with a human assistant, "Here's my calendar. Send me a summary of my appointments each morning, and when I tell you about a new one, record it in here." I can script something like that, and even have the LLM help me write the scripts, but since I can already write scripts, that's only a speed-up at best, not anything revolutionary.

      I asked Grok what benefit there would be in having a script fetch the weather forecast data, pass it to Grok in a prompt, and then send the output to my phone. The answer was basically, "So I can say it nicer and remind you to take an umbrella if it sounds rainy." Again, that's kind of neat, but not a big deal.

      Maybe I just need to experiment more to see a big advance I can make with it, but right now it's still at the "cool toy" stage.

    • I feel the opposite.

      LLMs are unbelievably useful for me - never have I had a tool more powerful to assist my brain's work. I use LLMs for work and play constantly, every day.

      It pretends to sound like a person and can mimic speech and write and is all around perhaps the greatest wonder created by humanity.

      It’s still not artificial intelligence though, it’s a talking library.

  • We invented a calculator for language-like things, which is cool, but it’s got a lot of people really mixed up.

    The hype men trying to make a buck off them aren’t helping, of course.

You cannot have AGI without a physical manifestation that can generate its own training data based on inputs from the outside world, e.g. with sensors, and constantly refine its model.

Pure language or pure image-models are just one aspect of intelligence - just very refined pattern recognition.

You will also probably need some aspect of self-awareness in order for the system to set auxiliary goals and directives related to self-maintenance.

But you don't need AGI in order to have something useful (which I think a lot of readers are confused about). No one is making the argument that you need AGI to bring tons of value.

The Anthropic's research on how LLMs reason shows that LLMs are quite flawed.

I wonder if we can use an LLM to deeply analyze and fix the flaws.

Explosive growth? Interesting. But at some point, human civilization hits a saturation point. There’s only so much people can eat, wear, drive, stream, or hoard. Extending that logic, there’s a natural ceiling to demand - one that even AGI can’t code its way out of.

Sure, you might double the world economy for a decade, but then what? We’ll run out of people to sell things to. And that’s when things get weird.

To sustain growth, we’d have to start manufacturing demand itself - perhaps by turning autonomous robots into wage-earning members of society. They’d buy goods, subscribe to services, maybe even pay taxes. In effect, they become synthetic consumers fueling a post-human economy.

I call this post-human consumerism. It's the point where the synthesis of demand hits the next gear, if we keep moving in this direction.

Would we even recognise it if it arrived? We'd recognise human-level intelligence, probably, but that's specialised. What would general intelligence even look like?

  • If/when we will have AGI, we will likely have something fundamentally superhuman very soon after, and that will be very recognizable.

    This is the idea of "hard takeoff" -- because of the way we can scale computation, there will only ever be a very short window when the AI is roughly human-level. Even if there are no fundamental breakthroughs, at the very least silicon can be run much faster than meat, and instead of compensating for narrower width with execution speed like current AI systems do (no AI datacenter is even close to the width of a human brain), you could just spend the money to make your AI system 2x wider and run it at 2x the speed. What would a good engineer (or, a good team of engineers) be able to accomplish if they could have 10 times the workdays in a week that everyone else has?

    This is often conflated with the idea that AGI is very imminent. I don't think we are particularly close to that yet. But I do think that if we ever get there, things will get very weird very quickly.

    • Would AGI be recognisable to us? When a human pushes over an anthill, what do the ants think happened? Do they even know the anthill is gone? Did they have a concept of the anthill as a huge edifice, or did they only know earth to squeeze through and some biological instinct?

      If general intelligence arrived and did whatever general intelligence would do, would we even see it? Or would there just be things that happened that we just can't comprehend?

    • But that's not ten times the workdays. That's just taking a bunch of speed and sitting by yourself worrying about something. Results may be eccentric.

      Though I don't know what you mean by "width of a human brain".

  • Mustafa Suleyman says AGI is when a (single) machine can perform every cognitive task better than the best humans. That is significantly different from OpenAI's definition (...when we make enough $$$$$, it's AGI).

    Suleyman's book "The Coming Wave" talks about Artificial Capable Intelligence (ACI) - between today's LLMs (== "AI" now) and AGI. AI systems capable of handling a lot of complex tasks across various domains, yet not being fully general. Suleyman argues that ACI is here (2025) and will have huge implications for society. These systems could manage businesses, generate digital content, and even operate core government services -- as is happening on a small scale today.

    He also opines that these ACIs give us plenty of frontier to be mined for amazing solutions. I agree, what we have already has not been tapped-out.

    His definition, to me, is early ASI. If a program is better than the best humans, then we ask it how to improve itself. That's what ASI is.

    The clearest thinker alive today on how to get to AGI is, I think, Yann LeCun. He said, paraphrasing: If you want to build an AGI, do NOT work on LLMs!

    Good advice; and go (re-?) read Minsky's "Society of Mind".

  • We sort of are able to recognize Nobel-worthy breakthroughs

    One of the many definitions I have for AGI is being able to create the proofs for the 2030, 2050, 2100, etc Nobel Prizes, today

    A sillier one I like is that AGI would output a correct proof that P ≠ NP on day 1

    • Isn't AGI just "general" intelligence, as in a like-a-regular-human, Turing-test kind of deal?

      Aren't you thinking about ASI/superintelligence, which is capable of way outdoing humans?

  • you'd be able to give them a novel problem and have them generalize from known concepts to solve it. here's an example:

    1 write a specification for a language in natural language

    2 write an example program

    can you feed 1 into a model and have it produce a compiler for 2 that works as reliably as a classically built one?

    I think that's a low bar that hasn't been approached yet. until then I don't see evidence of language models' ability to reason.

    • I'd accept that as a human kind of intelligence, but I'm really hoping that AGI would be a bit more general. That clever human thinking would be a subset of what it could do.

    • You could ask Gemini 2.5 to do that today and it's well within its capabilities, just as long as you also let it write and run unit tests, as a human developer would.

  • AI will face the same limitations we face: availability of information and the non deterministic nature of the world.

  • AGI isn't ASI; it's not supposed to be smarter than humans. The people who say AGI is far away are unscientific woo-mongers, because they never give a concrete, empirically measurable definition of AGI. The closest we have is Humanity's Last Exam, which LLMs are already well on the path to acing.

    • Consider this: being born/trained in 1900, if that were possible, and given a year to adapt to the world of 2025, how well would an LLM do on any test? Compare that to how a 15-year-old human in the same situation would do.

    • I'd expect it to be generalised, where we (and everything else we've ever met) are specialised. Our intelligence is shaped by our biology and our environment; the limitations on our thinking are themselves concepts the best of us can barely glimpse. Some kind of intelligence that inherently transcends its substrate.

      What that would look like, how it would think, the kind of mental considerations it would have, I do not know. I do suspect that declaring something that thinks like us would have "general intelligence" to be a symptom of our limited thinking.

Is it just me, or is the signal-to-noise ratio needle-in-a-haystack level for all these cheerleader tech podcasts? In general, I really miss the podcast scene from 10 years ago: less polished, but more human and with reasonable content. Not this speculative blabber that seems designed to generate clickbait clips. I don't know what happened a few years ago, but even solid podcasts are practically garbage now.

I used to listen to podcasts daily for at least an hour. Now I'm stuck with uploading blogs and pdfs to Eleven Reader. I tried the Google thing to make a podcast but it's very repetitive and dumb.

”‘AGI is x years away’ is a proposition that is both true and false at the same time. Like all such propositions, it is therefore meaningless.”

AGI is here today... go have a kid.

  • Not artificial, but yes, it's unclear what advantage an artificial person has over a natural one, or how it's supposed to gain special insights into fusion reactor design and etc. even if it can think very fast.

I do not like those who try to play God. The future of humanity will not be determined by some tech giant in their ivory tower, no matter how high it may be. This is a battle that goes deeper than ones and zeros. It's a battle for the soul of our society. It's a battle we must win, or face the consequences of a future we cannot even imagine... and that, I fear, is truly terrifying.

  • > The future of humanity will not be determined by some tech giant in their ivory tower

    Really? Because it kinda seems like it already has been. Jony Ive designed the most iconic smartphone in the world from a position beyond reproach even when he messed up (eg. Bendgate). Google decides what your future is algorithmically, basically eschewing determinism to sell an ad or recommend a viral video. Instagram, Facebook and TikTok all have disproportionate influence over how ordinary people live their lives.

    From where I'm standing, the future of humanity has already been cast by tech giants. The notion of AI taking control is almost a relief considering how illogical and obstinate human leadership can be.

"Literally who" and "literally who" put out statements while others out there ship out products.

Many such cases.
