Comment by atleastoptimal

7 months ago

So what happens when they achieve AGI? Will a benevolent network of vastly smarter-than-human intelligences insist on maintaining the wealth hierarchies that humans had before AGI arrived? Isn't the point of AGI to remove scarcity?

I worry that those who became billionaires in the AI boom won't want the relative status of their wealth to become moot once AGI hits. Most likely this will come in the form of artificial barriers to using AI that, for ostensible safety reasons, make it prohibitively difficult for all but the wealthiest or most AGI-lab-adjacent social circles to use.

This will naturally exacerbate existing wealth disparities: if you have access to a smarter AI than everyone else, you can leverage your compute to gain a tactical advantage in any domain with a reward.

All we can hope for is a general benevolence and popular consensus that avert a runaway race to the bottom as a result of all this.

How can anyone still believe the AGI scam?

  • If you think the possibility of AGI within 7-10 years is a scam, then you aren't paying attention to the trends.

    • I wouldn't call 7-10 years a scam, but I would call it low odds. It is pretty hard to make accurate predictions over a 10-year window. But I definitely think the 2027 and 2030 predictions are a scam. The majority of researchers think it is further away than 10 years, if you look at surveys from the AI conferences rather than predictions in the news.

    • Even if we spent a million years on LLMs, it would not result in AGI; we are no closer to AGI with LLM technology than we were with toaster technology.

  • I can't believe this is so unpopular here. Maybe it's the tone, but come on, how do people rationally extrapolate from LLMs, or even large multimodal generative models, to "general intelligence"? Sure, they might do a better job than the average person on a range of tasks, but they're always prone to funny failures pretty much by design (train-vs-test distribution mismatch). They might combine data in interesting ways you hadn't thought of; that doesn't mean you can actually rely on them the way you rely on a truly intelligent human.

    • I think it’s selection bias: a Y Combinator forum is going to have a larger percentage of techno-utopianists than general society, and many people here are seeking financial success by connecting with a trend at the right moment. It seems obvious to me that LLMs are interesting but not revolutionary, and equally obvious that they aren’t heading for any kind of “general intelligence”. They’re good at pretending, and only good at that to the extent that they can mine what has already been expressed.

      I suppose some are genuine materialists who think that is ultimately all we are as humans: a reconstitution of what has come before. I think we’re much more complicated than that.

      LLMs are like the pool in the myth of Narcissus: they hypnotically reflect our own humanity back at us.