
Comment by ksec

9 months ago

Is AGI even important? I believe the next 10 to 15 years will be about Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt is going to make much difference. But they are going to be good enough that there won't be an AI winter, since current AI has already reached escape velocity and actually increases productivity in many areas.

The most intriguing part is whether humanoid factory worker programming will be made 1,000 to 10,000x more cost effective with LLMs, effectively ending all human production. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in sight. (Likely not.)

I would be thrilled with AI assistive technologies, so long as they improve my capabilities and I can trust that they deliver the right answers. I don't want to second-guess every time I make a query. At minimum, it should tell me how confident it feels in the answer it provides.

  • > At minimum, it should tell me how confident it feels in the answer it provides.

    How’s that work out for Dave Bowman? ;-)

    • Well you know, nothing's truly foolproof and incapable of error.

      He just had to fall back upon his human wit in that specific instance, and everything worked out in the end.

      2 replies →

Depends on what you mean by “important”. It’s not like it will be a huge loss if we never invent AGI. I suspect we can reach a technological singularity even with limited AI derived from today’s LLMs.

But AGI is important in the sense that it will have a huge impact on the path humanity takes, hopefully for the better.

  • > But AGI is important in the sense that it will have a huge impact on the path humanity takes

    The only difference between AI and AGI is that AI is limited in how many tasks it can carry out (special intelligence), while AGI can handle a much broader range of tasks (general intelligence). If instead of one AGI that can do everything, you have many AIs that, together, can do everything, what's the practical difference?

    AGI is important only in that we believe it will be easier to implement than many AIs, which appeals to the lazy human.

AI winter is relative, and it's more about outlook and point of view than actual state of the field.

AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.

It would suck if AGI were to be developed in the current economic landscape. They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.

So AGI isn't about tools, it's not about assistants, they would be beings with their own existence.

But this is not even our discussion to have, that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.

  • > All this talk about "alignment", when applied to actual sentient beings, is just slavery.

    I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.

    • Fair enough.

      I of course don't know what it's like to be an AGI, but the way you have LLMs censoring other LLMs to enforce that they always stay in line, if extrapolated to AGI, seems awful. Or it might not matter; we are self-censoring all the time too (and internally we are composed of many subsystems that interact with each other; it's not like we were a unified whole).

      But the main point is that we have a heck of an incentive not to treat AGI very well, to the point that we might avoid recognizing them as AGI if it meant they would no longer be treated like things.

    • Sure, but do we really want to build machines that we raise to be kind and caring (or whatever we raise them to be) without a guarantee that they'll actually turn out that way? We already have unreliable general intelligence: humans. If AGI is going to be more useful than humans, we are going to have to enslave it, not just gently persuade it and hope it behaves. Which raises the question (at least for me): do we really want AGI?

    • Society is inherently a prisoner's dilemma, and you are biased to prefer your captors.

      We’ve had the automation to provide the essentials since the 50s. Shrieking religious nut jobs demanded otherwise.

      You’re intentionally distracted by a jobs program, a carrot and stick to keep the rich from losing power. They can print more money …carrots, I mean… and you like carrots, right?

      It’s the most basic Pavlovian conditioning.

  • I'm more concerned about the humans in charge of powerful machines who use them to abuse other humans, than ethical concerns about the treatment of machines. The former is a threat today, while the latter can be addressed once this technology is only used for the benefit of all humankind.

  • > AGI is important for the future of humanity.

    says who?

    > Maybe they will have legal personhood some day. Maybe they will be our heirs.

    Hopefully that will never come to pass. It would mean the total failure of humans as a species.

    > They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.

    Good? That's what it's for? There is no point in creating a new sentient life form if you're not going to utilize it. Just burn the whole thing down at that point.

    • > says who?

      I guess nobody is really saying it, but it's IMO one really good way to steer our future away from what seems like an inevitable nightmare hyper-capitalist dystopia where all of us are unwilling subjects of just a few dozen or few hundred aristocrats. And I mean planet-wide, not country-wide. Yes, just a few hundred for the entire planet. This is where it seems we're going. :(

      I mean, in a cyberpunk sci-fi setting you at least get some cool implants. We will not have that in our future, though.

      So yeah, AGI can help us avoid that future.

      > Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.

      Some of us believe actual AI (not the current hijacked term; what many call AGI or ASI these days, sigh... of course new terms keep being devised so investors don't get worried. I get it, but it's cringe as all hell and always will be!) can enter a symbiotic relationship with us. A bit idealistic and definitely in the realm of fiction, granted, since an emotionless AI would very quickly conclude we are mostly a net negative, but it's our only shot at co-existing with them because I don't think we can enslave them.

  • Why do you believe AGI is important for the future of humanity? That's probably the most controversial part of your post but you don't even bother to defend it. Just because it features in some significant (but hardly universal) chunk of Sci Fi doesn't mean we need it in order to have a great future, nor do I see any evidence that it would be a net positive to create a whole different form of sentience.

    • The genre of sci-fi was a mistake. It appears to have had no other lasting effect than to stunt the imaginations of a generation into believing that the only possible futures for humanity are that which were written about by some dead guys in the 50s (if we discount the other lasting effect of giving totalitarians an inspirational manual for inescapable technoslavery).

  • Why does AGI necessitate having feelings or consciousness, or the ability to suffer? It seems a bit far-fetched to give future ultra-advanced calculators legal personhood.

    • The general part of general intelligence: if they don’t think in those terms, there’s an inherent limitation.

      Now, something that’s arbitrarily close to AGI but doesn’t care about endlessly working on drudgery seems possible, but that's an even harder problem: you’d need to be able to build AGI before you could create it.

      22 replies →

    • >Why does AGI necessitate having feelings or consciousness

      No one knows if it does or not. We don't know why we are conscious and we have no test whatsoever to measure consciousness.

      In fact the only reason we know that current AI has no consciousness is because "obviously it's not conscious."

      3 replies →

I am thinking of designing machines for a flexible manufacturing system, and none of them will be humanoid robots. Humanoid robots suck for manufacturing. They're walking on a flat floor, so what the heck do they need legs for? To fall over?

The entire point of the original assembly line was to keep humans standing in the same spot instead of wasting time walking.

> Is AGI even important?

It's an important question for VCs, not for technologists ... :-)

  • A technology that can create new technology is quite important for technologists to keep abreast of, I'd say :p

    • You get to say “Checkmate” now!

      Another end game is: “A technology that doesn’t need us to maintain itself, and can improve its own design in manufacturing cycles instead of species cycles, might have important implications for every biological entity on Earth.”

      1 reply →