Comment by PurpleRamen
1 day ago
They redefined AGI to be an economical thing, so they can continue making up their stories. All that talk is really just business, no real science in the room there.
It's not a great definition but it's also not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy it has to be well rounded in a way it still isn't today, meaning: reliability, planning, long term memory, physical world manipulation etc. A system that can do all of that well enough so it can do the jobs of doctors, programmers and plumbers is generally intelligent in my view.
> It's not a great definition but it's also not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy
That's not the definition they have been using. The definition was "$100B in profits". That's less than the net income of Microsoft. It would be an interesting milestone, but certainly not "most of the jobs in an economy".
Yeah, I think this is more coherent than people realize. Economically relevant knowledge work consists of things that humans find cognitively demanding. Otherwise it wouldn't be valued in the first place.
It ties the definition to economic value, which I think is the best definition that we can conjure given that AGI is otherwise highly subjective. Economically relevant work is dictated by markets, which I think is the best proxy we have for something so ambiguous.
It's maybe somewhat nice conceptually, and certainly a useful addition - but the $100 billion profit figure mentioned elsewhere is not the right metric.
And I think coming up with the right metric is just as subjective in this field as the technological question.
> Economically relevant knowledge work is things that humans find cognitively demanding. Otherwise they wouldn't be valued in the first place.
Deep scientific discoveries are also cognitively demanding, but are not really valued (see the precarious work environment in academia).
Another point: a lot of work is valued in the first place because it centers on being submissive/docile with regard to bullshit (see the phenomenon of bullshit jobs). You may know better, but you have to keep your mouth shut.
Was there a better way than setting an arbitrary $100b threshold?
e.g. average cost to complete a set of representative tasks
> They redefined AGI to be an economical thing
Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.
Around the end of 2024, it was reported that OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
> OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit
Wow. Maybe they spelled it out as aggregate gross income :P.
Yeah, seems like this was stage setting for them to exit. They were already trying to break the deal then. So I feel like the lawyers will find a way to bend whatever they need to in order to get out of the deal.
Companies that have created "AGI":
Apple, Alphabet, Amazon, NVIDIA, Samsung, Intel, Cisco, Pfizer, UnitedHealth, Procter & Gamble, Berkshire Hathaway, China Construction Bank, Wells Fargo, ...
So no human on Earth is intelligent by that metric.
It’s a system that generates $100 billion in profit. [0]
[0] https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
Are there inflation markers included?
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
From: https://openai.com/charter/
All humanity will benefit, but some humanity will benefit more than others.
Marketing
AGI is when the capitalists are not forced to share their profits with the intelligentsia.
Translation: IPO.
Here's the sauce you requested: [0]
"OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits."
Given that the definition of AGI is beyond meaningless, it is clear that the "I" in AGI stands for IPO.
[0] https://finance.yahoo.com/news/microsoft-openai-financial-de...
Please reveal the “scientific” definition of AGI.
When we are having serious conversations about AI rights, and shutting off a model plus its harness is as impactful as a death sentence. (I'm extremely skeptical that, given the scale of compute/investment needed to produce the models we have, good as they are, our current LLM architecture gets us there, if there even is somewhere we want to go.)
It makes sense though. Humans are valuable to the economy based on their ability to perform useful work. If an AI system can perform work as well as or better than any human, then with respect to "anything any human has ever been willing to pay for", it is AGI.
I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.
It doesn’t though. AGI has far greater implications than doing the mundane work of today. Actual AGI would self-improve, and that in itself would change literally every single aspect of human civilization; instead we are talking about replacing white-collar jobs.
An AGI that can do all that would also necessarily be able to do all white-collar work. That latter definition I'd consider a "soft threshold" that would be hit before recursive self-improvement, which I imagine would happen soon after.
The current estimate of the time between these two milestones is fairly small, bottlenecked most likely by compute constraints, risk aversion, and the need to implement safeguards. Metaculus puts it at about 32 months:
https://www.metaculus.com/questions/4123/time-between-weak-a...
Not to worry, humanoid, generally useful robots are only a few years away.