Comment by tombert
4 days ago
The name of this CPU is bordering on securities fraud. When people see the term "AGI" now, they are assuming "Artificial General Intelligence", not "Agentic AI Infrastructure".
Of course people don't realize that, and people will buy ARM stock thinking they've cracked AGI. The people running Arm absolutely know this, so this name is what we in the industry call a "lie".
Considering AGI has been degraded into a generic feelgood marketing word, I can't wait to get my AGI-scented deodorant.
Long Blockchain Corp. remembers [1].
[1] https://en.wikipedia.org/wiki/Long_Blockchain_Corp.
You can already drink AGI! Oh sorry, AG1. The resemblance must be a complete coincidence.
Oh, is that what they're implementing in schools? No, wait, that was A1, probably the sauce.
> The resemblance must be a complete coincidence.
I don't know why so many people are willing to descend into flippant, lazy conspiracy instead of doing a 7-second Google search before making a claim.
AG1 was started in 2010 by a police officer from New Zealand and AG stands for Athletic Greens.
There is a fair amount of controversy around the company's claims, so I suppose that is one symmetry between AG1 and AGI.
Pretty sure in that case AG stands for Athletic Greens.
I think the name change also came before the AI hype.
Artificial Gut Incense?
Buy it in combo with the good ol' Blockchain perfume!
You mean iced tea, right?
> I can't wait to get my AGI-scented deodorant.
Old spice for me, thanks!
Old Spice, that's OG!
The marketers did this for 5G also, calling their product 5G before it was actually deployed, only because theirs came after 4G and they wanted to ride the upcoming 5G buzz.
It seems marketing /depends/ on conflating terms and misleading consumers. Shakespeare might have gotten it wrong with his quip about lawyers.
https://www.pbs.org/newshour/economy/att-to-drop-misleading-...
There was soooo much intentional disinformation around 5G. Everyone who wanted to sell anything intentionally confused the >1Gbps millimeter wave line-of-sight kind of 5G with the "4G but with some changes to handle more devices connected to one tower" kind of 5G. I wonder how many people bought a "5G phone" expecting millimeter wave but only got the slightly improved 4G.
This is mostly the standard’s fault, right? Putting more conventional wavelengths and the mm stuff together in one standard was… a choice.
Wait til you search the term “6g”.
Bill Hicks had some thoughts, too:
https://youtube.com/watch?v=GaD8y-CGhMw
It’s been a long long time since I’ve heard that name come up in conversation.
Thanks for the trip down memory lane.
Yes, my wireless router has "5G WiFi" but only does 4G. I didn't have a choice about using it since it comes from the provider, but still stupid.
5G and 4G are not terms applied to WiFi. We have 802.11a/b/g/n/ac/ax and WiFi 6/7.
WiFi operates in the 2.4, 5, and 6 GHz bands, but those frequency bands are not used to differentiate WiFi standards, because you can mix and match WiFi 6/7 across all three bands.
There are also more WiFi bands below 2.4 and above 6GHz, but they're not common worldwide.
What is 5G WiFi? Do you mean 5GHz WiFi?
If rich people are this stupid then they deserve to be parted with their cash.
If you invest money so mindlessly that you don’t even check what you buy, then no legislation in the world will manage to protect you from your own mind.
It’s not just rich people though. Most people (at least in the US) have their retirements and the like in things like 401ks, tied to some kind of index like the S&P 500. A company doing bullshit to manipulate the stock affects pretty much anyone who uses an index fund or ETF, which is pretty much everyone in the US.
You invest in index funds and ETFs so your money averages out and you don’t get impacted by a single company’s stupidity.
AGI is a poorly-defined concept anyway. It’s just vibes, nothing descriptive.
AGI is the automation of self-regulation of language
source: 100% personal certainty
This smells like the beginning of enshittification at ARM. I'm not saying AMD or Intel are a whole lot better, but the move to compete with licensees of ARM tech and to cheekily use AGI in the name is not going to ensure confidence in the short or long term.
ARM have given their licensees enough time, even with special terms and discounts, just hoping they could crack the server market. But six years in, and looking at the next 3-4 years, they are nowhere near competing with x86 on server other than at the hyperscalers. I don't see how entering this market now is enshittification. If anything I wish they had entered the market two years earlier.
That's a really good point - thanks for chiming in
This sort of thing really bugs me! Marketing departments appropriate an existing term and use it in some new, often deceptive way. This goes all the way back to when IBM released “The IBM Personal Computer”, at a time when “personal computer” was a category name. Then Microsoft released Windows, when “windows” was a generic term for windowing systems. Intel did it with their “core” architecture. The list goes on.
(Disclosure: I am a casual investor in ARM.)
> Of course people don't realize that, and people will buy ARM stock thinking they've cracked AGI.
Doesn't seem like a very credible assertion. Picking stocks in this way would remove you from the market pretty quickly.
Didn't random companies add block chain to their names only just a few years ago and get 30+% jumps in stock price immediately?
That’s quite different: blockchain was a buzzword label for existing tech. AGI is a label for something we famously haven’t achieved, and which would be revolutionary if we had.
This seems more like calling your spaceship company, I dunno, “Interplanetary Passengers” or something.
> Just because the stock goes up doesn't mean anyone was tricked. People invest in sentiment, in momentum, in all kinds of second order effects.
Yes, that's how fraud works a lot of the time. It removes you from the market but not until after it's removed your money. And there's an endless supply of new people ready to make the same mistake after you've learned your lesson.
I didn't say it would be a wise decision to pick stocks that way, but this has already happened: https://en.wikipedia.org/wiki/Long_Blockchain_Corp.
Does an iced tea company changing their name to Long Blockchain make any sense? No, not really, it's pretty stupid actually, but it managed to bump the stock by apparently 380%.
The stock market can be pretty dumb sometimes. Let's not forget the weird GME bubble.
idk, but when i see the abbreviation "AGI" i associate it with "ads generated income" ...
every other interpretation is hollowed out / reduced to "marketing speak" by now ;)
People ruined the word AGI before this, to be fair; we will need another word to describe "real AGI" when it comes.
Marketing is marketing; nothing about it was ever about being factual when there is a total addressable market to go after and dollars to be made! This is in line with much of the other marketing that exists in the AI space as it stands now, not to mention the use of "AGI" within the space currently.
Sure, but there are plenty of cases where a deceptive name has been considered enough to at least warrant an investigation: https://en.wikipedia.org/wiki/Long_Blockchain_Corp.
I'm not saying anything is going to happen, ARM holdings has a lot more money and lawyers than Long Blockchain did, but I'm just saying that it's not weird to think that a deceptive name could be considered false advertising.
That would not hold up considering that they consistently use 'agentic' in their press release and make no mention of 'artificial general intelligence'. Just because two things have the same acronym does not mean that they stand for the same thing. Marketing being cheeky is not a crime.
An unappreciated aspect of Arm is they really were the Robin Saxby show. https://en.wikipedia.org/wiki/Robin_Saxby Whichever ISA had him selling it was going to win.
While AArch64 represents the technical revolution they needed, their business compass has been gone ever since he stepped down. This grimy stuff, and as others noted competing with your own customers, were no-gos in the earlier era.
It's HD and AI and 5G and all that.
Can you imagine being an engineer and working hard to create something new and cool and some jackass in marketing slaps the name “AGI CPU” on it?
Do you think that we should live in a world where investors who buy on a comical misinterpretation of an acronym are protected from their naivety?
Why isn't there a minority shareholder lawsuit in the news because someone bought MSFT not realizing that Copilot isn't actually certified to fly an airliner? A certain type of person would likely just buy MSFT on massive leverage and then, if the bet fails to work out, sue while pretending they did not understand.
You're being purposefully obtuse.
People have been hearing for the last three years about how a specific acronym, "AGI", is the final frontier of artificial intelligence and how it's going to change the entire economy around it. They've been hearing about this quasi-theoretical, very specific thing, and a lot of them don't even know what the "G" stands for.
People haven't been hearing for years about a mythical "copilot", and as such I think people are much more likely to think it's not anything more than a cute nickname.
Are you suggesting that this is just a coincidence? The acronym AGI doesn't even make sense for Agentic AI Infrastructure, which should be AAII; they're clearly calling it AGI to mislead people. I refuse to think that the people running Arm are so stupid that they didn't even Google the acronym before releasing the chip.
You think it's a "comical misinterpretation", but I don't think it is. When I saw the article, I thought "shit; did they manage to crack AGI?", and I clicked the article and was disappointed. I suspect a lot of people aren't even going to read the press release.
If people can't do the most basic due diligence, as in reading up on the stuff they invest in using Wikipedia or a search engine, best of luck to them.
On the contrary, I love that companies are semantically overloading this stupid concept (purposefully or not) which is 100% hype marketing.
I don't understand why this label is still a thing in the current discourse, and I hope such moves will finally help people and the industry move on.
People buying these kinds of chips will know. AGI is barely a popular concept. Nobody in my family knows what it means.
I mean, we can all meme on investors, but I don't think many people can submit a buy order whilst assuming they missed the AGI news headline because of a product name.
Those in the industry don't call it a lie, they call it "marketing".
It's those out of the industry who call them lies.
Touché. I guess I should have said "I call it a lie".
the whole AI space is rife with much worse examples of what could be considered securities fraud tbh
I'm "people" and AGI means nothing to me
In case you haven't noticed, this whole thing has been a grift since 2022. It's kind of amazing that nobody thought of making AGI processors before
It's just going the way of "Smartphone" and "Smart Car" they'll market it as such to get people riled up about it. Consumers will eat it up. I'm sure Scam Altman is ready to show us "AGI" next too. If ARM is making AGI's meaning shift to a CPU descriptor, anyone can call their tech "AGI" by just using their chips.
> The name of this CPU is bordering on securities fraud.
No. For it to be securities fraud, Arm would need to make a materially false statement of fact that misleads investors. Naming the CPU in this way doesn't clear the bar because:
a) the name is clearly a product brand, similar to how macOS Lion, Microsoft Windows, Ford Mustang, or Yves Saint Laurent Black Opium don't mean literally what they say
b) Arm explicitly defines it as silicon "designed to power the next generation of AI infrastructure", with the technical specs fully disclosed
c) sophisticated investors, the relevant standard for securities fraud, can read a spec sheet
d) Arm's EVP said "We think that the CPU is going to be fundamental to ultimately achieving AGI", framing it as a contribution towards AGI, not AGI itself
I was on board with A through C, but then with D it's either clearly a lie or stupidity. I guess it's not a lie technically if they believe it though, so the latter then. But I also don't want to assume someone in their position to be stupid, so then I'm back to the former.
So D undermines A - C in your mind? That doesn't make sense.
I thought they were adding support for AGI slots
If this headline led you to believe that ARM has somehow cracked AGI, you deserve to lose your money.
ARM has cracked Agentic AI infrastructure. What are you on about? AGI is a solved problem. The next generation models will have AGI capabilities.
I really hope this is satire. If not, please see a psychiatrist
Honestly: The people who buy stock because a product says "AGI" in the name deserve to lose their shirt.
And no, it's not "a lie", because only an utter idiot would consider a product name an actual fact. It's a name. The Hopper GPUs also didn't ship with a lifesize cutout of Grace Hopper.
No, it's actually a lie, and it's different than the Hopper GPU you mentioned.
People have been seeing every big AI company talk about how AGI is the holy grail of AI, and how they're all trying to reach it. Arm naming a chip AGI is clearly meant to make casual observers think they cracked AGI.
The Hopper GPU isn't the same, because Nvidia isn't actively trying to make people think that it includes a lifesize cutout of Grace Hopper. Not a dig on her, but most people don't know who Grace Hopper is, people haven't been hearing on the news for the last several years about how having a Grace Hopper is going to make every job irrelevant.
If you showed someone 5 years ago what our computers can do with the latest LLMs now, they would probably say it sure looks a lot like AGI.
We have to keep defining AGI upwards or nitpick it to show that we haven't achieved it.
I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.
We don't have clear ASI yet, but we definitely are in an AGI era.
I think we are missing an ego/motivations in the AGI, and it having self-sufficiency independent of us, but that is just a bit of engineering that would actually make them more dangerous; it isn't really a significant scientific hurdle.
Ok, but it's not AGI. People five years ago would have been wrong. People who don't have all the information are often wrong about things.
ETA:
You updated your comment, which is fine but I wanted to reply to your points.
> I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.
I would actually argue that they are decidedly not smarter than even dumb humans right now. They're useful, but they are glorified text predictors. Yes, they have more individual facts memorized than the average person, but that's not the same thing; Wikipedia, even before LLMs, also had many more facts than the average person, but you wouldn't say that Wikipedia is "smarter" than a human, because that doesn't make sense.
Intelligence isn't just about memorizing facts, it's about reasoning. The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.
> We don't have clear ASI yet, but we definitely are in a AGI-era.
Nah, not really.
> They're useful but they are glorified text predictors.
There is a long history of people arguing that intelligence is actually the ability to predict accurately.
https://www.explainablestartup.com/2017/06/why-prediction-is...
> Intelligence isn't just about memorizing facts, it's about reasoning.
Initially, LLMs were basically intuitive predictors, but with chain of thought and, more recently, agentic experimentation, we do have reasoning in our LLMs that is quite human-like.
That said, there is definitely a bias towards training-set material, but that is also the case with the large majority of humans.
For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance.
I am pretty confident that we are in the AGI era. It is unsettling, and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.
What does AGI look like in your opinion?
Personally, I've used LLMs to debug hard-to-track code issues and AWS issues among other things.
Regardless of whether that was done via next-token prediction or not, it definitely looked like AGI, or at least very close to it.
Is it infallible? Not by a long shot. I always have to double-check everything, but at least it gave me solid starting points to figure out said issues.
It would've taken me probably weeks to figure out without LLMs, instead of the 1 or 2 hours it did.
In that context, I have a hard time imagining what a "real" AGI system would look like that isn't the current one.
Not saying current LLMs are unequivocally AGI, but they are darn close for sure IMO.
> The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.
I’m really not sure how well a typical human would do writing brainfuck. It’d take me a long time to write some pretty basic things in a bunch of those languages and I’m a SE.
My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.
5 years ago we thought that language is the be-all and end-all of intelligence and treated it as the most impressive thing humans do. We were wrong. We now have these models that are very good at language, but still very bad at tasks that we wrongly considered prerequisites for language.
> My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.
Wait, could you make your qualifiers specific here? Is your definition of AGI that it be able to perform/learn any intellectual task that is achievable by every human, or by any human?
Those are almost incomparably different standards. For the first, a nascent AGI would only need to perform a bit better than a "profound intellectual disability" level. For the second, AGI would need to be a real "Renaissance AGI," capable of advancing the frontiers of thought in every discipline, but at the same time every human would likely fail that bar.
> If you showed someone what our computers can do with the latest LLMs now to someone 5 years ago they would probably say it sure looks a lot like AGI.
Would they? Perhaps if you only showed them glossy demos that obscure all the ways in which LLMs fail catastrophically and are very obviously nowhere even close to AGI.
Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.
> Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.
To be fair, I am pretty sure Claude Code will download and run Stockfish if you task it to play chess with you. It's not like a human who read 100 books about chess but never played would be able to play well with their eyes closed while someone whispers board positions into their ear.
It doesn't look anything like AGI and no one who knows what that means would be confused in any era.
Is it useful? Yes. Is it as smart as a person? Not even remotely. It can't even remember things it was already told 5 minutes ago, sometimes even when they are still in the context window, uncompacted!
It doesn’t need to be human level, and if I walk into a room and forget why I went in am I no longer a general intelligence?
No they aren't
ChatGPT Health failed hilariously badly at just spotting emergencies.
A few weeks ago most of them failed hilariously badly at the question of whether you should drive or walk to the service station if you want to wash your car.
Idk about the health story, but in my use, ChatGPT has dramatically improved my understanding of my health issues and given sound and careful advice.
The second question sounds like a useless and artificial metric to judge by. The average person might miss such a “gotcha” logic quiz too, for the same reason: because they expect to be asked “is it walking distance.”
No one has ever relied on anyone else’s judgment, nor an AI, to answer “should I bring my car to the carwash.” Same for the ol’ “how many rocks shall I eat?” that people got the AI Overview tricked with.
I’m not saying anything categorically “is AGI” but by relying on jokes like this you’re lying to yourself about what’s relevant.
I would accuse you of nitpicking. My experience is that LLMs are generally as smart as the average human 90+% of the time. A lack of perfection, to me, doesn't mean it isn't AGI.
> If you showed someone what our computers can do with the latest LLMs now to someone 5 years ago they would probably say it sure looks a lot like AGI.
But this is a CPU! It's not a GPU / TPU. Even if you think we've achieved AGI, this is not where the matrix multiplication magic happens. It's pure marketing hype.
I did AI back before it was cool, and I think we have AGI. IMO the whole distinction was between extremely narrow AI and general intelligence. A classifier for engine failure can only do that; a route planner can only do that…
Now we have things I can ask a pretty arbitrary question and they can answer it. Translate, understand nuance (the multitude of ways of parsing sentences; getting sarcasm was an unsolved problem), write code, go and read and find answers elsewhere, use tools… these aren’t one-trick ponies.
There are finer points to this where the level of autonomy or learning over time may be important parts to you but to me it was the generality that was the important part. And I think we’re clearly there.
AGI doesn’t have to be human-level, and it doesn’t have to be equal to experts in every field all at once.
An interesting perspective: general, absolutely, just nowhere near superhuman in all kinds of tasks. Not even close to human in many. But intelligent? No doubt, far beyond all not entirely unrealistic expectations.
But that seems almost like an unavoidable trade-off. Fiction about the old "AI means logic!" type of AI is full of thought experiments where the logic imposes a limitation and those fictional challenges appear to be just what the AI we have excels at.
> LLMs are actually smarter than the majority of humans right now
I consider myself a bit of a misanthrope but this makes me an optimist by comparison.
Even stupid people are waaaaaay smarter than any LLM.
The problem is the continued habit humans have of anthropomorphizing computers that spit out pretty words. It’s like Eliza only prettier. More useful for sure. Still just a computer.
I really feel like we have not encountered the same stupid people. Most stupid people I know respond to every question with some form of will-not-attempt. What's 74 times 2? Use a calculator! Should I drive or walk to the car wash? Not my problem! How many R's in strawberry? Who cares! They'll lose to the LLM 100%.
> Still just a computer.
I don't believe in a separation of mind and spirit. So I do think fundamentally, outside of a reliance on quantum effects in cognition (some have theorized this, but it isn't proven), its processes can be replicated in a fashion in computers. So I think that intelligence likely can be "just a computer" in theory, and I think we are in the era where this is now true.
A human can think logically with reason (not to say they are smart or smarter), but LLMs cannot. You can convince an LLM that anything is correct and it will believe you. You can't convince a human of just anything.
I can't argue that LLMs don't know an absolutely insane amount of information about everything. But you can't just say LLMs are smarter than most humans. We've already decided that smartness is not about how much data you know, but about thinking about that data with logical reasoning, including the fact that it may or may not be true.
I can run an LLM through absolutely incorrect data and tell it that data is 100% true, then ask it questions about that data and get those incorrect results as answers. That's not easy to do with humans.
That just implies LLMs are suggestible. The same is true of children. As we get older and build a more complete world model in our heads, it's harder to get us to believe things which go against that model.
Tell a 5-yr old about Santa, and they will believe it sincerely. Do the same with a 30-year old immigrant who has never heard of Santa, and I suspect you'll have a harder time.
That's not because the 5-year old is dumber, but just because their life-experience ("training data") is much more limited.
Even so, trying to convince a modern LLM of something ridiculous is getting harder. I invite you to try telling ChatGPT or Gemini that the president died a week ago and was replaced by a body-double facsimile until January 2027, so that Vance can have a full term. I suspect you'll have significant difficulty.
The problem with definitions is that they are all wrong when you try to apply them outside mathematical models. Descriptive terms are more useful than normative ones when you are dealing with the real world. Their meaning naturally evolves when people understand the topic better.
General intelligence, as a description, covers many aspects of intelligence. I would say that the current AIs are almost but not quite generally intelligent. They still have severe deficiencies in learning and long-term memory. As a consequence, they tend to get worse rather than better with experience. To work around those deficiencies, people routinely discard the context and start over with a fresh instance.
AGI wouldn't lie to me every chance it got. Current LLMs are just slop generators, nothing more.
> I would argue that LLMs are actually smarter than the majority of humans right now
This (surprisingly common) view betrays a wild misunderstanding of how LLMs work.
"look, it completely lied about params that don't exist in a CLI!"
AGI doesn't mean perfect. It means human-like, and the latest models are pretty human-like in terms of their fallibility and capabilities.