Comment by killerstorm
6 months ago
On the other hand we have DeepMind / Demis Hassabis, delivering:
* AlphaFold - SotA protein folding
* AlphaEvolve + other stuff accelerating research mathematics: https://arxiv.org/abs/2511.02864
* "An AI system to help scientists write expert-level empirical software" - demonstrating SotA results for many kinds of scientific software
So what's the "fantasy" here, the actual lab delivering results or a sob story about "data workers" and water?
I believe AlphaFold, AlphaEvolve etc are _not_ looking to get to AGI. The whole article is a case against AGI chasing, not ML or LLM overall.
AlphaEvolve is a general system which works in many domains. How is that not a step towards general intelligence?
And it is effectively a loop around LLM.
But my point is that we have evidence that Demis Hassabis knows his shit. Just doubting him based on a general vibe is not smart.
AlphaEvolve is a system for evolving symbolic computer programs.
Not everything that DeepMind works on (such as AlphaGo, AlphaFold) is directly, or even indirectly, part of a push towards AGI. They seem to genuinely want to accelerate scientific research, and for Hassabis personally this seems to be his primary goal, and might have remained his only goal if Google hadn't merged Google Brain with DeepMind and forced more of a product/profit focus.
DeepMind do appear to be defining, and approaching, "AGI" differently than the rest of the pack, who are LLM-scaling true believers, but exactly what their vision is for an AGI architecture, at varying timescales, remains to be seen.
Has he, his team, or DeepMind used any AGI rhetoric, even just as advertising?
Yeah, in reality it seems that DeepMind are more the good guys, at least in comparison to the others.
You can argue about whether the pursuit of "AGI" (however you care to define it) is a positive for society, or even whether LLMs are, but the AI companies are all pursuing this, so that doesn't set them apart.
What makes DeepMind different is that they are at least also trying to use AI/ML for things like AlphaFold that are a positive, and Hassabis appears genuinely passionate about the use of AI/ML to accelerate scientific research.
It seems that some of the other AI companies are now belatedly trying to at least appear to be interested in scientific research, but whether this is just PR posturing or something they will dedicate substantial resources to, and be successful at, remains to be seen. It's hard to see OpenAI, planning to release SexChatGPT, as being sincerely committed to anything other than making themselves a huge pile of money.
Hao's is not just an "AI is bad" book... Those exist, but Hao is a highly credentialed journalist.
I’m not sure you understand what AGI is given the citations you’ve provided.
> "While AlphaEvolve is currently being applied across math and computing, its *general* nature means it can be applied to any problem whose solution can be described as an algorithm, and automatically verified. We believe AlphaEvolve could be transformative across many more areas such as material science, drug discovery, sustainability and wider technological and business applications."
Is that not general enough for you? Or not intelligent enough?
Do you imagine AGI as a robot and not as a datacenter solving all kinds of problems?
> Do you imagine AGI as a robot and not as a datacenter solving all kinds of problems?
AGI means it can replace basically all human white-collar work. AlphaEvolve can't do that, while average humans can. White-collar work is mostly done by average humans, after all; if average humans can learn it, then so should an AGI.
An easier test is that the AGI must be able to beat most computer games without being trained on those games. Average humans can beat most computer games without anyone telling them how: they play and learn until they beat the game, maybe 40 hours later.
AGI was always defined as an AI that can do what typical humans can do, like learn a new domain to become a professional, or play and beat most video games. If the AI can't study to become a professional, then it's not as smart or general as an average human. So unless it can replace most professionals, it's not an AGI, because you can train a human of average intelligence to become a professional in most domains.
Isn't the point that DeepMind is producing products that provide value to humanity, whereas AGI looks like something that will mainly produce harm?