Comment by hi_hi
10 days ago
Here's a thought. Let's all arbitrarily agree AGI is here. I can't even be bothered discussing what the definition of AGI is. It's just here, accept it. Or vice versa.
Now what....? What's happening right now that should make me care that AGI is here (or not)? What's the magic thing that's happening with AGI that wasn't happening before?
<looks out of window> <checks news websites> <checks social media...briefly> <asks wife>
Right, so, not much has changed from 1-2 years ago that I can tell. The job market's a bit shit if you're in software...is that what we get for billions of dollars spent?
Cultural changes take time. It took decades for the internet to move from nerdy curiosity to an essential part of everyone's life.
The writing is on the wall. Even if there's no new advances in technology, the current state is upending jobs, education, media, etc
> It took decades
It took one September. Then as soon as you could take payments on the internet the rest was inevitable and in _clear_ demand. People got on long waiting lists just to get the technology in their homes.
> no new advances in technology
The reason the internet became so accessible is because Moore was generally correct. There were two corresponding exponential processes that vastly changed the available rate of adoption. This wasn't at all like cars being introduced into society. This was a monumental shift.
I see no advances in LLMs that suggest any form of the same exponential processes exist. In fact the inverse is true. They're not reducing power budgets fast enough to even imagine that they're anywhere near AGI, and even if they were, that they'd ever be able to sustainably power it.
> the current state is upending jobs
The difference is companies fought _against_ the internet because it was so disruptive to their business model. This is quite the opposite. We don't have a labor crisis, we have a retention crisis, because companies do not want to pay fair value for labor. We can wax on and off about technology, and perceptrons, and training techniques, or power budgets, but this fundamental fact seems the hardest to ignore.
If they're wrong this all collapses. If I'm wrong I can learn how to write prompts in a week.
> It took one September.
It's the classic "slowly, then suddenly" paradigm. It took decades to get to that one September. Then years more before we all had internet in our pocket.
> The reason the internet became so accessible is because Moore was generally correct.
Can you explain how Moore's law is relevant to the rise of the internet? People didn't start buying couches online because their home computer lacked sufficient compute power.
> I see no advances in LLMs that suggest any form of the same exponential processes exist.
LLMs have seen enormous growth in power over the last 3 years. Nothing else comes close. I think they'll continue to get better, but critically: even if LLMs stay exactly as powerful as they are today, it's enough to disrupt society. IMHO we're already at AGI.
> The difference is companies fought _against_ the internet
Some did, some didn't. As in any cultural shift, there were winners and losers. In this shift, too, there will be winners and losers. The panicked spending on data centers right now is a symptom of the desire to be on the right side of that.
> because companies do not want to pay fair value for labor.
Companies have never wanted to pay fair value for labor. That's a fundamental attribute of companies, arising as a consequence of the system of incentives provided in capitalism. In the past, there have been opportunities for labor to fight back: government regulation, unions. This time that won't help.
> If I'm wrong I can learn how to write prompts in a week.
Why would you think that anyone would want you to write prompts?
what September?
I really think corporations are overplaying their hand if they think they can transform society once again in the next 10 years.
Rapid deindustrialization followed by the internet and social media almost broke our society.
Also, I don’t think people necessarily realize how close we were to the cliff in 2007.
I think another transformation now would rip society apart rather than take us to the great beyond.
I worry that if the reality lives up to investors dreams it will be massively disruptive for society which will lead us down dark paths. On the other hand if it _doesn't_ live up to their dreams, then there is so much invested in that dream financially that it will lead to massive societal disruption when the public is left holding the bag, which will also lead us down dark paths.
I think corporations can definitely transform society in the near future. I don't think it will be a positive transformation, but it will be a transformation.
Most of all, AI will exacerbate the lack of trust in people and institutions that was kicked into high gear by the internet. It will be easy and cheap to convince large numbers of people about almost anything.
As a young adult in 2007, what cliff were we close to?
The GFC was a big recession, but I never thought society was near collapse.
I'm still not buying that AI will change society anywhere near as much as the internet, or smartphones for that matter.
The internet made it so that you can share and access information in a few minutes, if not seconds.
Smartphones built on the internet by making that sharing and access of information possible from anywhere and by anyone.
AI seems to occupy the same space as Google in the broader internet ecosystem. I don't know what AI provides me that a few hours of Google searches wouldn't. It makes information retrieval faster, but that was never the hard part. The hard part was understanding the information, so that you're able to apply it to your particular situation.
Being able to write to-do apps X1000 faster is not innovation!
You are assuming that the change can only happen in the west.
The rest of the world has mostly been experiencing industrialisation, and was only indirectly affected by the great crash.
If there is a transformation in the rest of the world the west cannot escape it.
A lot of people in the west seem to have their heads in the sand, very much like when Japan and China tried to ignore the west.
China is the world's second biggest economy by nominal GDP, India the fourth. We have a globalised economy where everything is interlinked.
> Cultural changes take time. It took decades for the internet to move from nerdy curiosity to an essential part of everyone's life.
99% of people only ever use proprietary networks from FAANG corporations. That's not "the internet", that's an evolution of CompuServe and AOL.
We got TCP/IP and the "web-browser" as a standard UI toolkit stack out of it, but the idea of the world wide web is completely dead.
Shockingly rare how few realize this. It's a series of mega cities interconnected by ghost towns out here.
yeah, this is a good point, the transition and transformation to new technologies takes time. I'm not sure I agree the current state is upending things though. It's forcing some adaptation for sure, but the status quo remains.
It also took years for the Internet to be usable by most folks. It was hard, expensive and impractical for decades.
Just about the time it hit the mainstream, coincidentally, is when the enshittification began to go exponential. Be careful what you wish for.
Allow me to clarify: I'm not wishing for change. I am an AI pessimist. I think our society is not prepared to deal with what's about to happen. You're right: AI is the key to the enshittification of everything, most of all trust.
What's happening with AGI depends on what you mean by AGI, so "can't even be bothered discussing what the definition is" means you can't say what's happening.
My usual way of thinking about it is that AGI means it can do all the stuff humans do, which means you'd probably, after a while, look out the window and see robots building houses and the like. I don't think that's happening for a while yet.
Indeed: particularly given that—just as a nonexhaustive "for instance"—one of the fairly common things expected in AGI is that it's sapient. Meaning, essentially, that we have created a new life form, that should be given its own rights.
Now, I do not in the least believe that we have created AGI, nor that we are actually close. But you're absolutely right that we can't just handwave away the definitions. They are crucial both to what it means to have AGI, and to whether we do (or soon will) or not.
I'm not sure how the rights thing will go. Humans have proved quite able not to give many rights to animals or other groups of humans even if they are quite smart. Then again there was that post yesterday with a lady accusing OpenAI of murdering her AI boyfriend by turning off 4o so no doubt there will be lots of arguments over that stuff. (https://news.ycombinator.com/item?id=47020525)
Who would the robots build houses for? No one has a job and no one is having kids in that future.
Where are the robots going to sleep? Outside in the rain?
The billionaire elite. Isn’t it obvious? They want to get rid of us
Before enlightenment^WAGI: chop wood, fetch water, prepare food
After enlightenment^WAGI: chop wood, fetch water, prepare food
One of the most impactful books I ever read was Alvin Toffler's Future Shock.
Its core thesis was: Every era doubled the amount of technological change of the prior era in one half the time.
At the time he wrote the book in 1970, he was making the point that the pace of technological change had, for the first time in human history, rendered the knowledge of society's elders - previously the holders of all valuable information - irrelevant.
The pace of change has continued to steadily increase in the ensuing 55 years.
Edit: grammar
> Here's a thought. Let's all arbitrarily agree AGI is here.
A slightly different angle on this - perhaps AGI doesn't matter (or perhaps not in the ways that we think).
LLMs have changed a lot in software in the last 1-2 years (indeed, the last 1-2 months); I don't think it's a wild extrapolation to see that'll come to many domains very soon.
Which domains? Will we see a lot of changes in plumbing?
If most of your work involves working with a monitor and keyboard, you're in one of the domains.
Even if it doesn't, you will be indirectly affected. People will flock to trades if knowledge work is no longer a source of viable income.
Depends on the cost to run it. Say it costs $5k to do a year's worth of something intellectual with it. That means the price ceiling on 90% of lawyer/accountant/radiologist/low-to-middle-management work is $5k now. It will be epic and temporarily terrible when it happens, as long as reasonably competent models are open source. I also don't think we are near that at all, though.
> Let's all arbitrarily agree AGI is here. I can't even be bothered discussing what the definition of AGI is.
There is a definition of AGI the AI companies are using to justify their valuation. It's not what most people would call AGI but it does that job well enough, and you will care when it arrives.
They define it as an AI that can develop other AIs faster than the best team of human engineers. Once they build one of those in house, they outpace the competition and become the winner that takes all. Personally I think it's more likely they will all achieve it at a similar time. That would mean the race continues, accelerating as fast as they can build data centres and power plants to feed them.
It will impact everyone, because the already dizzying pace of the current advances will accelerate. I don't know about you, but I'm having trouble figuring out what my job will be next year as it is.
An AI that just develops other AIs could hardly be called "general" in my book, but my opinion doesn't count for much.
May I ask, what experiences are you personally having with LLMs right now that are leading you to the conclusion that they will become "intelligent" enough to identify, organise, and build advancing improvements to themselves, without any human interaction, in the near future (1-2 years, let's say)?
> May I ask, what experiences are you personally having with LLMs right now that are leading you to the conclusion that they will become "intelligent" enough to identify, organise, and build advancing improvements to themselves, without any human interaction, in the near future (1-2 years, let's say)?
None, as I don't develop LLMs.
I wasn't saying I think they will succeed, but I think it is worth noting their AGI ambitions are not as grand as the term implies. Nonetheless, if they achieve them, the world will change.
If AGI were already here, actions would be so greatly accelerated that humans wouldn't have time to respond.
Remember that weather balloon the US found a few years ago that for days was on the news as a Chinese spy balloon?
Well, whether it was a spy balloon or a weather balloon, the first hint of its existence could have triggered a nuclear war that could have already been the end of the world as we know it, because AGI will almost certainly be deployed to control the U.S. and Chinese military systems, and it would have acted well before any human had time to intercept its actions.
That’s the apocalyptic nuclear winter scenario.
There are many other scenarios.
An AGI which has been infused with a tremendous amount of ethics, so the above doesn't happen, may also lead to terrible outcomes for humans. An AGI would essentially be a different species (although a non-biological one). If it replicated human ethics, even as we apply them inconsistently, it would learn that treating other species brutally is acceptable (we breed, enslave, imprison, torture, and then kill over 80 billion land animals annually in animal agriculture, and possibly trillions of water animals). There's no reason it wouldn't do that to us.
Finally, if we infuse it with our ethics but it's smart enough to apply them consistently (even a basic application of our ethics would have us end animal agriculture immediately), then it may realize that humans are wrong and not do the same thing to us. Even so, it might still create an existential crisis for humans, as our entire identity is based on thinking we are smarter and intellectually superior to all other species, which wouldn't be true anymore. Further, it would erode beliefs in gods and other supernatural BS we believe, which might at the very least lead humans to stop reproducing due to the existential despair this might cause.
You're talking about superintelligence. AGI is just...an AI that's roughly on par with humans on most things. There's no inherent reason why AGI will lead to ASI.
What a silly comment. You're literally describing the plot of several sci-fi movies. Nuclear command and control systems are not taken so lightly.
And as for the Chinese spy balloon, there was never any risk of a war (at least not from that specific cause). The US, China, Russia, and other countries routinely spy on each other through a variety of unarmed technical means. Occasionally it gets exposed and turns into a diplomatic incident but that's about it. Everyone knows how the game is played.
"Nuclear command and control systems are not taken so lightly."
https://gizmodo.com/for-20-years-the-nuclear-launch-code-at-...
AGI is not a death sentence for humanity. It all depends on who leverages the tool. And in any case, AGI won’t be here for decades to come.
Your sentence seems to imply that we will delegate all AI decisions to one person who can decide how he wants to use it - to build or destroy.
Strong agentic AIs are a death sentence memo pad (or a malevolent djinn lamp if you like) that anyone can write on, because the tools will be freely available to leverage. A plutonium breeder reactor in every backyard. Try not to think of paperclips.
Sounds fun let's do it.
I do strongly agree on the framing, but I'd argue with the conclusion
Yeah, it really doesn't matter if AGI has happened, is going to happen, will never happen, whatever. No matter what sort of definition we make for it, someone's always going to disagree anyway. For a looong time, we thought the Turing test was the standard, and that only a truly intelligent computer could beat it. It's been blown out of the water for years now, and now we're all arguing about new definitions for AGI
At the end of the day, like you say, it doesn't matter a bit how we define terms. We can label it whatever we want, but the label doesn't change what it can DO
What it can DO is the important part. I think a lot of software devs are coming to terms with the idea that AI will be able to replace vast chunks of our jobs in the very near future.
If you use these things heavily, you can see the trajectory.
6 months ago I'd only trust them for boilerplate code generation and writing/reviewing short in-line documentation.
Today, with the latest models and tools, I'm trusting them with short/low impact tasks (go implement this UI fix, then redeploy the app locally, navigate to it, and verify the fix looks correct).
6 months from now, my best guess is that they'll continue to become more capable of handling longer + more complex tasks on their own.
5 years from now, I'm seeing a real possibility that they'll be handling all the code, end to end.
Doesn't matter if we call that AGI or not. It very much will matter whose jobs get cut, because one person with AI can do the work of 20 developers
AGI would render humans obsolete and eradicate us sooner or later.
Pretty sure marketing teams are already working on AGI v2
AGI is a pipe dream and will never exist
Odd to see someone so adamantly insist that we have souls on a forum like HN.
I think you are missing the point: If we assume that AGI is *not* yet here, but may be here soon, what will change when it arrives? Those changes could be big enough to affect you.
I'm missing the point? I literally asked the same thing you did.
>Now what....? What's happening right now that should make me care that AGI is here (or not).
Do you have any insight into what those changes might concretely be? Or are you just trying to instil fear in people who lack critical thinking skills?
You did not ask the same thing. You framed the question such that readers are supposed to look at their current lives and realize nothing is different ergo AGI is lame. Your approach utilizes the availability bias and argument from consequences logical fallacies.
I think what you are trying to say is: can we define AGI, so that we can have an intelligent conversation about what it will mean for our daily lives? But you oddly introduced your argument by stating you didn't want to explore this definition...
people are taking actions based on its advice.
The economy is shit if you’re anything except a nurse or providing care to old people.
Electricians are also doing pretty well. Someone has to wire up those new data centers.
> The job market's a bit shit if you're in software
That's Trump's economy, not LLMs.
Many devs don’t write code anymore. Can really deliver a lot more per dev.
Many people slowly losing jobs and can’t find new ones. You’ll see effects in a few years
Deliver a lot more tech debt
My LLMs do create non-zero amounts of tech debt, but they are also massively decreasing human-made tech debt by finding mountains of code that can be removed or refactored when using the newest frameworks.
That tech debt will be cleaned up with a model in 2 years. Not that humans don't make tech debt.
I've been writing code for 20 years. AI has completely changed my life and the way I write code and run my business. Nothing is the same anymore, and I feel I will be saying that again by the end of 2026. My productive output as a programmer in software and business has expanded 3x *compounding monthly*.
>My productive output as a programmer in software and business has expanded 3x compounding monthly.
In what units?
Tasks completed in my to-do list software; I've been measuring my output for 5 years. Time saved, because I built one-off tools to automate many common workflows. And yes, even dollars earned.
I don’t mean 3x compounding monthly every month, I mean 3x total since I started using Claude Code about 6 months ago but the benefits keep compounding.
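To make the arithmetic concrete (my own back-of-envelope sketch, not the commenter's figures): 3x total growth over 6 months works out to roughly 20% per month if the gains compounded evenly:

```python
# Back-of-envelope: what steady monthly rate compounds to 3x over 6 months?
total_growth = 3.0
months = 6

# Solve factor**months == total_growth for the monthly factor
monthly_factor = total_growth ** (1 / months)

print(f"~{(monthly_factor - 1) * 100:.0f}% per month")  # roughly 20% per month
```

So "3x over 6 months" and "compounding monthly" are consistent claims at about a 1.2x monthly factor, not 3x every month.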
GWh
Vibes
Going from punch cards to terminals also "completely changed my life and the way I write code and run my business"
Firefox introducing their dev debugger many years ago "completely changed my life and the way I write code and run my business"
You get the idea. Yes, the day to day job of software engineering has changed. The world at large cares not one jot.
I mean 2025 had the weakest job creation growth numbers outside of recession periods since at least 2003. The world seems to care in a pretty tangible way. There are other big influencing factors for that, too, of course.
Okay. So software engineers are vastly more efficient. Good I guess. "Revolutionize the entire world such that we rethink society down to its very basics like money and ownership" doesn't follow from that.
Man you guys are impatient. It takes decades even for earth shattering technologies to mature and take root.
Are you working for 3x less time, compounding monthly?
Are you making 3x the money, compounding monthly?
No?
Then what's the point?
Yes and yes.
It's weird that you guys keep posting the same comments with the exact same formatting
You're not fooling anyone
I actually think it is here. Singularity happened. We're just playing catch up at this point.
Has it run away yet? Not sure, but is it currently in the process of increasing intelligence with little input from us? Yes.
Exponential graphs always have a slow curve in the beginning.
Didn't you get the memo? Tuesday. Tuesday is when the Singularity happens.
Will there still be ice cream after Tuesday? General societal collapse would be hard to bear without ice cream.
Tuesday at 4 p.m to be specific.