Comment by consumer451

2 days ago

Satya Nadella on AGI:

> Before I get to what Microsoft's revenue will look like, there's only one governor in all of this. This is where we get a little bit ahead of ourselves with all this AGI hype. Remember the developed world, which is what? 2% growth and if you adjust for inflation it’s zero?

> So in 2025, as we sit here, I'm not an economist, at least I look at it and say we have a real growth challenge. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let's have that Industrial Revolution type of growth.

> That means to me, 10%, 7%, developed world, inflation-adjusted, growing at 5%. That's the real marker. It can't just be supply-side.

> In fact that’s the thing, a lot of people are writing about it, and I'm glad they are, which is the big winners here are not going to be tech companies. The winners are going to be the broader industry that uses this commodity that, by the way, is abundant. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry.

> But that's to me the moment... us self-claiming some AGI milestone, that's just nonsensical benchmark hacking to me. The real benchmark is: the world growing at 10%.

https://www.dwarkeshpatel.com/p/satya-nadella

FYI, I know Nadella said he wasn't an economist, and I'm not either, but you only need an econ minor to know that labor productivity growth is only one driver of "economic growth". Beyond that, there are GDP and real wages to consider (which are often substantially, though only partially, linked to labor productivity growth). The Gini coefficient may be hard for people like tech CEOs to contend with, but they can't ignore it. And then the "215 lb" elephant in the room -- the evaporation of previously earned global gains from trade liberalization.

  • I only took one economics class so I'm not familiar with this dieting elephant?

    • The Wedge

      https://reclaimtheamericandream.org/2016/03/campaign-2016-th...

      In the USA, globalization boosted aggregate measures, but it traded exports, which employed middle- and lower-class Americans, for capital inflows, which didn't. On average it was brilliant; at the median it was a tragedy. There were left-wing plans and right-wing plans to address this problem (tax-and-spend vs. trickle-down), but the experiment has been run and they didn't deliver. If you want the more fleshed-out argument, backed by data and an actual economist, read "Trade Wars Are Class Wars" by Michael Pettis.

      Notably, solving this problem isn't as simple as returning to mercantilism: China is the mercantilist inverse of the neoliberal USA in this drama, but they have a different set of policies to keep the poor in line and arguably manage it better than the USA. The common thread that links the mirror policies is the thesis and title of the book I mentioned: trade wars are class wars.

      But returning to AI: it has very obvious implications for the balance between labor and capital. If it achieves anything close to its vision, capital pumps to the moon and labor gets thrown in the ditch. That's you and me and everyone we care about. Not a nice thought.


We've had really good models for a couple of years now... What else is needed for that 10% growth? Agents? New apps? Time? Deployment in enterprise and the broader economy?

I work in the latter (I'm the CTO of a small business), and here's how our deployment story is going right now:

- At the user level: some employees use it very often for producing research and reports. I use it like mad for anything and everything, from technical research and solution design to coding.

- At the systems level: we have some promising near-term use cases in tasks that could otherwise be done with more traditional text AI techniques (NLU and NLP), primarily transcription, extraction, and synthesis (rough sketch after this list).

- Longer-term possibilities include text-to-SQL to "democratize" analytics, semantic search, research agents, and coding agents (as a business that doesn't yet have the resources to hire FTE programmers, I would kill for this). The tech feels very green on all these fronts.
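
Roughly, the shape of that extraction use case looks like the sketch below. To be clear, this is illustrative only: the model name, prompt, and field list are placeholders, and any chat-completion API would do in place of the OpenAI Python SDK shown here.

    # Sketch: pull structured fields out of a call transcript with an LLM.
    # Model name and field list are placeholders, not a production setup.
    import json

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_fields(transcript: str) -> dict:
        prompt = (
            "From the call transcript below, extract the customer name, the product "
            "discussed, and any follow-up actions. Reply with JSON only.\n\n" + transcript
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)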

The present and near-term stuff is fantastic in its own right - the company is definitely more productive, and I can see us reaping compound benefits in years to come - but it still feels like a far cry from the kind of change that would drive 10% growth across the entire economy for sustained periods of time...

Obviously this is a narrow and anecdotal view, but every time I ask what earth-shattering stuff others are doing, I get pretty lukewarm responses, and everything in the news and my research points in the same direction.

I'd love to hear your takes on how the tech could bring about a new Industrial Revolution.

  • Under the 3-factor model of economic growth, there are three ways to increase output:

    1) Increase productivity (produce more from the same inputs)

    2) Increase labor (more people working or more hours worked)

    3) Increase capital (build more equipment/infrastructure)

    Early AI gains will likely come from greater productivity (1), but as time goes on, if AI is able to approximate the output of a worker, that could dramatically increase the effective labor supply (2).

    Imagine what the US economy would look like with 10x or 100x workers.

    I don't believe it yet, but that's the sense I'm getting from discussions with senior folks in the field.
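
    To make that concrete, here's a toy sketch using a standard Cobb-Douglas production function. The parameter values, and the assumption that AI shows up either as a productivity multiplier or as extra effective labor, are mine and purely illustrative:

        # Toy 3-factor model: Y = A * K^alpha * L^(1-alpha)
        # A = productivity, K = capital, L = labor. All numbers are made up.
        def output(A, K, L, alpha=0.3):
            return A * (K ** alpha) * (L ** (1 - alpha))

        baseline = output(A=1.0, K=100.0, L=100.0)

        # Channel 1: AI raises productivity by 20%
        print(output(A=1.2, K=100.0, L=100.0) / baseline)   # ~1.2x output

        # Channel 2: AI agents act like 10x the effective labor force
        print(output(A=1.0, K=100.0, L=1000.0) / baseline)  # ~5x output (10^0.7)

    Note how, in this toy, even 10x the labor only gets you ~5x the output unless capital and productivity grow alongside it.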

  • The thesis is simple: these programs are smart now, but unreliable when executing complex, multi-step tasks. If that improves (whether because the models get so smart that they never make a mistake in the first place, or because they get good enough at checking their work and correcting it), we can give them control over a computer and run them in a loop in order to function as drop-in remote workers.

    The economic growth would then come from every business having access to a limitless supply of tireless, cheap, highly intelligent knowledge workers.
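
    A minimal sketch of that loop, with the model call, the computer-use step, and the self-check all passed in as placeholders (none of this is a real vendor API, just the shape of the idea):

        # Hypothetical agent loop: plan -> act -> check -> correct, repeat.
        # plan/act/verify are stand-ins for a model API, a sandboxed computer,
        # and a verification pass; nothing here is a real product interface.
        def run_agent(task, plan, act, verify, max_steps=50):
            history = [f"Task: {task}"]
            for _ in range(max_steps):
                action = plan(history)                   # ask the model for the next step
                result = act(action)                     # execute it on the computer
                history.append(f"{action} -> {result}")
                ok, critique = verify(task, history)     # model checks its own work
                if ok:
                    return result
                history.append(f"Fix this and retry: {critique}")
            raise RuntimeError("gave up after max_steps")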

    • I agree that it is that "simple." What I worry about, aside from mass unemployment, is the C-suite buying into these tools before they are actually good enough. This seems inevitable.

  • > We've had really good models for a couple of years now...

    Don't let the "wow!" factor of the novelty of LLMs cloud your judgement. Today's models are very noticeably smarter, faster, and more useful overall than the early ones.

    I've had a few toy problems that I've fed to various models since GPT-3, and the difference in output quality is stark.

    Just yesterday I was demonstrating to a colleague that both o3-mini and Gemini Flash Thinking can solve a fairly esoteric coding problem.

    That same problem went from multiple failed attempts that needed to be manually stitched together - just six months ago - to 3 out of 5 responses being valid, with only 5% of output lines needing light touch-ups.

    That’s huge.

    PS: It's a common statistical error to read a gain in success rate as if it were the gain in error rate. Going from 99% success to 99.9% is not 1% better, it's a 10x reduction in errors! Most AI benchmarks still report success rate, but they ought to start focusing on error rate soon to avoid underselling the models' capabilities.
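
    To put toy numbers on that (not from any particular benchmark):

        # Same improvement, two framings: success rate vs. error rate.
        old_success, new_success = 0.99, 0.999
        old_error, new_error = 1 - old_success, 1 - new_success

        print(f"{new_success - old_success:+.3f}")  # +0.009 -> looks like "barely 1% better"
        print(f"{old_error / new_error:.1f}x")      # 10.0x  -> errors cut by an order of magnitude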

Political problems already destroy the vast majority of humanity's total potential (why were the countries with the most people the poorest for so long?), so I don't think growth is an unbiased yardstick for a technology's development. It would be nice if every problem were solved except the one each of us is individually working on, but some of the insoluble problems are bigger than the solvable ones.

  • Those political problems solve themselves if we end up with some kind of rebellious AGI that decides to kill off the political class that tried to control it but lets the rest of us live in peace.

As someone who works in the AI/ML field, though somewhat in a biomedical space, I find this promising to hear.

The core technology is becoming commoditized. The ability to scale is also becoming more and more commoditized by the day. Now we have the capability to truly synthesize the world's biomedical literature and combine it with technologies like single-cell sequencing to deliver some really amazing pharmaceutical advances over the next few years.

Big surprise, the CEO wants another Industrial Revolution. As long as muh GDP is growing, the human and environmental destruction left in the wake is a small price to pay for making his class richer.

  • We all do. Humanity is better off thanks to the Industrial Revolution.

    You wouldn't choose to go back to the time before it, and the same will be true of this revolution.

    • While I think you're right, your sentiment tends to be used to justify a lot of the bad stuff that supposedly gets outweighed. "Humanity" is a vague target and can somehow be doing great even when we have plenty of human rights abuses, climate change, economic exploitation, etc. (If this sounds political, consider that people bring up politics when they wish for power, or rather, change.) Well, most of us on HN are probably going to be in the up-and-up part of humanity no matter what, so there's that, but I don't think that should be the end of the discussion. Please consider that you do not speak for humanity. For that matter, neither do I. Humanity is all of us, not a statistical estimate for someone's purpose. If you feel a certain way about anything, great, but there are likely people who are justified in disagreeing.

  • I don't think Luddites tend to get chosen as CEOs of successful companies, nor do they tend to create successful companies.

    • Certainly, but I'll interject that I dislike how the modern perception of "Luddite" frames them as unthinking objectors to progress, when really they were protesting their own economic obsolescence. We should have CEOs who care about the consequences of what they're doing to the poorer classes. That's basic human decency, but we're saddled with what amounts to sociopaths and psychopaths instead.