Comment by aerhardt
1 day ago
We've been having really good models for a couple of years now... What else is needed for that 10% growth? Agents? New apps? Time? Deployment in enterprise and the broader economy?
I work in the latter (I'm the CTO of a small business), and here's how our deployment story is going right now:
- At user level: Some employees use it very often for producing research and reports. I use it like mad for anything and everything, from technical research and solution design to coding.
- At systems level: We have some promising near-term use cases in tasks that could otherwise be done through more traditional text AI techniques (NLU and NLP), involving primarily transcription, extraction and synthesis.
- Longer term stuff may include text-to-SQL to "democratize" analytics, semantic search, research agents, coding agents (as a business that doesn't yet have the resources to hire FTE programmers, I would kill for this). Tech feels very green on all these fronts.
The present and near-term stuff is fantastic in its own right - the company is definitely more productive, and I can see us reaping compound benefits in years to come - but somehow it still feels like a far cry from the kind of change that would cause 10% growth in the entire economy, for sustained periods of time...
Obviously this is a narrow and anecdotal view, but every time I ask what earth-shattering stuff others are doing, I get pretty lukewarm responses, and everything in the news and my research points in the same direction.
I'd love to hear your takes on how the tech could bring about a new Industrial Revolution.
Under the 3-factor economic growth model, there are three ways to increase economic growth:
1) Increase productivity (produce more from the same inputs)
2) Increase labor (more people working, or more hours worked)
3) Increase capital (build more equipment/infrastructure)
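For concreteness, the textbook version of this model is a Cobb-Douglas production function, Y = A · K^α · L^(1−α), where A is productivity, K is capital, and L is labor. A minimal sketch in Python (the α = 0.3 capital share is purely an illustrative assumption, not a measured value):

```python
def output(A, K, L, alpha=0.3):
    """Cobb-Douglas production: Y = A * K^alpha * L^(1 - alpha).

    A = total factor productivity, K = capital, L = labor.
    alpha (capital's share of output) is set to 0.3 for illustration only.
    """
    return A * K**alpha * L**(1 - alpha)

baseline = output(A=1.0, K=1.0, L=1.0)         # 1.0
productivity_2x = output(A=2.0, K=1.0, L=1.0)  # channel (1): output doubles
labor_10x = output(A=1.0, K=1.0, L=10.0)       # channel (2): output ~5x
```

Note the channels compound differently: productivity gains pass through one-for-one, while a 10x labor supply raises output by 10^0.7 ≈ 5x here because of capital's diminishing share.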
Early AI gains will likely come from greater productivity (1), but as time goes on, if AI can approximate the output of a worker, that could dramatically increase the effective labor supply (2).
Imagine what the US economy would look like with 10x or 100x workers.
I don't believe it yet, but that's the sense I'm getting from discussions with senior folks in the field.
The thesis is simple: these programs are smart now, but unreliable when executing complex, multi-step tasks. If that improves (whether because the models get so smart that they never make a mistake in the first place, or because they get good enough at checking their work and correcting it), we can give them control over a computer and run them in a loop in order to function as drop-in remote workers.
The economic growth would then come from every business having access to a limitless supply of tireless, cheap, highly intelligent knowledge workers.
I agree that it is that "simple." What I worry about, aside from mass unemployment, is the C-suite buying into these tools before they are actually good enough. That seems inevitable.
> We've been having really good models for a couple of years now...
Don’t let the “wow!” factor of LLMs’ early novelty cloud your judgement. Today’s models are very noticeably smarter, faster, and overall more useful.
I’ve had a few toy problems that I’ve fed to various models since GPT 3 and the difference in output quality is stark.
Just yesterday I was demonstrating to a colleague that both o3 mini and Gemini Flash Thinking can solve a fairly esoteric coding problem.
Just six months ago, that same problem produced multiple failed attempts that had to be manually stitched together; now 3 out of 5 responses are valid, with only about 5% of output lines needing light touch-ups.
That’s huge.
PS: It’s a common statistical error to measure progress by success rate when the error rate is what matters. Going from 99% success to 99.9% is not a 0.9-point improvement, it’s a 10x reduction in errors! Most AI benchmarks still report success rate, but they ought to start focusing on error rate soon to avoid underselling their capabilities.
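To make that arithmetic concrete (using the same 99% / 99.9% figures from the paragraph above):

```python
def error_rate(success_rate):
    """Errors are what remain after successes."""
    return 1.0 - success_rate

before = error_rate(0.99)   # 0.01  -> roughly 1 error per 100 attempts
after = error_rate(0.999)   # 0.001 -> roughly 1 error per 1,000 attempts

# The success rate moved by under one percentage point, but errors
# became ~10x rarer, which is the figure that matters for reliability.
improvement = before / after
```

This framing matters most for multi-step agent tasks, where the chance of completing n steps without a failure scales like (1 − error_rate)^n.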