Comment by cpard
4 days ago
AI can be an amazing productivity multiplier for people who know what they're doing.
This result reminded me of the C compiler case that Anthropic posted recently. Sure, agents wrote the code for hours, but there was a human there giving them directions, scoping the problem, finding the test suites needed for the agentic loops to actually work, and so on. In general, making sure the output actually works and that it's a story worth sharing with others.
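To make that loop concrete, here's a minimal sketch of the shape these scaffolds tend to take (run_agent and run_tests are hypothetical stand-ins, not Anthropic's actual tooling; the human's real contribution is scoping the task and assembling the test suite):

    # Minimal sketch of a human-scaffolded agentic loop (Python).
    # run_agent and run_tests are hypothetical callables supplied by
    # whoever builds the scaffold; they are not Anthropic's actual API.
    from typing import Callable, List, Optional

    def agentic_loop(
        run_agent: Callable[[str, List[str]], str],  # task + failures -> code
        run_tests: Callable[[str], List[str]],       # code -> failing tests
        task: str,                                   # the human-scoped problem
        max_iterations: int = 50,
    ) -> Optional[str]:
        failures: List[str] = []
        for _ in range(max_iterations):
            code = run_agent(task, failures)  # agent writes or revises code
            failures = run_tests(code)        # the suite a human tracked down
            if not failures:                  # everything passes: done
                return code
        return None  # loop stalled; a human has to step in

Without that human-supplied feedback signal, the loop has nothing to converge on.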
The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding. It works great for creating impressions and building brand value but also does a disservice to the actual researchers, engineers and humans in general, who do the hard work of problem formulation, validation and at the end, solving the problem using another tool in their toolbox.
>AI can be an amazing productivity multiplier for people who know what they're doing.
>[...]
>The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.
You're sort of acting like it's all or nothing. What about the humans who used to be that "force multiplier" on a team with the person guiding the research?
If a piece of software required a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.
For a more current example: do you think all the displaced Uber/Lyft drivers aren't going to think "AI took my job" just because there's a team of people in a building somewhere handling the occasional Waymo low-confidence intervention, as opposed to it being 100% autonomous?
Where I work, we're now building things that were completely out of reach before. The 90% job loss prediction would only hold true if we were near the ceiling of what software can do, but we're probably very, very far from it.
A website that cost hundreds of thousands of dollars in 2000 could be replaced by a WordPress blog built in an afternoon by a teenager in 2015. Did that kill web development? No, it just expanded what was worth building.
> If a piece of software required a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.
Yes, but this assumes a finite amount of software that people and businesses need and want. Will AI be the first productivity increase where humanity says ‘now we have enough’? I’m skeptical.
> Yes, but this assumes a finite amount of software that people and businesses need and want.
A lot of software exists because humans are needy and kinda incompetent, but we needed to enable them to process data at scale. Like, would you build SAP as it is today, for LLMs?
This is all inevitable with the trajectory of technology, and has been apparent for a long time. The issue isn't AI, it's that our leaders haven't bothered to think or care about what happens to us when our labor loses value en masse due to such advances.
Maybe it requires fundamentally changing our economic systems? Who knows what the solution is, but the problem is most definitely rooted in a lack of initiative by our representatives and an economic system that doesn't accommodate us when shit inevitably hits the fan in the labor markets.
There's only 90% job loss if you assume this is a zero-sum thing where humans and agents compete over a fixed amount of work.
I'm curious why you think I'm acting like it's all or nothing. What I was trying to communicate is the exact opposite: that it's not all or nothing. Maybe it's the way I articulate things; I'm genuinely interested in what makes it sound like this.
Fully agree with your og comment and I didn’t get the same read as the person above at all.
This is a bizarre time to be living in. On one hand, these tools are capable of doing more and more of the tasks any knowledge worker today handles, especially when used by a person experienced in the field.
On the other, it feels like something is about to give. All the Super Bowl ads, AI in what feels like every single piece of copy coming out these days, AI CEOs hopping from one podcast to another warning about the upcoming career apocalypse… I'm not fully buying it.
The optimistic case is that instead of a team of 10 people working on one project, you could have those 10 people using AI assistants to work on 10 independent projects.
That, of course, assumes that there are 9 other projects that are both known (or knowable) and worth doing. And in the case of Uber/Lyft drivers, there's a skillset mismatch between the "deprecated" jobs and their replacements.
Well, those Uber drivers are usually pretty quick to note that Uber is not their job, just a side hustle. It's too bad I won't know what they think by then, since we won't be interacting anymore.
> The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.
It's also a legitimate concern. We happen to be in a place where humans are needed for that "last critical 10%," or the first critical 10% of problem formulation, and so humans are still crucial to the overall system, at least for most complex tasks.
But there's no logical reason that needs to be the case. Once it's not, humans will be replaced.
The reason there is a marketing opportunity is because, to your point, there is a legitimate concern. Marketing builds and amplifies the concern to create awareness.
When the systems turn into something trivial to manage with the new tooling, humans build more complex ones or add more layers on top of the existing systems.
The logical reason is that humans are exceptionally good at operating at the edge of what the technology of the time can do. We will find entire classes of tech problems which AI can't solve on its own. You have people today with job descriptions that even 15 years ago would have been unimaginable, much less predictable.
To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.
> To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.
AGI means fully general: everything the human brain can do and more. I agree that it currently still feels far off (at least it may be far), but there is no reason to think there's some magic human ingredient that will keep us perpetually in the loop. I would say that is delusional.
We used to think there was human-specific magic in chess, in poker, in Go, in code, and in writing. All of those have fallen; the latter two only in part so far, but even that part was once thought to be the exclusive domain of humans.
I'm not sure you can call something an optimizing C compiler if it doesn't optimize or enforce C semantics (well, it compiles C but also a lot of things that aren't syntactically valid C). It seemed to generate a lot of code (wow!) that wasn't well-integrated and didn't do what it promised to, and the human didn't have the requisite expertise to understand that. I'm not a theoretical physicist but I will hold to my skepticism here, for similar reasons.
Sure, I won't argue with this, although it did manage to deliver the marketing value they were looking for. In the end, their goal was not to replace gcc but to make people talk about AI and Anthropic.
What I said in my original comment is that AI delivers when it's used by experts. In this case the person driving it was definitely not a C compiler expert; what would have happened if a real expert had been doing this?
Deliver what exactly? False hope and lies?
https://github.com/anthropics/claudes-c-compiler/issues/228
Actually, the results were far worse and way less impressive than what the media said.
The C compiler results, or the physics results this post is about?
The C compiler.
His point is going to be some copium like: since the C compiler is not as optimized as gcc, it was not impressive.
AI is indeed an amazing productivity multiplier! Sadly that multiplier is in the range [0, 1).
> for people who know what they're doing.
I worry we're not producing as many of those as we used to.
We will be producing even fewer. I fear for future graduates, hell, even for school children, who are now uncontrollably using ChatGPT for their homework. Next-level brainrot.
Right. If it hadn't been Nicholas Carlini driving Claude, with his decades of experience, there wouldn't be a Claude C compiler. It still required his expertise and knowledge to get there.
Every time I see an RL startup, a data startup, or even a startup focused on a specific vertical, I think this exact same thing about LLMs.