
Comment by IanCal

16 hours ago

It doesn’t need to do all of a job to reduce total jobs in an area. Remove the programming part and you can reduce the number of people needed for the same output, and/or bring people who can’t program but can do the other parts into the fold.

> If OpenAI believed GPT could replace software engineers, why wouldn’t they build their own VS Code fork for a fraction of that cost?

Because believing you can replace some or even most engineers still leaves room to hire the best. It increases the value of the best, and that’s all assuming today’s capabilities - they could believe they have tools coming in two years that will replace many more engineers and still hire them now.

> You sit in a meeting where someone describes a vague problem, and you’re the one who figures out what they actually need. You look at a codebase and decide which parts to change and which to leave alone. You push back on a feature request because you know it’ll create technical debt that’ll haunt the team for years. You review a colleague’s PR and catch a subtle bug that would’ve broken production. You make a call on whether to ship now or wait for more testing.

These are all things that LLMs are already doing with varying degrees of success, though. They’re reviewing code, they can push back on certain approaches (I know because I had this happen with 5.1), and they absolutely can decide which parts of a codebase to change.

And as for turning vague problems into clearer features? Is that not something they’re unbelievably well suited for?

Agreed. I’m with the article on many of its points, but not its conclusion.

> “You got way more productive, so we’re letting you go” is not a sentence that makes a lot of sense.

Actually, this sentence makes perfect sense if you tweak it slightly:

> You and your teammate got way more productive, so we’re letting (just) you go

This literally happens all the time with automation. Does anyone think the number of people employed in the field of accounting would be the same or higher without the use of calculators or computers?

> And as for turning vague problems into clearer features? Is that not something they’re unbelievably well suited for?

I personally find LLMs to be fantastic for taking my thoughts to a more concrete state through robust debate.

I see AI turning many other folks’ thoughts into garbage, because it so easily heads in the wrong direction and they don’t understand how to build self-checking into their thinking.