Comment by IncreasePosts
8 days ago
Why exactly do you think people not doing that kind of work will be automated but your kind of work won't be automated?
If AI really is all that, then whatever "special" thing you are doing will be automated as well.
That's exactly what we as software engineers do: we are constantly automating ourselves out of a job. The trick is that we never actually accomplish it; there is always something left for humans to do.
We're discovering so much latent demand for software that Jevons paradox is in full effect, and we're working more than ever with AI (at least I am).
Software engineering is being automated, but building intelligent automation is just getting started. AI engineer will be the last job left, for as long as there are things to automate; it's really all the other jobs that will be automated first.
Most knowledge workers use computers today to do their work, but we don't necessarily call them computer or software engineers. I think it will be something like that, but the economy will need to adapt and grow to accommodate it.
OP compared AI to interns, and how they need to guide it and instruct it on simple things, like writing unit tests. Well, what about when AI is actually more like an ultra-talented programmer? What exactly would OP bring to the table apart from being able to ask it to solve a certain problem?
Their comment about people who don't operate like them being out of a job might be true if AI doesn't progress past the current stage, but I really don't see progress slowing down, at least in coding models, for quite some time.
So whatever relevance OP's specific methods have right now will quickly be integrated into the models themselves.
I don't disagree; aspects of that will be automated. But two things will remain: Intent and Judgement.
Building AI systems will be about determining the right thing to build and ensuring your AI system fully understands it. For example, I have a trading bot, and I spent a lot of time refining the optimization statement I give the AI. If you give it the wrong goal, or there's any ambiguity, it can go down the wrong path.
On the back end, I then judge the outcomes. As an engineer I can tell whether the work it did actually accomplished the outcomes I wanted. In the future it will be a matter of applying that judgement to every field out there; a rough sketch of both halves is below.
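To make that concrete, here's a minimal sketch of the intent/judgement split in Python. Everything in it is hypothetical: the Objective spec, the constraint values, and the judge_outcome check are illustrative stand-ins, not my actual bot.

    from dataclasses import dataclass

    @dataclass
    class Objective:
        """Intent: a goal precise enough that the AI can't wander."""
        goal: str
        max_drawdown_pct: float   # hard constraint, not a suggestion
        max_position_usd: float   # caps how far a bad path can go

    # Ambiguous: "maximize returns" invites reckless leverage.
    # Constrained: the same goal with explicit guardrails.
    objective = Objective(
        goal="Maximize 30-day risk-adjusted return (Sharpe), not raw PnL",
        max_drawdown_pct=5.0,
        max_position_usd=1_000.0,
    )

    def judge_outcome(pnl_pct: float, drawdown_pct: float) -> bool:
        """Judgement: did the run actually satisfy the intent?"""
        return drawdown_pct <= objective.max_drawdown_pct and pnl_pct > 0.0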
You're trusting AI to trade with your real money?
I mean, real algo trading shops use "AI" to do it all the time; they just don't use LLMs. While I'm not the GP, I think the idea they're trying to express is that the nuts and bolts of structuring programs are going away. The engineer of today, according to this claim (and similar to Karpathy's Software 3.0 idea), structures their work in terms of blocks of intelligence and uses those blocks to construct programs. Nothing is stopping Claude Code or another LLM coding harness from generating the scaffolding for a time-series model and then letting the author refactor the model and its hyperparameters as needed to achieve fit.
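For illustration, here's roughly the kind of scaffold such a harness might emit, assuming statsmodels; the placeholder series, the candidate ARIMA orders, and the AIC criterion are all stand-in choices the author would then refactor to achieve fit:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Placeholder data: the real series would come from the author.
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(size=200))  # a random walk

    # Scaffold: try a few candidate orders, keep the best-fitting one.
    # The candidate grid and the AIC criterion are the hyperparameters
    # the author would refactor as needed.
    best_order, best_fit = None, None
    for order in [(1, 1, 0), (2, 1, 1), (1, 1, 1)]:
        fit = ARIMA(series, order=order).fit()
        if best_fit is None or fit.aic < best_fit.aic:
            best_order, best_fit = order, fit

    print(best_order, best_fit.forecast(steps=5))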
Though I don't know of any algo trading shop that relies purely on algorithms, since market regimes change often and the alpha from any new edge gets competed away quickly.
(And personally I'm a believer in the jagged-intelligence theory of LLMs, where there are some tasks LLMs are great at and others they'll continue being idiotic at for a while, and I think there's plenty of work left for nuts-and-bolts program writers to do.)
Not a lot of money, because I haven't built enough confidence yet, but yes: it's the ultimate test of whether it can do economically useful work.
How technical do you need to be with your optimization statements and outcome checking? Isn't that moat constantly shrinking if AI is constantly getting better?
Another way of saying this is that most line engineers will be moving into management, but managing AIs instead of people.
I see variations of this non-stop these days: people who seem to be sure AI is going to automate everything right up to their own job.