Comment by jader201
2 months ago
I feel like the comments/articles that continue to point out how LLMs cannot solve complex problems are missing a few important points:
1. LLMs are only getting better from here. With each release, we continue to see improvements in their capabilities. Strong competition is driving this innovation and will probably not stop anytime soon. Much of the world is focused on this right now, and therefore much of the world’s investment is being poured into solving this problem.
2. They’re using the wrong models for the wrong job. Some models are better than others at some tasks. And this gap is only shrinking (see point 1).
3. Even if LLMs can’t solve complex problems (and I believe they can/will, see points 1 and 2), much of our job is refactoring, writing tests, and hand-coding simpler tasks, which LLMs are undeniably good at.
4. It’s natural to deny that LLMs can eventually replace much/all of what we do. Our careers depend on LLMs not being able to solve complex problems so that we don’t risk/fear losing our careers. Not to mention the broader impact on our lives if AGI becomes a reality.
I’ve been doing this a while, and I’ve never seen a productivity boost like the one LLMs bring. Yes, I’ve seen them make a mess of things and get things wrong. But see points 1-3.
> Our careers depend on LLMs not being able to solve complex problems so that we don’t risk/fear losing our careers
I think both this sentiment and the article are on the same wrong track, which is to see programming as solving well-defined problems. The way I see it, the job is mostly about taking ill-defined needs and turning them into well-defined problems. The rest is just writing code. From this perspective, whether LLM prompting can replace writing code is only marginally relevant. It’s automating the easy part.
Sounds like philosophers will be the new programmers if playing around with language and definitions is all that will be left.
Sure, if writing code is applied math, deciding what needs to be written is applied philosophy. I don’t think we give ourselves enough credit for applied philosophy.
Also 5: "But LLMs produce a bunch of code I need to read and review".
Yes, but so do your coworkers. Do you complain about every PR you need to read? Are they all compressed diamonds of pure genius, not a single missed character or unoptimised function? Not a bad or suboptimal decision in sight?
If so, shoot me a mail, I want to work where you're working =)
My coworkers learn, and an important part of my job is teaching them. LLM-based tools don't.
A circular saw doesn't learn either. It's a tool, just like an LLM.
The LLM isn't replacing your coworkers, it's a tool they can (and IMO should) learn to use, just like an IDE or a debugger.
My coworkers do, sure. But I don’t have to completely reread what I wrote to grok it. That’s the issue.
You either prompt and read the code it wrote to understand it before making a PR, or you prompt and drop the PR for your team. The latter is disrespectful.
This has been my biggest hurdle so far. Sure, the models do great at writing code and find things faster than I would. But time-wise, I still have to read what they wrote, in depth.
ETA: I also implicitly trust my coworkers. This trust has been built over years. I don’t trust LLM generated code the same way.
Prompt & Drop is just plain stupid and warrants me walking over to said coworker's desk to smack them on the back of the head. =)
As for "reading in depth", it all depends on what you're doing, for most stuff I can just see if it looks good or not, I don't need to check out the PR and run it on my machine with a step-debugger to see what's going on.
And stuff should have unit tests anyway. If the tests pass and test coverage is sufficient, do you really need to go through the code with a fine-toothed comb? If it quacks like a duck, walks like a duck, and looks like a duck, isn't it duck enough? Do you need to dissect it and see if it's duck all the way through?
At some point you just need to trust the tools you're using.
lol