Comment by TeMPOraL
1 month ago
Counterpoint: perhaps it's not about escaping all the details, just the irrelevant ones, and the need to have them figured out up front. Making the process more iterative, an exploration of the medium under the supervision or with the assistance of a domain expert, turns it into more of a journey of creation and discovery, in which you learn what you need (and learn what you need to learn) just in time.
I see no reason why this wouldn't be achievable. Having lived most of my life in the land of details, country of software development, I'm acutely aware that 90% of effort goes into giving precise answers to irrelevant questions. In almost all problems I've worked on, whether at tactical or strategic scale, there's either a single sensible family of answers, or a broad class of interchangeable ones. However, no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters". Either way, I'm forced to pick and spell out a concrete answer myself, by hand. Fortunately, LLMs are slowly starting to help with that.
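To make that concrete, here's a purely hypothetical sketch of what a "pick whatever, revisit later" construct could look like. No real language ships this; every name below (`usual`, `DECISIONS`) is invented for illustration:

```python
# Hypothetical sketch only: a construct that accepts "the usual" answer
# now and records the deferred decision so it can be revisited once the
# choice actually matters.
DECISIONS = []

def usual(topic, default):
    """Accept 'whatever' for now, but log the deferred decision."""
    DECISIONS.append((topic, default))
    return default

# Spell out only the questions that matter; defer the rest.
retry_limit = usual("HTTP retry limit", 3)
wire_format = usual("serialization format", "json")

# Later, when a choice starts to matter, the log says what to revisit.
for topic, value in DECISIONS:
    print(f"deferred: {topic} -> currently {value!r}")
```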
In my experience, the issue really is, unfortunately, that it's impossible to tell whether a particular detail is irrelevant until after you've analyzed and answered all of them.
In other words, it only looks easy in hindsight.
I think the most coveted ability of a skilled senior developer is precisely this "uncanny" ability to predict beforehand whether some particular detail is important or irrelevant. This ability can only be obtained through years of experience and hubris.
Woah woah woah, that sounds like a skill set we might have to _pay_ someone for??? Can’t we just prompt the model to do that??
Yeah, most of that intuition only comes from making those mistakes yourself and getting it wrong. At least, that was the case for me.
> no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters"
Programming languages already take lots of decisions, implicitly and explicitly, on one's behalf. But there are way more details of course, which are then handled by frameworks, libraries, etc. Surely at some point one has to take a decision? Your underlying point is about avoiding boilerplate, and LLMs definitely help with that already, to a larger extent than cookie-cutter repos. But none of them can resolve the real-world details that only emerge through rigorous understanding of the problem and exploration via user interviews, business challenges, etc.
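As one small illustration of languages and libraries deciding on one's behalf, consider Python's standard `json` module: every keyword default below is a decision the caller never had to spell out, until it starts to matter:

```python
import json

data = {"name": "Zoë", "id": 1}

# json.dumps quietly picks separators, ASCII escaping, no indentation...
print(json.dumps(data))  # {"name": "Zo\u00eb", "id": 1}

# ...and each default stays invisible until it matters, at which point
# you override it explicitly.
print(json.dumps(data, ensure_ascii=False, indent=2))
```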
But that's the hard part. You have to explore the details to determine if they need to be included or not.
You can't just know right off the bat. Doing so would contradict the premise: you cannot determine whether a detail is important without getting into the details. If you only care about a few grains of sand in a bucket, you still have to search through the whole bucket of sand to find them.
Right. But that's where a tight feedback loop comes into play. New AI developments enable that in at least two ways: offloading busywork and necessary-but-straightforward work (LLMs can already write and iterate orders of magnitude faster than people), and having a multi-domain expert on call to lean on.
The thing about important details is that what ultimately matters is getting them right eventually, not necessarily the first time around. The real cost limiting creative and engineering efforts isn't that of making a bad choice, but that of undoing it. In software development, AI makes even large-scale rewrites orders of magnitude cheaper than they ever were before, which makes many more decisions easily undoable in practice, where before that used to be prohibitively costly. I see that as one major path toward enabling this kind of iterative, detail-light development.
I don't feel like this is an accurate description. My experience is that LLMs have a very large knowledge base but that getting them to go in depth is much more difficult.
But we run into the same problem... how do you evaluate what you are not qualified to evaluate? It is a grave mistake to conflate "domain expert" with "appears to know more than me". It doesn't matter whether it's a person or a machine; it's a mistake either way. That's how a lot of con artists work, and we've all seen people in high positions who leave us wondering how in the world they got there.
Weird reasoning... because I agree, and this is the exact reason I find LLMs painful to work with. They dump code at you rather than tightening it up and making it clear and elegant. Code is simply harder to rebase or simplify when there are more lines of it. Writing lines has never been and never will be the bottleneck; the old advice still holds that if you're doing the same thing over and over again, you're doing it wrong. One of the key things that makes programming so amazing is that you can abstract out repetitive tasks, even when there is variation between them. Repetition and replication only make code harder to debug and make bad choices harder to undo.
Also, in my experience it is difficult to get LLMs to simplify, even when explicitly instructing them to, pointing them at specific functions, and giving strong hints about what exactly needs to be done. They promptly tell me how smart I am and then fail to do any of the actual abstraction. Code isn't useful when you have the same function written in 30 different places across 20 different files; that makes decisions far harder to back out of. They're good at producing a rough sketch, but it still feels reckless to let them write directly into the codebase, where they create this kind of tech debt.
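For what it's worth, the abstraction I mean is nothing exotic. A hedged sketch, with hypothetical names (`load_config` and friends aren't from any real codebase), of collapsing the "30 copies in 20 files" pattern into one parameterized helper:

```python
import json
from pathlib import Path

# Before: load_user_config, load_app_config, load_plugin_config, ...
# near-identical parse/validate/default dances copied across files.

def load_config(path, required_keys, defaults=None):
    """One helper absorbs the variation the copies differed on."""
    data = json.loads(Path(path).read_text())
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"{path}: missing keys {missing}")
    return {**(defaults or {}), **data}

# Each former duplicate becomes a one-line call, e.g.:
# user_cfg = load_config("user.json", ["name"], defaults={"theme": "dark"})
```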
Fully agree with this. Not all labor is equally worth doing.