Comment by gitremote

2 months ago

Software development jobs must be very diverse if even this anti-vibe-coding guy thinks AI coding definitely makes developers more productive.

In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements. I work in backend API development, which is quite different from fullstack development that also includes some backend work. If you ask PMs about backend requirements, they will dodge you, and if you ask front-end or web developers, they are waiting for you to provide them the API. The hardest part is understanding the requirements. It's not because of illiteracy. It's because software development is a lot more than coding and requires critical thinking to discover the requirements.

> Software development jobs must be very diverse if even this anti-vibe-coding guy thinks AI coding definitely makes developers more productive.

As a Professor of English who teaches programming to humanities students, the writer has had an extremely interesting and unusual academic career [1]. He sounds awesome, but I think it's fair to suggest he may not have much experience of large scale commercial software development or be particularly well placed to predict what will or will not work in that environment. (Not that he necessarily claims to, but it's implicit in strong predictions about what the "future of programming" will be.)

[1] https://stephenramsay.net/about/

  • Hard to say, but to back his claim that he has been programming since the '90s, his CV shows he was working on stuff that's clearly beyond basic undergraduate skill level since the early 2000s. I'd be willing to bet he has more years under his belt than most HN users. I mean, I'm considered old here, in my mid 30's, and this guy has been programming for most of my life. Though that doesn't explicitly imply experience, or more specifically, experience in what.

    That said, I think people really underappreciate how diverse programmers actually are. I started in physics and came over when I went to grad school. While I wouldn't expect a physicist to do super well on leetcode problems, I've seen those same people write incredible code that's optimized for HPC systems, and they're really good at tracing bottlenecks (it's a skill that translates from physics really, really well). Hell, the best programmer I've ever met got that way because he was doing his PhD in mechanical engineering. He's practically the leading expert in data streaming for HPC systems and gained this skill because he needed more performance for his other work.

    There's a lot of different types of programmers out there but I think it's too easy to think the field is narrow.

    • >I'm considered old here, in my mid 30's

      I'm 62, and I'm not old yet, you're just a kid. ;-)

      Seriously, there are some folks here who started on punch cards and/or paper tape in the 1960s.

      11 replies →

    • > I'm considered old here, in my mid 30's,

      The 30s is the first decade of life that people experience where there are adults younger than them. This inevitably leads people in their 30s to start saying that they are "old" even though they generally have decades of vigor ahead of them.

    • My first home computer was bought in 1986; before that, the only electronics at home were Game & Watch handhelds, like Manhole.

      I guess I am reaching Gandalf status then. :)

    • 38 here. If you didn't suffer Win9x's 'stability', then editing X11 config files by hand, getting mad with ALSA/Dmix, and writing new ad-hoc drivers for weird BTTV tuners by reusing old known ones for $WEIRDBRAND, you didn't live.

      2 replies →

  • That was such a strange aspect. If you will excuse my use of the tortured analogy of comparing programming to wood working, there is a lot of talk about hand tools versus power tools, but for people who aren't in a production capacity--not making cabinets for a living, not making furniture for a living--you see people choosing to exclusively use hand tools because they just enjoy it more. There isn't pressure about "you must use power tools or else you're in self-denial about their superiority." Well, at least for people who actually practice the hobby. You'll find plenty of armchair woodworkers in the comments section on YouTube. But I digress. For someone who claims to enjoy programming for the sake of programming, it was a very strange statement to make about coding.

    I very much enjoy the act of programming, but I'm also a professional software developer. Incidentally, I've almost always worked in fields where subtly wrong answers could get someone hurt or killed. I just can't imagine either giving up my joy in the former case or abdicating my responsibility to understand my code in the latter.

    And this is why the wood working analogy falls down. The scale at which damage can occur due to the decision to use power tools over hand tools is, for most practical purposes, limited to just myself. With computers, we can share our fuck ups with the whole world.

    • Another key difference is that wood itself has built-in visual transparency as to the goodness of the solution - as it is pretty easy to figure out that a cabinet is horrible (I do get that there are defects in wood joining techniques that can surface after some time due to moisture, etc. - but still, a lot of transparency out of the box). Software has no such transparency built in.

      The advantage of hand coded solutions is that the author of the code has some sense of what the code really does and so is a proxy for transparency, vibe coded solutions not so much.

      I mean, it is 2025 and customers are still the best detectors of bad software, better than all quality apparatus to date.

      1 reply →

    • so what you are saying is that for production we should use AI, and hand code for hobby, got it. Lemme log back into the vpn and set the agents on the Enterprise monorepo /jk

  • >As a Professor of English who teaches programming to humanities students

    That is the strangest thing I've heard today.

    • The world of the Digital Humanities is a lot of fun (and one I've been a part of, teaching programming to Historians and Philosophers of Science!) It uses computation to provide new types of evidence for historical or rhetorical arguments and data-driven critiques. There's an art to it as well, showing evidence for things like multiple interpretations of a text through the stochasticity of various text extraction models.

      From the author's about page:

      > I discovered digital humanities (“humanities computing,” as it was then called) while I was a graduate student at the University of Virginia in the mid-nineties. I found the whole thing very exciting, but felt that before I could get on to things like computational text analysis and other kinds of humanistic geekery, I needed to work through a set of thorny philosophical problems. Is there such a thing as “algorithmic” literary criticism? Is there a distinct, humanistic form of visualization that differs from its scientific counterpart? What does it mean to “read” a text with a machine? Computational analysis of the human record seems to imply a different conception of hermeneutics, but what is that new conception?

      https://stephenramsay.net/about/

      2 replies →

  • Exactly, I don't think ppl understand why programming languages even came about anymore. Lotsa ppl don't understand why a natural language is not suitable for programming, and by extension for prompting an LLM.

I have done strict back-end, strict front-end, full stack, QA automation, and some devops as well. I worked in an all-Linux shop where we were encouraged by great senior devs to always strive for better software all around. I think you're right, it mostly depends on your mindset and how much you expose yourself to the craft. I can tackle obscure front-end things sometimes better than back-end issues despite hating front-end but knowing enough to be dangerous. (My first job in tech really had me doing everything imaginable)

I find the LLMs boost my productivity because I've always had a sort of architectural mindset. I love looking up projects that solve specific problems and keeping them in the back of my mind. Turns out I was building myself up for instructing LLMs on how to build me software: it takes what would be several months' worth of effort and spits it out in a few hours.

Speaking of vibe coding in archaic languages, I'm using LLMs to understand old Shockwave Lingo and translate it to a more modern language, so I can rebuild a legacy game. Maybe once I spin up my blog again I'll start documenting that fun journey.

  • > Speaking of vibe coding in archaic languages

    Well, I think we can say C is archaic when most developers write in something that for one isn't C, two isn't a language itself written in C, or three isn't running on something written in C :)

    • If we take the most popular programming languages and look at what their reference (or most popular) implementations are written in, then we get:

        C++: JavaScript (V8), Java, C#
        C: Python, PHP, Lua, Ruby
        Self-hosted: Go, Rust

      Far from archaic indeed. We're still living in the C/C++ world.

      5 replies →

  • Hehe. In the "someone should make a website"™ department: using a crap ton of legacy protocols and plugins, semi-interoperable with modern ones, while offering legacy browsers loaded with legacy plugins something usable to test with, i.e.,

    - SSL 2.0-TLS 1.1, HTTP/0.9-HTTP/1.1, ftp, WAIS, gopher, finger, telnet, rwho, TinyFugue MUD, UUCP email, SHOUTcast streaming some public domain radio whatever

    - <blink>, <marquee>, <object>, XHTML, SGML

    - Java <applet>, Java Web Start

    - MSJVM/J++, ActiveX, Silverlight

    - Flash, Shockwave (of course), Adobe Air

    - (Cosmo) VRML

    - Joke ActiveX control or toolbar that turns a Win 9x/NT-XP box into a "real" ProgressBar95. ;)

    (Gov't mandated PSA: Run vintage {good,bad}ness with care.)

    • To be fair, we have Flash emulators that run in modern browsers, and a Shockwave one as well, though it seems to be slowing down a bit in traction. Man, VRML brought me back. Don't forget VBScript!

The thing is that some imagined AI that can reliably produce reliable software will also likely be smart enough to come up with the requirements on its own. If vibe coding is that capable, then even vibe coding itself is redundant. In other words, vibe coding cannot possibly be "the future", because the moment vibe coding can do all that, vibe coding doesn't need to exist.

The converse is that if vibe coding is the future, that means we assume there are things the AI cannot do well (such as come up with requirements), at which point it's also likely it cannot actually vibe code that well.

The general problem is that once we start talking about imagined AI capabilities, both the capabilities and the constraints become arbitrary. If we imagine an AI that does X but not Y, we could just as easily imagine an AI that does both X and Y.

  • This is the most coherent comment in this thread. People who believe in vibe coding but not in generalizing it to “engineering”... brother the LLMs speak English. They can even hold conversations with your uncle.

  • My bet is that it will be good enough to devise the requirements.

    They already can brainstorm new features and make roadmaps. If you give them more context about the business strategy/goals then they will make better guesses. If you give them more details about the user personas / feedback / etc they will prioritize better.

    We're still just working our way up the ladder of systematizing that context, building better abstractions, workflows, etc.

    If you were to start a new company with an AI assistant and feed it every piece of information (which it structures, summarizes, synthesizes, etc. in a systematic way), even with finite context it's going to be damn good. I mean, just imagine a system that can continuously read and structure all the data from regular news, market reports, competitor press releases, public user forums, sales call transcripts, etc. It's the dream of "big data".

    • If it gets to that point, why is the customer even talking to a software company? Just have the AI build whatever. And if an AI assistant can synthesize every piece of business information, why is there a need for a new company? The end user can just ask it to do whatever.

      1 reply →

  • I agree with the first part, which is basically that 'being able to do a software engineer's full job' is ASI/AGI-complete.

    But I think it is certainly possible that we reach a point/plateau where everything is just 'English -> code' compilation, but where that 'vibe coding' compilation step is really, really good.

    • It's possible, but I don't see any reason to assume that it's more likely that machines will be able to code as well as working programmers yet not be able to come up with requirements or even ideas as well as working PMs. In fact, why not the opposite? I think that currently LLMs are better at writing general prose, offering advice, etc., than they are at writing code. They are better at knowing what people generally want than they are at solving complex logic puzzles that require many deduction steps. Once we're reduced to imagining what AI can and cannot do, we can imagine pretty much any capability or restriction we like. We can imagine something is possible, and we can just as well choose to imagine it's not possible. We're now in the realm of, literally, science fiction.

      2 replies →

  • Following similar thinking, there's no world in which AI becomes exactly capable of replacing all software developers and then stops there, miraculously saving the jobs of everyone else next to and above them in the corporate hierarchy. There may be a human, C-suite driven cost-cutting effort to pause progress there for some brief time, but if AI can do all dev work, there's no reason it can't do all office work to replace every human in front of a keyboard. Either we're all similarly affected, or else AI still isn't good enough, in which case fleets of programmers are still needed, and among those, the presumed "helpfulness" of AI will vary wildly. Not unlike what we see already.

    • > if AI can do all dev work, there's no reason it can't do all office work to replace every human in front of a keyboard

      There are plenty of reasons.

      Radiologists aren’t being replaced by AI because of liability. Same for e.g. civil engineers. Coders don’t have liability for shipping shit code. That makes switching to an AI that’s equally blameless easier.

      Also, data: the web is first and foremost a lot of code. AI is getting good at coding first for good reason.

      Finally, as OP says, the hard work in engineering is actually scoping requirements and then executing and iterating on that. Some of that is technical know-how. A lot is also political and social skills. Again, customers are okay with a vibe-coded website in a way most people are not with even support chatbots.

      4 replies →

  • What do you mean "come up with the requirements"? Like if self-driving cars got so good that they didn't just drive you somewhere but decided where you should go?

    • No, I mean that instead of vibe coding - i.e. guiding the AI through features - you'll just tell it what you want in broad strokes, e.g. "create a tax filing system that's convenient enough for the average person to use", or, "I like the following games ... Build a game involving spaceships that I'll enjoy", and it will figure out the rest.

Yup. I would never be able to give my Jira tickets to an LLM because they're too damn vague or incomplete. Getting the requirements first needs 4 rounds of lobbying with all stakeholders.

  • We had a client who'd create incredibly detailed Jira tickets. Their lead developer (also their only developer) would write exactly how he'd want us to implement a given feature, and what the expected output would be.

    The guy is also a complete tool. I'd point out that what he described wasn't actually what they needed, and that their functionality was ... strange and didn't actually do anything useful. We'd be told to just do as we were being told, seeing as they were the ones paying the bills. Sometimes we'd read between the lines, and just deliver what was actually needed; then we'd be told to just do as we were told next time, and they'd then use the code we wrote anyway. At some point we got tired of the complaining and just did exactly as the tasks described, complete with tests that showed that everything worked as specified. Then we were told that our deliveries didn't work, because that wasn't what they'd asked for, but they couldn't tell us where we misunderstood the Jira task. Plus the tests showed that the code functioned as specified.

    Even if the Jira tasks are in a state where it seems like you could feed them directly to an LLM, there's no context (or incorrect context) and how is a chatbot to know that the author of the task is a moron?

    • Every time I've received overly detailed JIRA tickets like this it's always been significantly more of a headache than the vague ones from product people. You end up with someone with enough tech knowledge to have an opinion, but separated enough from the work that their opinions don't quite work.

      1 reply →

    • > how is a chatbot to know that the author of the task is a moron?

      Does it matter?

      The chatbot could deliver exactly what was asked for (even if it wasn't what was needed) without any angst or interpersonal issues.

      Don't get me wrong. I feel you. I've been there, done that.

      OTOH, maybe we should leave the morons to their shiny new toys and let them get on with specifying enough rope to hang themselves from the tallest available structure.

  • A significant part of my LLM workflow involves having the LLM write and update tickets for me.

    It can make a vague ticket precise and that can be an easy platform to have discussions with stakeholders.

    • I like this use of LLM because I assume both the developer and ticket owner will review the text and agree to its contents. The LLM could help ensure the ticket is thorough and its meaning is understood by all parties. One downside is verbosity, but the humans in the loop can edit mercilessly. Without human review, these tickets would have all the downsides of vibe coding.

      Thank you for sharing this workflow. I have low tolerance for LLM written text, but this seems like a really good use case.

    • A significant part of my workflow is getting a ticket that is ill-defined or confused and rewriting it so that it is something I can do or not do.

      From time to time I have talked over a ticket with an LLM and gotten back what I think is a useful analysis of the problem and put it into the text or comments and I find my peeps tend to think these are TLDR.

      1 reply →

  • Who says an LLM can't be taught or given a system prompt that enables it to do this?

    Agentic AI can now do 20 rounds of lobbying with all stakeholders as long as it's over something like Slack.

  • Claude Code et al. ask clarifying questions in plan mode before implementing. This will eventually extend to Jira comments.

    • You think the business line stakeholder is going to patiently hang out in JIRA, engaging with an overly cheerful robot that keeps "missing the point" and being "intentionally obtuse" with its "irrelevant questions"?

      This is how most non-technical stakeholders feel when you probe for consistent, thorough requirements and a key professional skill for many more senior developers and consultants is in mastering the soft skills that keep them attentive and sufficiently helpful. Those skills are not generic sycophancy, but involve personal attunement to the stakeholder, patience (exercising and engendering), and cycling the right balance between persistence and de-escalation.

      Or do you just mean there will be some PM who acts as a proxy for the stakeholder on the ticket, but still needs to get them onto the phone and into meetings so the answers can be secured?

      Because in the real world, the former is outlandish and the latter doesn't gain much.

      1 reply →

I write a library which is used by customers to implement integrations with our platform. The #1 thing I think about is not

> How do I express this code in Typescript?

it's

> What is the best way to express this idea in a way that won't confuse or anger our users? Where in the library should I put this new idea? Upstream of X? Downstream of Y? How do I make it flexible so they can choose how to integrate this? Or maybe I don't want to make it flexible - maybe I want to force them to use this new format?

> Plus making sure that whatever changes I make are non-breaking, which means that if I update some function with new parameters, they need to be made optional. So now I need to remember, downstream, that this particular argument may or may not be `undefined`, because I don't want to break implementations from customers who just upgraded to the most recent minor or patch version

The majority of the problems I solve are philosophical, not linguistic
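
To make the non-breaking-change concern above concrete, here is a minimal TypeScript sketch. The names (`createWebhook`, `Webhook`, `RetryPolicy`) are made up for illustration, not from any actual library:

    // Hypothetical public helper, extended without breaking existing callers.
    interface RetryPolicy {
      maxAttempts: number;
      backoffMs: number;
    }

    class Webhook {
      constructor(readonly url: string, readonly retry: RetryPolicy) {}
    }

    // v1 signature was createWebhook(url: string). v1.1 adds retry support,
    // but only as an optional parameter so every existing call site keeps
    // compiling and behaving the same after a minor/patch upgrade.
    function createWebhook(url: string, retry?: RetryPolicy): Webhook {
      // Downstream code must remember `retry` may be undefined for callers
      // who never pass it.
      const policy = retry ?? { maxAttempts: 1, backoffMs: 0 };
      return new Webhook(url, policy);
    }

    // Both of these compile: old integrations are untouched, new ones opt in.
    const legacy = createWebhook("https://example.com/hook");
    const tuned = createWebhook("https://example.com/hook", { maxAttempts: 5, backoffMs: 250 });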

> very few people can correctly articulate requirements

The observation from Lean is that the faster you can build a prototype, the faster you can validate the real/unspoken/unclear requirements.

This applies for backends too. A lot of the “enterprise-y” patterns like BFFs, hexagonal, and so on, will make it really easy to compose new APIs from your building blocks. We don’t do this now because it’s too expensive to write all the boilerplate involved. But one BFF microservice per customer would be totally feasible for a sales engineer to vibe code, in the right architecture.
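
For what it's worth, a rough sketch of what such a per-customer BFF could look like, assuming a small Node/Express service; the upstream URLs and field names are placeholders:

    // Per-customer BFF (backend-for-frontend): composes generic internal APIs
    // into one endpoint shaped exactly for one customer's UI.
    import express from "express";

    const app = express();

    app.get("/acme/dashboard", async (_req, res) => {
      // Fan out to the generic building-block services...
      const [orders, invoices] = await Promise.all([
        fetch("http://orders.internal/api/orders?customer=acme").then(r => r.json()),
        fetch("http://billing.internal/api/invoices?customer=acme").then(r => r.json()),
      ]);

      // ...then reshape the result the way this one customer's screens want it.
      res.json({
        openOrders: orders.filter((o: any) => o.status === "open"),
        unpaidTotal: invoices
          .filter((i: any) => !i.paid)
          .reduce((sum: number, i: any) => sum + i.amount, 0),
      });
    });

    app.listen(3000);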

To be honest I've never worked in an environment that seemed too complex. On my side my primary blocker is writing code. I have an unending list of features, protocols, experiments, etc. to implement, and so far the main limit was the time necessary to actually write the damn code.

  • That sounds like papier mache more than bridge building, forever pasting more code on as ideas and time permit without the foresight to engineer or architect towards some cohesive long-term vision.

    Most software products built that way seem to move fast at first but become monstrous abominations over time. If those are the only places you keep finding yourself in, be careful!

    • There are a wide number of small problems for which we do not need bridges.

      As a stupid example, I hate the functionality that YouTube has to maintain playlists. However, I don't have the time to build something by hand. It turns out that the general case is hard, but the "for me" case is vibe codable. (Yes, I could code it myself. No, I'm not going to spend the time to do so.)

      Or, using the Jira API to extract the statistics I need instead of spending a Thursday night away from the family or pushing out other work.

      Or, any number of tools that are within my capabilities but not within my time budget. And there's more potential software that fits this bill than software that needs to be bridge-stable.
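
      On the Jira example above, this is the kind of throwaway stats script I mean - a sketch assuming Jira Cloud's /rest/api/2/search endpoint, with placeholder domain, project key, and credentials:

          // Count a project's issues per status over the last 30 days.
          // Auth is the usual email:api_token Basic scheme for Jira Cloud.
          const JIRA_BASE = "https://example.atlassian.net";
          const AUTH = Buffer.from("me@example.com:API_TOKEN").toString("base64");

          async function issuesPerStatus(projectKey: string): Promise<Record<string, number>> {
            const jql = encodeURIComponent(`project = ${projectKey} AND updated >= -30d`);
            const resp = await fetch(
              `${JIRA_BASE}/rest/api/2/search?jql=${jql}&fields=status&maxResults=100`,
              { headers: { Authorization: `Basic ${AUTH}`, Accept: "application/json" } },
            );
            const data = await resp.json();

            // Tally by status name, e.g. { "To Do": 4, "In Progress": 7, "Done": 23 }.
            const counts: Record<string, number> = {};
            for (const issue of data.issues ?? []) {
              const name = issue.fields.status.name;
              counts[name] = (counts[name] ?? 0) + 1;
            }
            return counts;
          }

          issuesPerStatus("PROJ").then(console.log);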

      2 replies →

    • The vision is "being compatible with protocols used in my field". There are hundreds upon hundreds of those. Example: this app supports more than 700 protocols, hardware, etc. (https://bitfocus.io/connections) and still it's missing an AWFUL LOT and only handles fairly basic cases in general. There's just no way around writing the code for each custom bespoke protocol for whatever $APPLIANCE people are going to bring and expect to work. Even if each protocol fits neatly in a single self-contained class or two.

  • I don't want to imply this is your case, because of course I've no idea how you work. But I've seen way too often that the reason for so many separate features is:

    A) as stated by the parent comment, the ones doing req. mngmt. are doing a poor job of abstracting the requirements, and what could be done as one feature suddenly turns into 25.

    B) in a similar manner as A, all solutions imply writing more and more code, and no one ever refactors and abstracts parts away.

    • My guess would be that the long list is maybe not self-contained features (although it still can be; I know I have more feature ideas than I can deliver in the next couple of years myself), but behaviors or requirements of one or a handful of product feature areas.

      When you start getting down into the weeds, there can be tons and tons of little details around state maintenance, accessibility, edge cases, failure modes, alternate operation modes etc.

      That all combines to make lots of code that is highly interconnected, so you need to write even more code to test it. Sometimes much more than even the target implementation's code.

  • Man I wish this was my job. I savor the days when I actually don’t have to do requirements gathering and can just code.

> the bigger bottleneck to productivity is that very few people can correctly articulate requirements.

One could argue that "vibe coding" forces you (eventually) to think in terms of requirements. There's a range of approaches, from "nitpick over every line written by AI" to "yolo this entire thing", but one thing they have in common is they all accelerate failure if the specs are not there. You very quickly find out you don't know where you're going.

I see this in my work as well, the biggest bottleneck is squeezing coherent, well-defined requirements out of PMs. It's easy to get a vision board, endless stacks of slides about priorities and direction, even great big nests of AWS / Azure thingnames masquerading as architecture diagrams. But actual "this is the functionality we want to implement and here are the key characteristics of it" detail? Absolutely scarce.

Unfortunately a lot of it is also because of illiteracy.

Lots of people hide the fact that they struggle with reading and a lot of people hide or try to hide the fact they don’t understand something.

I really wonder if we're going to see a reversion to old school project management, with a PMBOK's worth of detailed project documents for every major initiative instead of the modern Agile/Scrum/Kanban approach that seems to work better for human devs. If you can get everyone to agree on the minutia of, for example, the Stakeholder Management Plan, up front then the LLM actually has a chance of developing a decent program that has the features everyone wants.

Then again, if humans could agree on all the project minutia at the outset, we never would have developed the other systems.

I don’t mind the coding, it’s the requirements gathering and status meetings I want AI to automate away. Those are the parts I don’t like and where we’d see the biggest productivity gains. They are also the hardest to solve for, because so much of it is subjective. It also often involves decisions from leadership which can come with a lot of personal bias and occasionally some ego.

  • This is like the reverse centaur form of coding. The machine tells you what to make, and the human types the code to do the thing.

    • Well, when put like that it sounds pretty bad too.

      I was thinking more that the human would tell the machine what to make. The machine would help flesh out the idea into actual requirements, and make any decisions the humans are too afraid or indecisive to make. Then the coding can start.

Is there anyone doing dev work that operates in an environment where people can clearly articulate what they want? I've never worked in a place like that in 20 years doing software.

This means your difficulty is not programming per se, but that you are working in a very suboptimal industry / company / system. With all due respect, you use programming at work, but true programming is the act of creating a system that you or your team designed and want to bring to life. Confusing the reality of writing code for a living in some company with what Programming with a capital P is produces a lot of misunderstanding.

I don't think the author would disagree with you. As you point out, coding is just one part of software development. I understand his point to be that the coding portion of the job is going to be very different going forward. A skilled developer is still going to need to understand frameworks and tradeoffs so that they can turn requirements into a potential solution. It's just that they might not be coding up the implementation.

The only class I've ever failed was a c++ class where the instructor was so terrible at explaining the tasks that I literally could not figure out what he wanted.

I had to retake it with the same instructor but by some luck I was able to take it online, where I would spend the majority of the time trying to decipher what he was asking me to do.

Ultimately I found that the actual ask was being given as a 3-second aside in a 50-minute lecture. Once I figured out his quirk I was able to isolate the ask and code it up, and ended with an A+ in the class on the second take.

I would like to say that I learned a lot about programming from that teacher, but what I actually learned is what you're saying.

Smart, educated, capable people are broken when it comes to clearly communicating their needs to other people just slightly outside of their domain. If you can learn the skill of figuring out what the hell they're asking for and delivering that, that one skill will be more valuable to you in your career than competency itself.

> In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements. [...] software development is a lot more than coding and requires critical thinking to discover the requirements.

Very well said. More often than not, the job isn't to translate the product requirements into compiling/correctly executing computer code, but rather to reveal the hidden contradictions in a seemingly straightforward natural-language feature specification.

Once these are ironed out, the translation into code quite often does become a somewhat mechanical exercise, at least in my line of work.

We're basically the lawyers the person finding the magic lamp should have consulted with before opening their mouth while facing the genie ;)

AI coding allows me to simulate that system, cross-reference it with what the document says wrt what the customer wants, and figure out holes in the spec. Having to code something to find the hole in the definition, the spec, the problem, the anything, was a necessary step in building a sound working system.

That is no longer the case.

Yeah, the hardest part is understanding the requirements. But it then still takes hours and hours and hours to actually build the damn thing.

Except that now it still takes me the same time to understand the requirements ... and then the coding takes 1/2 or 1/3 of the time. The coding also always takes 1/3 of the effort so I leave my job less burned out.

Context: web app development agency.

I really don't understand this "if it does not replace me 100% it's not making me more productive" mentality. Yeah, it's not a perfect replacement for a senior developer ... but it is like putting the senior developer on a bike and pretending that it's not making them go any faster because they are still using their legs.

I constantly run into issues where features are planned and broken down outside-in, and it always makes perfect sense if you consider it in terms of the pure user interface and behaviour. It completely breaks down when you consider that the API, or the backend, is a cross-cutting concern across many of those tidy-looking tasks and cannot map to them 1:1 without creating an absolute mess.

Trying to insert myself, or the right backend people, into the process, is more challenging now than it used to be, and a bad API can make or break the user experience as the UI gets tangled in the web of spaghetti.

It hobbles the effectiveness of whatever you could get an LLM to do because you’re already starting on the backfoot, requirements-wise.

I like my requirements articulated so clearly and unambiguously that an extremely dumb electronic logic machine can follow every aspect of the requirements and implement them "perfectly" (limited only by the physical reliability of the machine).

This is one reason I think spec driven development is never really going to work the way people claim it should. It's MUCH harder to write a truly correct, comprehensive, and useful spec than the code in many cases.

> very few people can correctly articulate requirements

This is the new programming. Programming and requirements are both a form of semantics. One conveys meaning to a computer at a lower level, the other conveys it to a human at a higher level. Well now we need to convey it at a higher level to an LLM so it can take care of the lower-level translation.

I wonder if the LLM will eventually skip the programming part and just start moving bits around in response to requirements?

  • My solution as a consultant was to build some artifact that we could use as a starting point. Otherwise, you're sitting around spinning your wheels and billing big $ and the pressure is mounting. Building something at least allows you to demonstrate you are working on their behalf with the promise that it will be refined or completely changed as needed. It's very hard when you don't get people who can send down requirements, but that was like 100% of the places I worked. I very seldom ran into people who could articulate what they needed until I stepped up, showed them something they could sort of stand on, and then go from there.

    Mythical Man Month had it all--build one to throw away.

    • > I very seldom ran into people who could articulate what they needed

      The people with the needs and ideas are often so divorced from the "how" that they don't even bother trying to nail down the details. I think in their mind they are delegating that to the specialists.

      This question of who writes the requirements is so ubiquitous you would think we'd have better solutions for it. I know some people solve it with processes like BDD but personally I think we'd be better off if we just had clearer role definitions.

      For example, in a waterfall project the requirements usually land in the lap of the Business Analyst. Well when you look at Business Analyst roles you see they are expected to do a lot more than documenting requirements, so it's viewed as acceptable when they are somewhat bad at it. They also spend most of their time with the business so they are unaware of the limitations of the team who is expected to implement the changes.

      For another example look at Scrum. It talks a lot about good requirements in the form of user stories, but it stops short of assigning this responsibility to any one of the formal roles, presumably making it a team activity or expecting it to be organic.

      When we want someone to write code we hire a programmer, and writing code is what they are expected to do. Where is the role that is strictly requirements and nothing else? Considering how often I hear complaints about bad requirements, it seems overdue that we establish one.

  • We have a machine that turns requirements into code. It's called a compiler. What happened to programming after the invention of the compiler?

>In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements.

Agreed.

In addition, on the other side of the pipeline, code reviews are another bottleneck. We could have more MRs in review thanks to AI, but we can't really move at the speed of an LLM's output unless we blindly trust it (or trust another AI to do the reviews, at which point what are we doing here at all...)

The solo projects I do are 10x, the team projects I do maybe 2-3x in productivity. I think in big companies it's much much less.

Highest gains are def in full stack frameworks (like nextjs), with a database ORM, and building large features in one go, not having to go back & forth with stakeholders or colleagues.

This feels like addressing a point TFA did not make. TFA talks mostly about vibe-coding speeding up coding, whereas your comment is about software development as a whole. As you point out, coding is just one aspect of engineering and we must be clear about what "productivity" we are talking about.

Sure, there are the overhypers who talk about software engineers getting entirely replaced, but I get the sense those are not people who've ever done software development in their lives. And I have not seen any credible person claiming that engineering as whole can be done by AI.

On the other hand, the most grounded comments about AI-assisted programming everywhere are about the code, and maybe some architecture and design aspects. I personally, along with many other commenters here and actual large-scale studies, have found that AI does significantly boost coding productivity.

So yes, actual software engineering is much more than coding. But note that even if coding is, say, only 25% of engineering (there are actually studies about this), putting a significant dent in that is still a huge boost to overall productivity.

Convince your PMs to use an LLM to help "breadboard" their requirements. It's a really good use case. They can ask the dumb questions they are afraid to ask, and an LLM will do a decent job of parsing their ideas, asking questions, and putting together a halfway decent set of requirements.

  • PMs wouldn't be able to ask the right questions. They have zero experience with developer experience (DevEx) and they only have experience with user experience (UX).

    • You can hope that an LLM might have some instructions related to DevEx in its prompt, at least. There's no way to completely fix stupid, any more than you can convince a naive vibecoder that just vibing a new Linux-compatible kernel written entirely in Zig is a feasible project.

Also that requirements engineering in general isn't being done correctly.

I'm the last guy to be enthused about any "ritualistic" seeming businessy processes. Just let me code...

However, some things do need actual, well-defined, adhered-to processes, where all parties are aware of and agree with the protocol.

Sounds like you work with inexperienced PMs who are not doing their job. Did you try having a serious conversation about this pattern with them? I'm pretty sure some communication would go a long way towards getting you into a better collaboration groove.

  • I've been doing API development for over ten years and worked at different companies. Most PMs are not technical, and it's the development team's job to figure out the technical specifications for the APIs we build. If you press the PMs, they will ask the engineering/development manager for the written technical requirements, and if the manager is not technical, they will assign it to the developers/engineers. Technical requirements for an API are really a system design question.

    • The technical design is definitely the job of the technical team for the most part, but the business requirements should be squarely on the PM. The list of use cases, how the API feels, the performance, etc.: all of that the business owner should be able to describe to you, to ensure it does the job it needs and is fit for the market.

"the bigger bottleneck to productivity is that very few people can correctly articulate requirements."

I've found the same thing. I just published an AI AUP for my company, and most of it is teaching folks HOW to use AI.

and in reality, all the separate roles should be deprecated

we vibe requirements to our ticket tracker with an api key, vibe code ticket effort, and manage the state of the tickets via our commits and pull requests and deployments

just teach the guy the product manager is shielding you from not to micromanage and all the frictions are gone

in this same year I've worked at an organization that didn't allow AI use at all, and by Q2, Co-Pilot was somehow solving their data security concerns (gigglesnort)

in a different organization none of those restrictions are there and the productivity boost is an order of magnitude greater

> In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements.

Totally agree. I’ve tried to explain this in many places: AI coding (and creative tools in general) will ultimately remain tools. Only people who can clearly and thoughtfully articulate requirements will be able to fully leverage them.

Another hot take: the “one-person company.” Headcount isn’t the key variable. With AI, the real constraint is how well you can understand a problem and clearly define a solution.

- If a problem can be clearly defined and fully understood by one person, then one person is enough to solve it.

- If a problem is more complex and requires two fundamentally different areas of expertise, then it will likely take two capable people to solve it—no more, no less.

This is like saying the typewriter won’t make a newspaper company more productive because the biggest bottlenecks are the research and review processes rather than the typing. It’s absolutely true, but it was still worth it to go up to typewriters, and the fact that people were spending less effort and time on the handwriting part helps all aspects of energy levels etc across their job.

AI is making coding so cheap, you can now program a few versions of the API and choose what works better.

If AI doesn't make you more productive you're using it wrong, end of story.

Even if you don't let it author or write a single line of code, from collecting information, inspecting code, reviewing requirements, reviewing PRs, finding bugs, hell, even researching information online, there are so many things it does well and fast that if you're not leveraging it, you're either in denial or have AI skill issues, period.

  • Not to refute your point but I’ve met overly confident people with “AI skills” who are “extremely productive” with it, while producing garbage without knowing, or not being able to tell the difference.

    • I've not really seen this outside of extremely junior engineers. On the flip side, I've seen plenty of seniors who can't manage to understand how to interact with AI tools come away thinking they are useless, when just watching them for a bit makes it really clear the issue is the engineer.

    • Garbage to whom? Are we talking about something that the user shudders to think about, or something more like a product the user loves, but behind the scenes the worst code ever created?

      1 reply →

  • It sounds like you're the one in denial? AI makes some things faster, like working in a language I don't know very well. It makes other things slower, like working in a language I already know very well. In both cases, writing code is a small percentage of the total development effort.

    • No I'm not, I'm just sick of these edgy takes where AI does not improve productivity when it obviously does.

      Even if you limit your AI experience to finding information online through deep research it's such a time saver and productivity booster that makes a lot of difference.

      The list of things it can do for you is massive, even if you don't have it write a single line of code.

      Yet the counter-argument is like "bu..but..my colleague is pushing slop and it's not good at writing code for me". Come on, then use it for the things it's good at, not the things where you don't find it satisfactory.

      16 replies →

  • My company mandates AI usage and logs AI usage metrics as input to performance evaluation, so I use it every day. It's a Copilot subscription, though.