Comment by hnlmorg

6 hours ago

The hard part of software development is equivalent to the hard part of engineering:

Anyone can draw a sketch of what a house should look like. But designing a house that is safe, conforms to building regulations, and wouldn't be uncomfortable to live in (because of, say, a poor choice of heat insulation for the local climate) is the stuff people train for. Not the sketching part.

It's the same for software development. All we've done is replace FORTRAN / JavaScript / whatever with a subset of a natural language. But we still need to thoroughly understand the problem and describe it to the LLM. And given the way we format these markdown prompts, we're basically still programming, albeit in a less strict syntax and with a non-deterministic "compiler".

This is why I get so miffed by comments about AI replacing programmers. That's not what's happening. Programming is just shifting to a language that looks more like Jira tickets than source code. And the orgs that think they can replace developers with AI (and I don't for one second believe many of the technology leaders think this, but some smaller orgs likely do) are heading for a very unpleasant realisation soon.

I will caveat this by saying: there are far too many naff developers out there who genuinely aren't any better than an LLM. And maybe what we need is more regulation around software development, just like there is in proper engineering professions.

> Programming is just shifting to a language that looks more like Jira tickets than source code.

Sure, but now I need to be fluent in both prompt-lang and the underlying programming language if you want me to be confident in the output (and you probably do, right?).

  • No, you have to be fluent in the domain. That is ultimately where the program is acting. You can be confident it works if it passes domain-level tests.

    You save all the time that was wasted forcing the language into the shape you intended. A lot of trivial little things ate up time, until AI came along. The big things, well, you still need to understand them.

    • > You can be confident it works if it passes domain-level tests.

      This is generally true for things you run locally on your own machine, IF your domain isn't heavy on external dependencies or data dependencies that create edge cases and an explosion of test cases. But again, it's easier to inspect and be sure of those things locally for single-player utilities.

      It's generally much less true for anything that touches the internet and deals with money and/or long-term persistent storage of other people's data. If you aren't fluent in that world, you'll end up running software built on old versions of third-party code, iterating on it with changes that have to be increasingly broad in scope, validated against a set of test cases that is almost certainly not as creative as a real attacker.

      Personally I would love to see stuff move back to local user machines vs the Google-et-al-owned online world. But I don't think "cheap freeware" was the missing ingredient that would have prevented the corporate consolidation. And so people and companies who want to play in that massively-online world (where the money is) are still going to have to know the broader technical domain of operating online services safely and securely, which touches deep into the code.

      So I, personally, don't have to be confident in one-off or utility scripts for manual tasks or ops that I write, because I can be confident in the domain of their behavior, since I'm intimately familiar with the surrounding systems. That saves me a TON of time. Time I can devote to the important-to-get-correct code. But what about the next generation? They won't be familiar with the surrounding systems, so they won't even be aware of which domains they need to know (or not know) in depth. (Maybe they'll pay us a bunch of money to help clean up a mess, which is a classic post-just-build-shit-fast successful startup story.)

    • I think the GP is correct.

      You can get some of the way by writing prompts with very little effort. But you almost always hit problems after a while. And once you do, it feels almost impossible to recover without restarting from a new context. And that can sometimes be a painful step.

      But learning to write effective prompts will get you a lot further, a lot quicker, and with less friction.

      So there’s definitely an element of learning a “prompt-lang” to effective use of LLMs.