Comment by hnlmorg

4 hours ago

The hard part of software development is equivalent to the hard part of engineering:

Anyone can draw a sketch of what a house should look like. But designing a house that is safe, conforms to building regulations, and is comfortable to live in (for example, with heat insulation suited to the local climate) is what people train for. Not the sketching part.

It's the same for software development. All we've done is replace FORTRAN / JavaScript / whatever with a subset of a natural language. But we still need to thoroughly understand the problem and describe it to the LLM. Plus, given the way we format these markdown prompts, you're basically still programming, albeit in a less strict syntax where the "compiler" is non-deterministic.

This is why I get so miffed by comments about AI replacing programmers. That's not what's happening. Programming is just shifting to a language that looks more like Jira tickets than source code. And the orgs that think they can replace developers with AI (and I don't for one second believe many of the technology leaders think this, but some smaller orgs likely do) are heading for a very unpleasant realisation soon.

I will caveat this by saying: there are far too many naff developers out there that genuinely aren't any better than an LLM. And maybe what we need is more regulation around software development, just like there is in proper engineering professions.

> Programming is just shifting to a language that looks more like Jira tickets than source code.

Sure, but now I need to be fluent in prompt-lang and the underlying programming language if you want me to be confident in the output (and you probably do, right?)

  • No, you have to be fluent in the domain. That is ultimately where the program is acting. You can be confident it works if it passes domain level tests.

    You save all the time that was wasted forcing the language into the shape you intended. A lot of trivial little things ate up time, until AI came along. The big things, well, you still need to understand them.

    • I think the GP is correct.

You can get some of the way writing prompts with very little effort. But you almost always hit problems after a while. And once you do, it feels almost impossible to recover without restarting from a fresh context, which can itself be a painful step.

But learning to write effective prompts will get you a lot further, a lot quicker, and with less friction.

So there’s definitely an element of learning a “prompt-lang” to using LLMs effectively.