
Comment by vidarh

7 months ago

The majority of my code over the last few months has been written by LLMs, including systems I rely on daily for my business.

Maybe consider it's not all on the AI tools if they work for others but not for you.

Sure man, maybe also share that bit with your clients and see how excited they'll be to learn that their vital code or infrastructure may be designed by a stochastic system (*reliable a solid number of times).

  • My clients are perfectly happy about that, because they care about the results, not FUD. They know the quality of what I deliver from first-hand experience.

    Human-written code also needs review, and is also frequently broken until subjected to testing, iteration, and review, so our processes are built around proper QA and proper reviews, and then the original source doesn't matter much.

    It's a lot easier, however, to force an LLM into a straitjacket of enforced linters, enforced test-suite runs, enforced sanity checks, and enforced processes, at a level human developers would quit over. As we build out the harness around the AI code generation, we're seeing the quality of that code increase a lot faster than the quality delivered by human developers. It still doesn't beat a good senior developer, but it does often deliver code for tasks I could never hand to my juniors.

    (In fact, the harness I'm forcing my AI-generated code through was itself written 95%+ by an LLM, iteratively, with its own code forced through the verification steps on every new iteration after the first 100 lines of code or so.)

    • So to summarise: the quality of the code you generate with LLMs is increasing a lot faster, but somehow never reaches senior level. How is that "a lot faster" if it never reaches that (fairly modest) goal? And that's not the end of it: your mid-junior LLMs are also enforcing quality gates and harnesses on the rest of your LLM mid-juniors. If only there were some proof of that, like a project demo, so it could at least look believable...

      5 replies →

> written by LLMs

Writing code is often easier than reading it. I suspect coders will soon face what translators face now: fixing machine output for a half to a third of the pay.