Comment by tptacek

16 hours ago

Sorry, I was referring to the prompt, not the code.

I was referring to the prompt/prose as well.

The median-quality code just doesn't seem like a valuable asset en route to a final product, but I guess it's a matter of process at that point.

Generative AI, as I've managed to use it, brings me to a place in the software lifecycle where I don't want to be: median-quality code that lacks the context or polish needed to be usable, or in some cases even parseable.

I may be missing essential details, though. Smart people are getting more out of AI than I am. I'd love to see a YouTube/Twitch/etc. video of someone who knows what they're doing demoing the build of a typical TODO app or similar, from paragraphs to product.

  • Median-quality code is extraordinarily valuable. It is most of the load-bearing code people actually ship. What's almost certainly happening here is that you and I have differing definitions of "median-quality" commercial code.

    I'm pretty sure that if we triangle-tested (say) 'jerf's Go code against Gemini 2.5's Go output for the same substantial project (say, 2,000 lines) --- not whatever Gemini's initial spew is, but a final product where Gemini is the author of 80+% of the lines --- you would not be able to pick the human code out from the LLM code.

    • This is probably true. I'm using your "median-quality" label, but that would be a generous description of the code I'm getting from LLMs.

      I'm getting median-quality junior code. If you're getting median-quality commercial code, then you are speaking better LLMish than I.
