Comment by quesera

1 day ago

You introduced the word into the thread. I quoted you.

Unless you're operating at some notational level above the literal, yes I think you did.

Sorry, I was referring to the prompt, not the code.

  • I was referring to the prompt/prose as well.

    The median-quality code just doesn't seem like a valuable asset en route to final product, but I guess it's a matter of process at that point.

    Generative AI, as I've managed to use it, brings me to a place in the software lifecycle where I don't want to be: median-quality code that lacks the context or polish needed to be usable, or in some cases even parseable.

    I may be missing essential details, though. Smart people are getting more out of AI than I am. I'd love to see a YouTube/Twitch/etc. video of someone who knows what they're doing demoing the build of a typical TODO app or similar, from paragraphs to product.

    • Median-quality code is extraordinarily valuable. It is most of the load-bearing code people actually ship. What's almost certainly happening here is that you and I have differing definitions of "median-quality" commercial code.

      I'm pretty sure that if we triangle-tested (say) a Go project from 'jerf and Gemini 2.5 Go output for the same (substantial; say, 2,000 lines) project --- not whatever Gemini's initial spew is, but a final product where Gemini is the author of 80+% of the lines --- you would not be able to pick the human code out from the LLM code.
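      For context, a triangle test here would mean showing a reviewer three excerpts at a time, two from one author and one from the other, and asking them to pick the odd one out; if the reviewer can't beat the 1-in-3 chance baseline, the two authors are indistinguishable to them. Here's a rough Go sketch of the bookkeeping, where the snippet contents and the random "judge" are placeholders for real excerpts and a human reviewer:

        package main

        import (
            "fmt"
            "math/rand"
        )

        func main() {
            // Placeholder corpora; a real test would use function-sized
            // excerpts from the finished human- and LLM-authored projects.
            human := []string{"human snippet 1", "human snippet 2", "human snippet 3"}
            llm := []string{"llm snippet 1", "llm snippet 2", "llm snippet 3"}

            trials, correct := 1000, 0
            for i := 0; i < trials; i++ {
                // Draw two samples from one source and one from the other.
                pair, odd := human, llm
                if rand.Intn(2) == 0 {
                    pair, odd = llm, human
                }
                trio := []string{
                    pair[rand.Intn(len(pair))],
                    pair[rand.Intn(len(pair))],
                    odd[rand.Intn(len(odd))],
                }
                oddIdx := 2

                // Shuffle presentation order, tracking where the odd sample lands.
                rand.Shuffle(len(trio), func(a, b int) {
                    trio[a], trio[b] = trio[b], trio[a]
                    switch oddIdx {
                    case a:
                        oddIdx = b
                    case b:
                        oddIdx = a
                    }
                })

                // Stand-in judge: a blind guess. Swap in a real reviewer's
                // pick of the odd one out to measure their hit rate.
                if rand.Intn(3) == oddIdx {
                    correct++
                }
            }

            // A judge who can't beat chance can't tell the two authors apart.
            fmt.Printf("hit rate: %.2f (chance baseline: 0.33)\n",
                float64(correct)/float64(trials))
        }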
