Comment by tptacek

3 days ago

No, they're not. People talk about LLM-generated code the same way they talk about any code they're responsible for producing; it's not in fact the norm for any discussion about code here to include links to the code.

But if you're looking for success stories with code, they're easy to find.

https://alexgaynor.net/2025/jun/20/serialize-some-der/

> it's not in fact the norm for any discussion about code here to include links to the code.

I certainly didn't interpret "these types of posts" to mean "any discussion about code", and I highly doubt anyone else did.

The top-level comment is making a significant claim, not a casual remark about code they produced. We should expect it to be presented with substantiating artifacts.

  • I guess. I kind of side-eyed the original one-shotting claim, not because I don't believe it, but because I don't believe it matters. Serious LLM-driven code generation runs as an iterative loop (see the sketch below). I'm not sure why first-output quality matters that much; I care about the outcome, not the intermediate steps.

    So if we're looking for stories about LLMs one-shotting high-quality code, accompanied by the generated code, I'm less sure of where those examples would be!
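
    To make "iterative" concrete, here's a minimal Python sketch of the kind of loop I mean: generate, run the tests, feed the failures back. Everything here is illustrative, not anyone's actual setup; generate_code is a hypothetical stand-in for whatever model API you use, and the pytest invocation assumes a conventional test suite.

      import subprocess
      from pathlib import Path

      def generate_code(prompt: str) -> str:
          # Hypothetical stand-in for a real model API call; not any
          # particular provider's interface.
          raise NotImplementedError("wire up your model provider here")

      def run_tests(code: str) -> tuple[bool, str]:
          # Write the candidate to disk and run the suite (assumes pytest).
          Path("candidate.py").write_text(code)
          result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
          return result.returncode == 0, result.stdout + result.stderr

      def iterate(task: str, max_rounds: int = 5) -> str | None:
          # Generate, test, and fold failures back into the next prompt.
          prompt = task
          for _ in range(max_rounds):
              code = generate_code(prompt)
              passed, output = run_tests(code)
              if passed:
                  return code  # the outcome, not the first draft, is what counts
              prompt = f"{task}\n\nPrevious attempt failed:\n{output}\n\nFix it."
          return None  # out of rounds; a human takes over from here

    The point is just that first-output quality is an intermediate step in a loop like this, not the deliverable.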

I could write a blog post exactly like this with my ChatGPT history handy. That wasn't the point I was making. I am extremely skeptical of any claim that someone can one-shot quality cloud infrastructure without seeing what they produced. I'd even drop the one-shot requirement: unless the person behind the prompt knows what they're doing, pretty much every example I've seen has been terrible.

  • I mean, I agree with you that the person behind the prompt needs to know what they're doing! And I don't care about one-shotting, as I said in a sibling comment, so if that's all this is about, I yield my time. :)

    There are just other comments on this thread that take as axiomatic that LLM-generated code is bad. That's obviously not true as a rule.