Comment by quesera
17 hours ago
> Currently I usually start by laying out as much detail about the problem as I can
I know you are speaking from experience, and I know that I must be one of the people who hasn't gotten far enough along the curve to see the magic.
But your description of how you do it does not encourage me.
It sounds like the trade-off is that you spend more time describing the problem and iterating through multiple waves of wrong or incomplete solutions than you would spend solving the problem directly.
I can understand why many people would prefer that, or be more successful with that approach.
But I don't understand what the magic is. Is there a scaling factor where once you learn to manage your AI team in the language that they understand best, they can generate more code than you could alone?
My experience so far is net negative. Like the first couple weeks of a new junior hire. A few sparks of solid work, but mostly repeating or backing up, and trying not to be too annoyed at simpering and obvious falsehoods ("I'm deeply sorry, I'm really having trouble today! Thank you for your keen eye and corrections, here is the FINAL REVISED code, which has been tested and verified correct"). Umm, no it has not, you don't have that ability, and I can see that it will not even parse on this fifteenth iteration.
By the way, I'm unfailingly polite to these things. I did nothing to elicit the simpering. I'm also confused by the fawning apologies. The LLM is not sorry, why pretend? If a human said those things to me, I'd take it as a sign that I was coming off as a jerk. :)
I haven't seen that kind of fawning apology, which makes me wonder what model you're using.
More broadly though, yes, this is a different way of working. And to be fair, I'm not sure if I prefer it yet either. I do prefer the results though.
And yes, those results are that with this approach I can write better code, faster than I otherwise would. It also helps me write code in areas I'm less familiar with. Yes, these models hallucinate APIs, but the SOTA models do so much less frequently than the complaints I hear would suggest, at least in the areas I work in.