Comment by dml2135
10 months ago
> I am still trying to sort out why experiences are so divergent. I've had much more positive LLM experiences while coding than many other people seem to, even as someone who's deeply skeptical of what's being promised about them. I don't know how to reconcile the two.
I am also trying to sort this out, but I'm probably someone you'd consider to be on the other, "anti-LLM" side.
I wonder if part of this is simply level of patience, or, similarly, having a work environment that's chill enough to allow for experimentation?
In my admittedly brief attempts at agentic coding so far, I've usually given up pretty quickly because I run into what others in the thread described: the agent just spinning its wheels, or going off and mangling the codebase like a lawnmower.
Now, I could totally see a scenario where, if I spent time tweaking prompts, writing rule files, and experimenting with different models, I could improve that output significantly. But this is being sold to me as a productivity tool. I've got code to write, and I'm pretty sure I can write it fairly quickly myself, and I simply don't have time at my startup to muck around with babysitting an AI all day -- I have human junior engineers that need babysitting.
I feel like I need to be a lot more convinced that the current models can actually improve my productivity before I spend the time required to get there. Maybe that's a chicken-or-egg problem, but that's how it is.
> I'm probably someone you'd consider to be on the other, "anti-LLM" side.
I think if you're still trying stuff, you're not; otherwise, you wouldn't even use them. What I'd say instead is that you're having a bad time, whereas I'm not.
> I wonder if part of this is simply level of patience, or, similarly, having a work environment that's chill enough to allow for experimentation?
Maybe? I don't feel like I've had to have a ton of patience. But maybe I'm just discounting that, or I'm chiller or something, as you allude to.
> Now, I could totally see a scenario where if I spent time tweaking prompts, writing rule files, and experimenting with different models, I could improve that output significantly.
I think this is it. Some people are willing to invest the time in writing natural language code for the LLM.
> if I spent time tweaking prompts, writing rule files, and experimenting with different models, I could improve that output significantly. But this is being sold to me as a productivity tool. I've got code to write, and I'm pretty sure I can write it fairly quickly myself, and I simply don't have time at my startup to muck around with babysitting an AI all day -- I have human junior engineers that need babysitting.
I agree, and I think this is the divide: skeptical people see this as a flimsy patch that will eventually collapse. For my part, I can't see how maintaining ever-growing files of natural-language instructions won't lead to a huge cognitive load fairly soon, and I'd bet we're about to hear people discussing how to use LLMs to manage that, too.