Comment by JohnKemeny
6 days ago
But they say "yes, it didn't work 6 months ago, but it does now", and they say this every month. They're constantly moving the goalposts.
Today it works, it didn't in the past, but it does now. Rinse and repeat.
It doesn’t really matter what this or that person said six months ago or what they are saying today. This morning I used cursor to write something in under an hour that previously would have taken me a couple of days. That is what matters to me. I gain nothing from posting about my experience here. I’ve got nothing to sell and nothing to prove.
You write like this is some grand debate you are engaging in and trying to win. But to people on what you see as the other side, there is no debate. The debate is over.
You drag your feet at your own peril.
The thing about people making claims like “An LLM did something for me in an hour that would take me days” is that people conveniently leave out what their own skill level is.
I’ve definitely seen humans do stuff in an hour that takes others days to do. In fact, I see it all the time. And sometimes, I know people who have skills to do stuff very quickly but they choose not to because they’d rather procrastinate and not get pressured to pick up even more work.
And some people waste even more time writing stuff from scratch when libraries exist for whatever they’re trying to do, which could get them up and running quickly.
So really I don’t think these bold claims of LLMs being so much faster than humans hit as hard as some people think they do.
And here’s the thing: unless you’re using the time you save to fill yourself up with even more work, you’re not really making productivity gains, you’re just using an LLM to acquire more free time on the company dime.
Again, implicit in this comment is the belief that I am out to or need to convince you of something. You would be the only person who would benefit from that. I don’t gain anything from it. All I get out of this is having insulting comments about my “skill level” posted by someone who knows nothing about me.
>And some people waste even more time writing stuff from scratch when libraries exist for whatever they’re trying to do
That's an argument for LLMs.
>you’re just using an LLM to acquire more free time on the company dime.
This is a bad thing?
> you’re just using an LLM to acquire more free time on the company dime
You might as well do that since any productivity gains will go to your employer, not you.
this is only a compelling counter-argument if you are referring to a single, individual person who is saying this repeatedly. and there probably are! but the author of this article is not that person, and is also speaking to a very specific loop that only first truly became prevalent 6-9 months ago.
I don’t think this is true, actually. There was a huge shift in LLM coding ability with the release of Sonnet 3.5. That was a real shift in how people started using LLMs for coding. Before that it was more of a novelty, not something people used a lot for real work. As someone who is not a software engineer, as of about November 2024, I “write” hundreds of lines of code a day for meaningful work to get done.
How did you manage before?
The work just wasn’t done. Or it took enough time for me to go and learn how to do it.
"they say this every month" But I think the commenter is saying "they" comprises many different people, and they can each honestly say, at different times, "LLMs just started working". I had been loving LLMs for solving NLP since they came out, and playing with them all the time, but in my field I've only found them to improve productivity earlier this year (gemini 2.5).
Why focus on the 6 months, or however long you think the cycle is? The milestones of AI coding are self-explanatory: autocomplete (shit) -> multi-file edits (useful for simple cases) -> agents (feedback loop with RAG & tool use), which is where we are now.
Really think about it and ask yourself if it's possible that AI can make any, ANY work a little more efficient?
I don't really get this argument. Technology can be improving, can't it? You're just saying that people claiming it's improving isn't a great signal. Maybe not, but you still don't conclude that the tech isn't improving, right? If you're old enough, you remember the internet was very much hyped. Al Gore was involved. But it's probably been every bit as transformative as promised.
Technology improving is not the issue.
1. LLM fanboy: "LLMs are awesome, they can do x, y, and z really well."
2. LLM skeptic: "OK, but I tried them and found them wanting for doing x, y, and z"
3. LLM fanboy: "You're doing it wrong. Do it this way ..."
4. The LLM skeptic goes to try it that way, still finds it unsatisfactory. A few months pass....
5. LLM fanboy: "Hey, have you tried model a.b.c-new? The problems with doing x, y, and z have now been fixed" (implicitly now agrees that the original complaints were valid)
6. LLM skeptic: "What the heck, I thought you denied there were problems with LLMs doing x, y, and z? And I still have problems getting them to do it well"
7. Goto 3