Comment by Ronsenshi
2 days ago
Why is not using LLMs self-harming? We've been writing code without the help of LLMs for many decades just fine.
2 days ago
> Why is not using LLMs self-harming? We've been writing code without the help of LLMs for many decades just fine.
This is what I find amusing: we're in tech, how can you ask that? We've been rubbing stones together to start fires too, but if you stick to that and refuse to use electricity, that's self-harming because you lose out on all the benefits.
You can't keep doing things the same way; that's not how technology as an industry works. The whole point is to come up with newer and better things. If you're a user of technology, then you can think like that and keep using old tech until the natural balance of things forces you to change.
I fully expect LLMs to be obsolete in a few decades, and I'm now wondering whether people then will say that LLMs have been serving them just fine for decades.
"just fine" isn't good enough in tech, "better" is always the goal.
> that's self-harming because you lose out on all the benefits.
This is a big stretch in reasoning. Four years ago we were not harming ourselves by not having AI and writing code ourselves.
At most you're hurting your ability to compete with "vibe coders", whose metric so far has been lines of code and $ spent per day running agents, not successful products.
Have you considered how relying on AI affects your own programming skills?
I think you have to keep in mind that what you write is more important than how you write it. Can you write better programs with this tool or not? All the time I used to spend on Stack Overflow, asking people questions, and scouring bad documentation and source code, AI can now handle for me. Now, if you're a 10x engineer, or whatever the term is these days, and you're telling me you have no need for research, for PoC code, or for critical analysis of your code, then more power to you. But most people aren't in that boat.
> Have you considered how relying on AI affects your own programming skills?
Yes, about as much as it did before, but now I have more time to think more deeply about solutions instead of cosmetic things, syntactic nonsense, tedious b.s., etc. It's like asking how using an IDE with auto-complete affects your programming skills when you could be writing in vim or Notepad. Doing tedious things, knowing how to beg the right way on the correct forum, hunting down the best library or the best doc: these are not programming skills. Like I said in my original comment, I'm not using it to write the code for me; I'm using it to do things that have nothing to do with solving the problem at hand.
With AI's assistance, you can become a better problem solver. Heck, you might even get good at spotting mistakes other people make, just by force of habit from correcting the mistakes AI makes. It's just a utility: take all the benefits from it and discard the things you don't like about it. Nothing forces you to use the code it generates. Think of it as a junior programmer doing PRs for menial tasks for you, and that's the best case scenario. It's just a really good Google search that gives you the results you're looking for instead of making you jump through hoops to get something of lesser quality. Many times I've mistrusted the AI and done things the old way, and its solution contained more nuance, detail, and considerations than I managed to glean with my initial attempts.
Before AI, people copy-pasting things from Stack Overflow, including bugs, and including good code that becomes buggy in certain situations, was a major trend. AI does that, except better, both in terms of result quality and in helping you avoid pitfalls. But IMHO you still shouldn't trust it: you still have to vet everything it does, and stop using it if that effort takes more time than doing things manually would have.