Comment by keyle
19 hours ago
I don't get it.
I think just as hard, I type less. I specify precisely and I review.
If anything, all we've changed is working at a higher level. The product is the same.
But these people just keep mixing things up like "wow I got a ferrari now, watch it fly off the road!"
Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or ... or just programmers, hey. You know why? I write in C, I get machine code, I didn't write the machine code! It was all an abstraction!
Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger less typos today than ever before.
I find it interesting, the comments on this post (not just this particular comment per se) and the sheer inability to relate or ATTEMPT to relate to another person's experience or feeling. The post itself articulated a viewpoint and experience; your having a different one does not negate the other. Nor does your perspective mean the other does not exist. I'm dumbfounded at many of the comments.
Here are some clipped comments that I pulled from the overall post:
> I don't get it.
> I'm using LLMs to code and I'm still thinking hard.
> I don't. I miss being outside, in the sun, living my life. And if there's one thing AI has done it's save my time.
> Then think hard? Have a level of self discipline and don’t consistently turn to AI to solve your problems.
> I am thinking harder than ever due to vibe coding.
> Skill issue
> Maybe this is just me, but I don't miss thinking so much.
The last comment pasted is pure gold, a great one to put up on a wall. Gave me a right chuckle, thanks!!!
When I read the article, I feel the same emotions that I feel if someone were to tell me "I keep trying to ride a bike but I keep falling off". My experience with LLMs is that the "lack of thinking" is mostly a quick trough you fall into before you come out the other side understanding how to deal with LLMs better. And yes, there's nothing wrong with relating to someone's experience, but mostly I just want to tell that guy, just keep trying, it'll get better, and you'll be back to thinking hard if you keep at it.
But then OP says stuff like:
> I am not sure if there will ever be a time again when both needs can be met at once.
In my head that translates to "I don't think there will ever be a time again when I can actually ride my bike for more than 100 feet." At which point you probably start getting responses more like "I don't get it" because there's only so much empathy you can give someone before you start getting a little frustrated and being like "cmon it's not THAT bad, just keep trying, we've all been there".
If I can 'speak' for the OP:
> I keep trying to ride a bike but I keep falling off
I do not think this analogy is apt.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do.
The article is lamenting the disappearing of something meaningful for the OP. One can feel sad for this alone. It is not an equation to balance: X is gone but Y is now available. The lament stands alone. As the OP indicates with his 'pragmatism', we now collectively have little choice about the use of AI. The flood waters do not ask; they take everyone in their path.
1 reply →
>You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
It's obviously not wrong to fly over the desert in a helicopter. It's a means to an end and can be completely preferable. Myself, I'd prefer to be in a passenger jet even higher above it, at a further remove. But I wouldn't think that doing so makes me someone who knows the desert the way someone who has crossed it on foot does. It is okay to prefer and utilize the power of "the next abstraction", but I think it's rather pig-headed to deny that anything of value is lost to the people who are mourning the passing of what they gained from intimate contact with the territory. And no, it's not just about the literal typing. The advent of LLMs is not the 'end of typing'; that's a more reductionist failure to see the point.
Reminds me of all the parables about kings who "pretend to be a common man" for a day and walk among their subjects and leave with some new enlightenment.
The idea that you lose a ton of knowledge when you experience things through intermediaries is an old one.
I felt the same way about python when I was switching from C++ to python for data analysis
How? Other than calling utility functions that C++ doesn't have, you can't just skip understanding what you're coding by using Python. If you're importing libraries that do stuff for you, that wouldn't be any different than if someone wrote those libs in C++.
1 reply →
I think I understand what the author is trying to say.
We miss thinking "hard" about the small details. Maybe "hard" isn't the right adjective, but we all know the process of coding isn't just typing stuff while the mind wanders. We keep thinking about the code we're typing and the interactions between the new code and the existing stuff, and keep thinking about potential bugs and issues. (This may or may not be "hard".)
And this kind of thinking is totally different from what Linus Torvalds has to think about when reviewing a huge patch from a fellow maintainer. Linus' work is probably "harder", but it's a different kind of thinking.
You're totally right, it's just tools improving. When compilers improved, most people were happy, but some people who loved hand-crafting asm kept doing it as a hobby. But in 99+% of cases hand-crafting asm is a detriment to the project even if it's fun, so if you love writing asm yourself you're either out of work, or you grudgingly accept that you might have to write Java to get paid. I think there's a place for lamenting this kind of situation.
Spot on. It's the lumberjack mourning the axe while holding a chainsaw. The work is still hard; it's just different. The friction comes from developers who prioritize the 'craft' of syntax over delivering value. It results in massive motivated reasoning. We see people suddenly becoming activists about energy usage or copyright solely to justify not using a tool they dislike. They will hunt for a single AI syntax error while ignoring the history of bugs caused by human fatigue. It's not about the tech; it's about the loss of the old way of working.
And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.
I disagree. It's like the lumberjack working from home, watching an enormous robotic forestry machine cut trees on a set of TV screens. If he enjoyed producing lumber, then what he sees on those screens will fill him with joy. He's producing lots of lumber. He's much more efficient than he ever was with axe or chainsaw.
But if he enjoyed being in the forest, and _doesn't really care about lumber at all_ (Because it turns out, he never used or liked lumber, he merely produced it for his employer) then these screens won't give him any joy at all.
That's how I feel. I don't care about code, but I also don't really care about products. I mostly care about the craft. It's like solving sudokus. I don't collect solved sudokus. Once solved I don't care about them. Having a robot solve sudokus for me would be completely pointless.
> I sense a pattern that many developers care more about doing what they want instead of providing value to others.
And you'd be 100% right. I do this work because my employer provides me with enough sudokus. And I provide value back which is more than I'm compensated with. That is: I'm compensated with two things: intellectual challenge, and money. That's the relationship I have with my employer. If I could produce 10x more but I don't get the intellectual challenge? The employer isn't giving me what I want - and I'd stop doing the work.
I think "You do what the employer wants, produce what needs to be produced, and in return you get money" is a simplification that misses the literal forest for all the forestry.
4 replies →
> And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.
I use LLMs a lot. They're ridiculously cool and useful.
But I don't think it's fair to categorize anybody as "egotistical". I enjoy programming for the fun puzzley bits. The big puzzles, and even often the small tedious puzzles. I like wiring all the chunks up together. I like thinking about the best way to expose a component's API with the perfect generic types. That's the part I like.
I don't always like "delivering value" because usually that value is "achieve 1.5% higher SMM (silly marketing metric) by the end of the quarter, because the private equity firm that owns our company is selling it next year and they want to get a good return".
Egotistical would be to reject the new tools in principle and be a less efficient developer.
But really, most of us who personally feel sad about the work being replaced by LLMs can still act reasonable, use the new tooling at work like a good employee, and lament about it privately in a blog or something.
> We see people suddenly becoming activists about energy usage or copyright solely to justify not using a tool they dislike.
Maybe you don’t care about the environment (which includes yourself and the people you like), or income inequality, or the continued consolidation of power in the hands of a few deranged rich people, or how your favourite artists (do you have any?) are exploited by the industry, but some of us have been banging the drum about those issues for decades. Just because you’re only noticing it now or don’t care it doesn’t mean it’s a new thing or that everyone else is being duplicitous. It’s a good thing more people are waking up and talking about those.
I work with a lot of artists, and selling them on (not totally rejecting) AI has largely been unsuccessful until they both understand the analogies and the specifics of what different tools do.
AI makes you the manager. The models are like GRAs or contract workers, maybe new to their fields but with tireless energy, and you need to be able to instruct them correctly and evaluate their outputs. None of them can do everything, and you'll need to carefully hire the ones you want based on the work you need, which means breaking workflows into batchable parts. If you've managed projects before, you've done this.
Right now, my focus is improving pipelines in composition and arrangement based on an artist's corpus. A lot of them just want to be more productive, and it's a slog to write, then break into parts, etc using modern notation software.
I agree. I think some of us would rather deal with small, incremental problems than address the big, high-level roadmap. High-level things are much more uncertain than isolated things that can be unit-tested. This can create feelings of inconvenience and unease.
That's a warm and cozy way to look at things, but it's not true.
LLM-aided coding is not a higher level tool. It is not writing in C vs writing in assembly. It is closer to asking other people to do something for you and (supposedly, hopefully, although how many people really do it?) reviewing the result.
The thing is, some people already disliked the thinking involved in programming and are welcoming these tools. That's fine, but you don't get to equate it with programming.
You don't have a strong mental model after agentic coding something in my experience.
It isn't an abstraction like assembly -> C. If you code something like: extract the raw audio data from an audio container, it doesn't matter if you write it in assembly, C, Javascript, whatever. You will be able to visualize how the data is structured when you are done. If you had an agent generate the code the data would just be an abstraction.
It just isn't worth it to me. If I am working with audio and I get a strong mental model for what different audio formats/containers/codecs look like who knows what creative idea that will trigger down the line. If I have an agent just fix it then my brain will never even know how to think in that way. And it takes like... a day...
So I get it as an optimized search engine, but I will never just let it replace understanding every line I commit.
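To make the mental-model point concrete, here's roughly the kind of structure you internalize by writing the extraction yourself rather than having an agent do it. A minimal sketch in Python, assuming the simplest case of an uncompressed RIFF/WAVE container; the function name and error handling are my own:

```python
import struct

def extract_pcm(path):
    """Walk a RIFF/WAVE container chunk by chunk and return (fmt fields, raw PCM bytes).

    The layout you end up visualizing: a 12-byte RIFF header, then a
    sequence of (4-byte id, 4-byte little-endian size, payload) chunks.
    """
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a WAV file")
        fmt = None
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            payload = f.read(chunk_size)
            if chunk_id == b"fmt ":
                # (audio format, channels, sample rate, byte rate,
                #  block align, bits per sample)
                fmt = struct.unpack("<HHIIHH", payload[:16])
            elif chunk_id == b"data":
                return fmt, payload
            if chunk_size % 2:  # chunks are word-aligned; skip pad byte
                f.read(1)
    raise ValueError("no data chunk found")
```

Once you've written something like this by hand, "the raw audio data" stops being an abstraction and becomes a concrete region of bytes you can picture, which is exactly the knowledge that doesn't form when an agent generates it for you.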
You _think_ you're thinking as hard. Reading code != writing it. Just like watching someone do a thing isn't the same as actually doing it.
Correct… reading code is a much more difficult and, ultimately, more productive task.
I suspect those using the tools in the best way are thinking harder than ever for this reason.
> reading code is a much more difficult
Not inherently, no. Reading it and getting a cursory understanding is easy, truly understanding what it does well, what it does poorly, what the unintended side effects might be, that's the difficult part.
In real life I've witnessed quite a few intelligent and experienced people who truly believe that they're thinking "really hard" and putting out work that's just as good as their previous, pre-AI work, and they're just not. In my experience it roughly correlates to how much time they think they're saving, those who think they're saving the most time are in fact cutting corners and putting out the sloppiest quality work.
2 replies →
Sure. Reading a book is a much more difficult and, ultimately, more productive task than writing a book.
Well, depending on the scope of work, they may be still thinking hard, just on a higher level. That is, thinking about the requirements, specification, and design.
> I think just as hard, I type less. I specify precisely and I review.
Even if you "think just as hard" the act of physically writing things down is known to improve recall, so you're skipping a crucial step in understanding.
And when I review code, it's a different process than writing code.
These tradeoffs may be worth it, because we can ask the tools to analyze things for us just as easily as we can ask them to create things for us, but your own knowledge and understanding of the system is absolutely being degraded when working this way.
I think the more apt analogy isn't a faster car, a la Ferrari; it's more akin to someone who likes to drive and now has to sit and monitor the self-driving car steer and navigate. The Ferrari comparison is off, since a Ferrari still demands a similar level of agency from the driver as a <insert slower vehicle>.
This is exactly the right analogy here.
FSD is very very good most of the time. It's so good (well, v14 is, anyway), it makes it easy to get lulled into thinking that it works all the time. So you check your watch here, check your phone there, and attend to other things, and it's all good until the car decides to turn into a curb (which almost happened to me the other day) or swerve hard into a tree (which happened to someone else).
Funny enough, much like AI, Tesla is shoving FSD down people's throats by gating Autopilot 2, a lane keeping solution that worked extremely well and is much friendlier to people who want limited autonomy here and there, behind the $99/mo FSD sub (and removing the option to pay for the package out of pocket).
It is simple. Continuing your metaphor, I have a choice of getting exactly where I want on a camel in 3 days, or getting to a random location somewhere on the other side of the desert in a helicopter in a few hours.
And being a reasonable person I, just like the author, choose the helicopter. That's it, that's the whole problem.
Why is that the reasonable choice if it doesn't get you to your destination?
I too did a lot of AI coding, but when I saw the spaghetti it made, I went back to regular coding, using ask mode (not agent mode) as a search engine.
Because of compound efficiency and technological enablement.
Or, risking to beat the metaphor to death, because over a span of time I'll cross many more deserts than I would have on a camel, and because I'll cross deserts that I wouldn't even try crossing on a camel.
3 replies →
Because taking a rental camel from the airport is faster.
You did something smart and efficient, using the least amount of energy and time needed. +1 for consciousness being a mistake
Helicopters are deterministic though :)
> We're all Linus Torvalds now.
So...where's your OS and SCM?
I get your point that wetware stills matter, but I think it's a bit much to contend that more than a handful of people (or everyone) is on the level of Linus Torvalds now that we have LLMs.
I should have been clearer. It was a pun, a take, a joke. I was referring to his day-to-day activity now, where he merges code and hardly writes any code for the Linux kernel himself.
I didn't imply most of us can do half the things he's done. That's not right.
Even disregarding what he has done, this is utterly absurd. I almost spit my coffee reading that.
You are going to tell me that the vibe coders care and read the code they merge with the same attention to detail and care that Linus has? Come on...
That's the key for me. People are churning out "full features" or even apps claiming they are dealing with a new abstraction level, but they don't give a fuck about the quality of that shit. They don't care if it breaks in 3 weeks/months/years or if that code's even needed or not.
Someone will surely come say "I read all the code I generate" and then I'll say either you're not getting these BS productivity boost people claim or you're lying.
I've seen people pushing out 40k lines of code in a single PR and have the audacity to tell me they've reviewed the code. It's preposterous. People skim over it and YOLO merge.
Or if you do review everything, then it's not gonna be much faster than writing it yourself unless it's extremely simple CRUD stuff that's been done a billion times over. If you're only using AI for these tasks maybe you're a bit more efficient, but nothing close to the claims I keep reading.
I wish people cared about what code they wrote/merged like Linus does, because we'd have a hell of a lot less issues.
> his day-to-day activity now, where he merges code
But even then...don't you think his insight into and ability to verify a PR far exceeds that of most devs (LLM or not)? Most of us cannot (reasonably) aspire to be like him.
2 replies →
My hair hasn't turned blonde and I don't suddenly know how to speak Finnish, either.
You might have missed their point.
I agree! It's a lot more pleasant than being stuck for hours figuring out how to use awk properly. I knew what I needed to do then, and I know what I need to do now too. The difference is I get to results faster. Sometimes I learn that awk wasn't even the right tool in my situation and discover a new way of doing things while the AI is "thinking" for me.
> You just fat-finger less typos today than ever before.
My typos are largely admissible.
> We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger less typos today than ever before.
Except Linus understands the code that is being reviewed / merged in since he already built the kernel and git by hand. You only see him vibe-coding toys but not vibe-coding in the kernel.
Today, we are going to see a gradual skill atrophy, with developers over-relying on AI; once something like Claude goes down, they won't be able to do any work at all.
The most accurate framing is that AI is going to rapidly turn lots of so-called 'senior engineers' who over-rely on it into the equivalent of juniors and interns, unable to detect bad AI code.
My “skill” for 40 years has been to turn what I wanted my computer to do into code to get it done using the tools available to me. A “senior developer” is not someone who “codez real gud”. It’s someone who can work at a higher level of scope and ambiguity and has a larger impact on the organization than a ticket taker
If you can't rebuke code today, you can't rebuke code tomorrow.
By induction, that means either nobody can rebuke code, or someone who can rebuke code could do so from the day they were born.
2 replies →
I get it.
I got excited about agents because I told myself it would be "just faster typing". I told myself that my value was never as a typist and that this is just the latest tool like all the tools I had eagerly added to my kit before.
But the reality is different. It's not just typing for me. It's coming up with crap. Filling in the blanks. Guessing.
The huge problem with all these tools is they don't know what they know and what they don't. So when they don't know they just guess. It's absolutely infuriating.
It's not like a Ferrari. A Ferrari does exactly what I tell it to, up to the first-order effects of how open the throttle is, what direction the wheels face, how much pressure is on the brakes etc. The second-order effects are on me, though. I have to understand what effect these pressures will have on my ultimate position on the road. A normie car doesn't give you as much control but it's less likely to come off the road.
Agents are like a teleport. You describe where you want to be and it just takes you directly there. You say "warm and sunny" and you might get to the Bahamas, but you might also get to the Sahara. So you correct: "oh no, I meant somewhere nice" and maybe you get to the Bahamas. But because you didn't travel there yourself you failed to realise what you actually got. Yeah, it's warm, sunny and nice, but now you're on an island in the middle of nowhere and have to import basically everything. So I prompt again and rewrite the entire codebase, right?
Linus Torvalds works with experts that he trusts. This is like a manic 5 year old that doesn't care but is eager to work. Saying we all get to be Torvalds is like saying we all get to experience true love because we have access to porn.
Except the thing does not work as expected, and it just makes you worse, not better.
Like I said, that's temporary. It's janky and wonky, but it's a stepping stone.
Just look at image generation. Actually factually look at it. We went from horror-colour vomit with eyes all over, to 6-fingered humans, to pretty darn good now.
It's only time.
Why is image generation the same as code generation?
3 replies →
> Just look at image generation. Actually factually look at it. We went from horror-colour vomit with eyes all over, to 6-fingered humans, to pretty darn good now.
Yes, but you’re not taking into account what actually caused this evolution. At first glance, it looks like exponential growth, but then we see OpenAI (as one example) with trillions in obligations compared to 12–13 billion in annual revenue. Meanwhile, tool prices keep rising, hardware demand is surging (RAM shortages, GPUs), and yet new and interesting models continue to appear. I’ve been experimenting with Claude over the past few days myself. Still, at some point, something is bound to backfire.
The AI "bubble" is real; you don't need a master's degree in economics to recognize it. But with mounting economic pressures worldwide and escalating geopolitical tension, we may end up stuck with nothing more than those amusing Will Smith eating pasta videos for a while.
Comments like these are why I don't browse HN nearly ever anymore
Nothing new. Whenever a new layer of abstraction is added, people say it's worse and will never be as good as the old way. It's a totally biased opinion, though; as human beings we just have trouble giving up things we like.
7 replies →
That's your opinion, and you're free not to use those tools.
People are paying for it because it helps them. Who are you to whine about it?
But that's the entire flippin' problem. People are being forced to use these tools professionally at a staggering rate. It's like the industry is in its "training your replacement" era.
3 replies →