Comment by subarctic

7 days ago

I just feel so discouraged reading this somehow. I used to have this hard-to-get, in-demand skill that paid lots of money, and even though programming languages, libraries and web frameworks were always evolving, I felt like I could always keep up because I'm smart. But now, with people like Simon Willison writing about the new way of coding with these agents and multiple streams of work going on at a time, and it sounding like this is the future, I just feel discouraged because it sounds like so much work. I've tried using coding agents and they help a bit, but I find it way less fun to be waiting around for agents to do stuff, and it's way harder to get into a flow state managing multiple of these things. It makes me want to move into something completely different, like sales.

I'm really sorry to hear this, because part of my goal here is to help push back against the idea that "programming skills are useless now, anyone can get an LLM to write code for them".

I think existing software development skills get a whole lot more valuable with the addition of coding agents. You can take everything you've learned up to this point and accelerate the impact you can have with this new family of tools.

I said a version of this in the post:

> AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs and coding agents.

A brand new vibe coder may be able to get a cool UI out of ChatGPT, but they're not going to be able to rig up a set of automated tests with continuous integration and continuous deployment to a Kubernetes cluster somewhere. They're also not going to be able to direct three different agents at once in different areas of a large project that they've designed the architecture for.

  • I'm not sure that having the patience to work with something that has very inconsistent performance and frequently lies is an extension of existing development skills. It doesn't work like the tools developers use and it doesn't work like the people developers work with. Furthermore, techniques for working with agents today may be completely outdated a year from now. The acceleration is also inconsistent: sometimes there's an acceleration, sometimes a deceleration.

    Generative AI is at the same time incredibly impressive and completely unreliable. This makes it interesting, but also very uncertain. Maybe it's worth my investment to learn how to master today's agents, and maybe I'd be better off waiting until these things become better.

    You wrote:

    > Getting good results out of a coding agent feels uncomfortably close to getting good results out of a human collaborator. You need to provide clear instructions, ensure they have the necessary context and provide actionable feedback on what they produce.

    That is true (about people) but misses out the most important thing for me: it's not about the information I give them, but about the information they give me. For good results, regardless of their skill level, I need to absolutely trust that they tell me what challenges they've run into and what new knowledge they've gained that I may have missed in my own understanding of the problem. If that doesn't happen, I won't get good results. If that kind of communication only reliably happens through code I have to read, it becomes inefficient. If I can't trust an agent to tell me what I need to know (and what I trust when working with people) then the whole experience breaks down.

    • > I'm not sure that having the patience to work with something that has very inconsistent performance and frequently lies is an extension of existing development skills.

      If you’ve been tasked with leadership of an engineering effort involving multiple engineers and stakeholders, you know that this is in fact a crucial part of the role the more senior you get. It is much the same with people: know their limitations, show them a path to success, help them overcome their limitations by laying down the right abstractions and giving them the right coaching, and make it easier to do the right thing. Most of the same approaches apply. When we do these things with people it’s called leadership or management. With agents, it’s context engineering.

      17 replies →

    • > incredibly impressive and completely unreliable.

      There have been methods of protecting against this since before AI, and they still apply. LLMs work great with test driven development, for example.

      I would say that high-level knowledge and good engineering practices are more important than ever, but they were always important.
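
      For example, a minimal sketch of that kind of guardrail (pytest here; `slugify` and its module are hypothetical names, not from any real project): the failing tests are written first and handed to the agent as acceptance criteria, so its output gets checked mechanically instead of being taken on faith.

      ```python
      # tests/test_slugify.py - illustrative only; myproject.text.slugify is a
      # hypothetical helper the agent is asked to implement. The tests exist
      # before the implementation, so "done" means "pytest passes", not
      # "the agent says it's done".
      import pytest

      from myproject.text import slugify


      @pytest.mark.parametrize("raw,expected", [
          ("Hello, World!", "hello-world"),
          ("  spaces   everywhere ", "spaces-everywhere"),
          ("already-slugged", "already-slugged"),
      ])
      def test_slugify_basics(raw, expected):
          assert slugify(raw) == expected


      def test_slugify_rejects_empty_input():
          with pytest.raises(ValueError):
              slugify("")
      ```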

      8 replies →

    • > doesn't work like the people developers work with

      I don't know.

      This is true for people working in an environment that provides psychological safety, has room for mistakes and rewards hard work.

      This might sound cynical, but in all other places I see the "lying to cover your ass" behavior present in one form or another.

    • > It doesn't work like the tools developers use and it doesn't work like the people developers work with. Furthermore, techniques for working with agents today may be completely outdated a year from now.

      Sounds like big money to be made in improving UX

    • > I'm not sure that having the patience to work with something that has very inconsistent performance and frequently lies is an extension of existing development skills.

      That's a basic skill you gotta have if you're leading anything or anyone. There'll always be levels of that. So if you're planning to lead anyone in your career, it's a good skillset to develop.

      1 reply →

  • While this is true, I definitely find that the style of the work changes a lot. It becomes much more managerial, and less technical. I feel much more like a mix of project and people manager, but without the people. I feel like the jury is still out on whether I’m overall more productive, but I do feel like I have less fun.

    • My lessons so far:

      1. Less fun.

      2. A lot more "review fatigue".

      3. Tons of excess code I'd never put in there in the first place.

      4. Frustration with agents being too optimistic, which with time verges on the ludicrous ("Task #3 has been completed successfully with 98% tests failing. [:useless_emojis:]")

      5. Frustration with agents routinely going down a rabbit hole or going in circles, and the effort needed to set that straight (Anthropic plainly advises starting from scratch in such cases - which is sound advice, but makes me feel like I just lost the last 5 hours of my life without even learning anything new).

      I stopped using agents and use LLMs very sparingly (e.g. for review - they sometimes find details I missed and occasionally have an interesting solution), but I'm enjoying my work so much more without them.

      21 replies →

    • Yeah, exactly - it changes the job from programmer to (technical) project manager, which is both more proactive (writing specifications) and more reactive (responding to an agent finishing). The 'sprinting' remark is apt, because if your agents are not working, you need to act. And it's already established that a manager shouldn't micromanage; that leads to burnout and the like. But that's why software engineers will remain relevant: managers need someone to rely on who can handle the nitty-gritty details of what they ask for.

    • I also think that managing a coding agent isn't like managing a person. A person is creative; they will come up with ways that challenge whatever idea you have, and that usually makes the project better. A coding agent never challenges you, mostly just does whatever you want, and you don't end up having the kind of intellectual person-to-person engagement that is why working on teams can be fun. So it kind of isolates you. And I think the primary reason all this happens is that marketing people have decided to call all of these coding agents "Artificial Intelligence" instead of "Dev Tools". And instead of calling it "Security" they now call it "AI Alignment". And instead of calling it "data schema" or "Spec sheet" they call it "managing the AI context". So now we are all biased to see these things as some kind of entity that we can treat like a colleague, and we all bought this idea because the tool can chat with you. But it isn't a colleague. It doesn't think and feel, it doesn't provide intellectual engagement; it is simply a lossy, noisy tool that tries to translate human language into computer language, whether that's Python or machine code.

      1 reply →

  • > They're also not going to be able to direct three different agents at once in different areas of a large project that they've designed the architecture for.

    I wonder what the practical limits are.

    As a senior dev on a greenfield solo project, it's too exhausting for me to run two parallel agents (front/back); most of the time they're waiting for me to spec, review or do acceptance testing. It feels like sprinting, not something I could do day in and day out.

    Might be due to tasks being too fine grained, but assuming larger ones are proportionally longer to spec and review, I don't see more than two (or, okay, three, maybe I'm just slow) being a realistic scenario.

    With more than that, I think we're firmly in vibe coding (or maybe spec-driven vibe coding) territory.

    • At least on a team, the limit is the team's time to review all the code. We've also found that vibe-engineered (or "supervised vibing" as I call it) code tends to have more issues in code review, because a false sense of security creates blind spots when self-reviewing. Even more burden on the team.

      We're experimenting with code review prompts and sub-agents. It seems local reviews are best, so the bulk of the burden falls on the vibing engineer rather than the team.

      14 replies →

    • The exhaustion resonates with me; actually, the context-switching fatigue is why we built Sculptor for ourselves (https://imbue.com/sculptor). We usually see devs running 4-6 agents in parallel in Sculptor today. Personally I think much of the fatigue comes from: 1) friction in spawning agents, 2) friction in reviewing agent changes, and 3) context management annoyance when e.g. you start debugging part of the agent's work but then have to reload context to continue the original task.

      It's still super early, but we've felt a lot less fatigued using Sculptor so far. To make it easier to spawn agents without worrying, we run agents in containers so they can run in YOLO mode and don't interfere with each other. To make it easy to review changes, we made "Pairing Mode", which lets you instantly sync any agent's work from the container into your local IDE to test it, then switch to another.

      For context management, we just shipped the ability to fork agents from any point in the conversation history, so you can reuse an agent that you loaded with high-quality context and fork off to debug an agent's changes or try all the options it presented. It also lets you keep a few explorations going and check in when you have time.

      Anyway, sorry, shilling the product a bit much but I just wanted to say that we've seen people successfully use more than 2 agents without feeling exhausted!

  • I really don't get the idea that LLMs somehow create value. They are burning value. We only get useful work out of them because they consume past work. They are wasteful and only useful in a very contrived context. They don't turn electricity and prompts into work, they turn electricity, prompts AND past work into lesser work.

    How can anyone intellectually honest not see that? Same as burning fossil fuels is great and all except we're just burning past biomass and skewing the atmosphere contents dangerously in the process.

    • > How can anyone intellectually honest not see that?

      The idea that they can only solve problems that they've seen before in their training data is one of these things that seems obviously true, but doesn't hold up once you consistently use them to solve new problems over time.

      If you won't accept my anecdotal stories about this, consider the fact that both Gemini and OpenAI got gold medal level performance in two extremely well regarded academic competitions this year: the International Math Olympiad (IMO) and the International Collegiate Programming Contest (ICPC).

      This is notable because both of those contests have brand new challenges created for them that have never been published before. They cannot be in the training data already!

      16 replies →

    • It's not about being honest. It's about Joe Bullshit from the Bullshit Department having it easier in his/her/their Bullshit Job. Because you see, Joe decided two decades ago to be an "office worker", to avoid the horrors of working honestly with your hands or mind in a real job, like electrician, plumber or surgeon. So his day consists of preparing powerpoints, putting together various Excel sheets, attending whatever bullshit meetings etc. Chances are you've met a lot of Joe Bullshits in your career; you may have even reported to some of them. Now imagine the exhilaration Joe feels when he touches these magic tools. Joe does not really care about his job or about his company. But suddenly Joe can reduce his pain and suffering in a boring-to-death job while keeping those sweet paychecks. Of course Joe doesn't believe his bosses only need him until the magic machine is properly trained, so that he can be replaced and reduced to an Eloi, living off the UBI. Joe Bullshit is selfish. In the 1930s he blindly followed a maniacal dictator because the dictator gave him a sense of security (if you were in the majority population) and a job. There are unfortunately a lot of Joe Bullshits in this world. Not all of them work with Excel. Some of them became self-made "developers" in the last 10 years. I don't mean the honest folks who were interested in technology but never had the means to go to a university. I mean all those ghouls who switched careers after they learnt there was money to be made in IT, and money was their main motivation. They don't really care about the meaning of it all, the beautiful abstractions your mind wanders through as you create entire universes in code. So they are happy to offload it too, because it's just another bullshit job for the Joe Bullshit. And since Joe Bullshit is in the majority, you my friend, with your noble thoughts, are unfortunately preaching to the wind.

      1 reply →

  • I don't think OP thinks his skills are useless per se now, but that the way to apply those skills now feels less fun and enjoyable.

    Which makes perfect sense - even putting aside the dopamine benefits of getting into a coding flow state.

    Coding is craftsmanship - in some cases artistry.

    You're describing Vibe Engineering as management. And sure, a great manager can make more of an impact increasing the productivity of an entire team than a great coder can make by themselves. And sure, some of the best managers are begrudging engineers who stepped up when needed to and never stepped down.

    But most coders still don't want to be managers - and it's not from a lack of skill or interest in people - it's just not what they chose.

    LLM-based vibe coding and engineering is turning the creative craftsmanship work of coding into technical middle management. Even if the result is more "productivity", it's a bit sad.

    • But does anybody really care about what you like? What about all those other professions that got replaced by technology? Did anybody care what they liked? The big question is how software is going to be built most efficiently and most effectively in the future, and how you prepare yourself for this new world. Otherwise you'll end up like all those other professions that got replaced, like the mineworkers, hoping that the good old days will someday return.

      8 replies →

    • This is the heart of it. Most "craft" industries that have not yet been disrupted by technology or been made "more efficient" tend, coincidentally, to be the ones that are in demand, pay well, and that society generally wants "good X" of - e.g. plumbers, electricians, and previously software engineers. Efficiency usually benefits the consumer or the employer, not the craftsmen, in most industries. There's a reason people where I am are saying right now to "get a trade".

      If you look at what still pays well and/or is stable (e.g. where I live, trades are highly paid and stable work), it's usually the craft industries. We still build houses, for example, mostly like we did way back when (i.e. much of the skill is still craft, not industrialized industry), and it shows in their price.

  • I'm getting really great results in a VERY old (very large) codebase by having discussions with the LLM (I'm using Claude Code) and making detailed roadmaps for new features or converting old features to new, more usable/modern code. This means FE and BE changes, usually at the same time.

    I think a lot of the points you make are exactly what I'm trying to do.

    - start with a detailed roadmap (created by the AI from a prompt and written to a file)

    - discuss/adjust the roadmap and give more details where needed

    - analyze existing features for coding style/patterns, reusable code, existing endpoints etc. (write this to a file as well)

    - adjust that as needed for the new feature/converted feature - did it miss something? Is there some specific way this needs to be done that it couldn't have known?

    - step through the roadmap and give feedback at each step (I may need to step in and make changes - I may realize we missed a step, or that there's some funky thing we need to do specifically for this codebase that I forgot about - let the LLM know what the changes are and make sure it understands why those changes were made so it won't repeat bad patterns, i.e. write the change to the .md files to document the update)

    - write tests to make sure everything was covered... etc etc

    Basically all the things you would normally WANT to do but often aren't given enough time to do. Or the things you would need to do to get a new dev up to speed on a project and then give feedback on their code.
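
    As a rough sketch of the shape this takes (the file name, feature and headings below are hypothetical, not a prescribed format), such a roadmap file might look like:

    ```
    # ROADMAP-modernize-reports.md (illustrative)

    ## Goal
    Convert the legacy reports feature (FE + BE) to the current patterns.

    ## Existing patterns to follow (from the analysis file)
    - Reuse the validation helpers the codebase already has
    - New endpoints follow the existing /api/v2 conventions

    ## Steps
    1. [ ] New /api/v2/reports endpoint (BE)
    2. [ ] Port the report UI to the new component pattern (FE)
    3. [ ] Wire the FE to the new endpoint; keep the old route until cutover
    4. [ ] Tests covering each step above

    ## Decisions log
    - Kept the legacy export format because downstream consumers rely on it
      (documented here so the agent doesn't "fix" it back)
    ```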

    I know I've been accomplishing a lot more than I could do on my own. It really is like managing another dev, or maybe like pair programming? Walk through the problem, decide on a solution, iterate over that solution until you're happy with the decided path - but all of that can take ~20 minutes as opposed to hours of meetings. And the end result takes a fraction of the time it would have taken on my own.

    I recently did a task that was allotted 40 hours in less than 2 working days - so probably close to 10-12 hours after adjusting for meetings and other workday blah blah blah. And the 40 hour allotment wasn't padded. It was a big task, but doing the roadmap, then the detailed structure - directory layout, what should be in each file, etc. - cut the time down dramatically.

    I would NOT be able to do this if I, the human, didn't understand the code extremely well and didn't make a detailed plan. We'd just end up with more bad code, or bad & non-working code.

    • Thank you for this post. I don't write much code as I'm currently mostly managing people but I read it constantly. I also do product management. LLMs are very effective at locating and explaining things in complex code bases. I use Copilot to help me research the current implementation and check assumptions. I'm working to extend out in exactly the directions you describe.

      1 reply →

    • This is what I've seen as well - in the past a large refactor for a codebase like that seemed nearly impossible. Now doing something like "add type hints" in python or "convert from js to ts" is possible in a few days instead of months to never.

      Another HUGE one is terraforming our entire stack. It's gone from nearly impossible to achievable with AI.

  • I remember reading a sci-fi book where time was... sharded? And people from different times were thrust together. I think it was a Phoenician army, which had learned to ride and battle bareback.

    And were introduced to the stability of stirrups and saddle.

    They were like demons on those stirrup-equipped horses. They had all the agility of wielding weapons and engaging in battle while hanging on by mane and legs, yet now had (to them) a crazily easy and stable platform.

    When the battle came, the Phoenicians just tore through those armies who had grown up with the stirrup. There was no comparison in skill or capability.

    (Note: I'm positive some of the above may be wrong, but I can't find the story and so am just stating it as best I'm able.)

    My point is, are we in that age? Are we the last skilled, deeply knowledgeable coders?

    I grew up learning to write EEPROMs on burners via the C64. Writing machine language because my machines were too slow otherwise. Needing to find information from massive paper manuals. I had to work it all out myself often, because there was no internet, no code examples; just me thinking of how things could be done. Another person who grew up with some of the same tools and computers once said we are the last generation to understand the true, full stack.

    Now I wonder, is it the same with coding?

    Are we it?

    The end?

  • > they're not going to be able to rig up a set of automated tests with continuous integration and continuous deployment to a Kubernetes cluster somewhere.

    Honestly, I have a ton of experience in system administration, and I'm super comfortable at a command line and using AWS tooling.

    But, my new approach is to delegate almost all of that to Claude, which can access AWS via the command-line interface and generate configuration files for me and validate that they work correctly. It has dramatically reduced the amount of time that I spend fiddling with and understanding the syntax of infra config files.

  • So it's automating away the fun parts, and leaving the humans to rig up automated tests and set up continuous integration...

    And unfortunately people who get to architect anything are a small subset of developers.

  • I appreciate what you're trying to do, but for myself, I'm not depressed because my skills are less valuable. I enjoyed the money but it was never about that for me. I'm depressed because I don't like the way this new coding feels in my brain. My focus and attention are my most precious resources and vibe coding just shatters them. I want to be absorbed in a coding flow where I see all the levels of the system and can elegantly bend the system to my will. Instead I'm stuck reviewing someone/something else's code which is always a grind, never a flow. And I can feel something terrible happening in my brain, which at best can be described as demotivation, and at worst just utter disinterest.

    It's like if I were a gardener and I enjoyed touching dirt and singing to plants, and you're here on Gardener News extolling the virtues of these newfangled tractors and saying they'll accelerate my impact as a gardener. But they're so loud and unpleasant and frankly grotesque and even if I refrain from using one myself, all my neighbors are using them and are producing all their own vegetables, so they don't even care to trade produce anymore--with me or anyone else. So I look out at my garden with sadness, when it gave me such joy for so many decades, and try to figure out where I should move so I can at least avoid the fumes from all the tractors.

    • Well said! Reading this, I'm reminded of the early protests against industrialization and automation in other fields. It checks all the same boxes - insecurity and fear about the future, alienation towards the new tools, ...

      Not saying AI is similar in impact to the loom or something; it just occurred to me how close this is to early Luddite texts.

      15 replies →

    • How beautifully put, and I couldn't agree more. I feel exactly the same way.

      However, I am still unconvinced that software development will go down this way. But if working as a software developer will require managing multiple agents at the same time instead of crafting your own code — you can count me out, too.

    • If it is not about the money, why do you have to use these tools? If you enjoy small farming, why concern yourself with mass production or expensive equipment? Remain in the lane you enjoy?

      2 replies →

    • FWIW, I’m an AI proponent who loves that flow state you are describing. Programming literally was the first time I found it as a youth, and I’ve been addicted to it since then.

      But it’s such a small part of my professional life. Most of what I do is chores and answering simple questions and planning for small iterations on the original thing or setting up a slightly different variant.

      LLMs have freed me of so much of that! Now I outsource most of that work to the LLMs and greedily keep the deep, flow-inducing work for myself.

      And I have a new tool to explain to management why we are investing in all the tooling and processes that we know lead to quality, because the LLMs are catnip for the managerial mind.

      1 reply →

  • > They're also not going to be able to direct three different agents at once in different areas of a large project that they've designed the architecture for.

    Neither can I, sadly. I have one brain cell and I can only really do one thing at a time. Doing more than one leads to a corrupted stack and I make exponentially more mistakes.

  • Have you tried SolveIt (method, tool) from Jeremy Howard yet?

    I was in the first batch last year, where they introduced it, and I'm going to do the second one too.

    It's a very different kind of beast from what is currently being discussed.

  • >accelerate the impact you can have with this new family of tools.

    Tech spent the last 10 years drilling into engineers' heads that scaling your impact is not about writing more or better code, but about influencing the work of other engineers through collaboration, process, documentation, etc. Even the non-managerial "senior IC" tracks are mostly about doing this with greater and greater numbers of people. I wonder if we will start to see recognition in career tracks for people who are actually just extraordinarily productive by themselves or in small groups, or if you'll pretty much just have to be a startup founder to get paid for that.

  • Software developers can 10x-100x their productivity/effectiveness with LLMs.

    Non-developers can go from 0x to 1x. And I'm happy that people are finally able to learn about building software, one way or another.

    And then learn why vibe coding often creates quickly disposable code.

  • This has been my experience as well. If there is a hard problem that needs to be addressed, generative code helps me break the inertia by generating the first draft, and then I get really curious to poke holes in the generated code. I tend to procrastinate when I come across a gnarly issue or something I am not really familiar with, justifying it by saying I need a big block of time to work on it. I use generative code as a pushy "mom/boss/coworker/spouse" to get stuff done.

  • I really hope you are right here, and to be honest it does reflect my limited experience with where I've used AI so far.

    But I'm also not ready to bet the farm on it. I'm seriously considering taking our savings and equity out of our house in a London-adjacent area and moving to a lower cost-of-living area, so that we're practically debt-free. At that point we can survive on a full-time minimum-wage job; anything more than that is a bonus.

  • I still haven't seen any evidence to match these repeated claims of increased efficiency. What I have seen is reports, which make a lot of sense to me, claiming it's all in the user's head.

    • Maybe it's in my head, but I have completed coding projects in about ten working days - while juggling calls and living normal corporate life - that I believe would in the past have taken a team of five offshore maybe 12 weeks.

      The win is that I don't have to share the vision of what needs to be done and how it should all work, and then constantly monitor and reframe that in the face of the team's missteps and real findings. I work with the agents directly, and provided I set the architecture and build up systematically, I can get really good results. The cycle time between me identifying an issue and the issue getting fixed by me and the agents is now minutes, rather than hours or days with an offshore team. Even better, the agents can provide bug-fixing expertise much quicker than Stack Overflow - so I can figure out what's wrong much faster, and specify what needs fixing.

      It is no good walking in and requesting functionality, you need to know how the thing you want should work, and you need to know what good looks like, and what bad looks like, and how good is separated from bad. Then the normal process of discovery ("eep that doesn't actually work like I thought") can take place and you can refactor and repair as required.

      Sometimes I start something that just doesn't work; you have to recognise that you and the agents are lost, and everything needs to be torn down. You then need to think properly about what's gone wrong and why, and then come back with a better idea. Again - just like with dev teams, but much more clearly and much faster.

    • I'm working in corporate and haven't seen it yet. The main thing I see is blogs and whatnot of people building new weekend projects with LLMs - that is, greenfield, non-critical software; the type of software that, if I were to write it, I wouldn't bother with CI, tests, that kind of thing. Sloppy projects, if you will.

      But happy to be corrected - is someone using these agents in their paid / professional / enterprise / team job?

      4 replies →

    • There was an article on here not too long ago - I can’t find it now - where the authors discussed how they went all in on it and were submitting 20k+ line PRs to open source projects in languages they were not very familiar with.

      However, they mentioned you had to let go of reviewing every line of every PR. I read that and was fine with holding off on full vibe coding for now. Nobody intelligent would pay for that and no competent developer would operate like that.

      I have a couple coworkers big on it. The lesser-skilled ones are miserable to work with. I’ve kept my same code review process, but the number of comments left has at least 5x’d (not just from me, either). And I’m not catching everything - I get fatigued and call it done. Duplicated logic, missed edge cases, arbitrary patterns and conventions, etc. The highly skilled ones less so, but I still don’t look forward to reviewing their PRs anymore. Too much work on my end.

      There are many devs who are more focused on results than on being correct. These are the ones I’ve seen most drawn to LLMs/agents. There’s a place for these devs, but having worked on an aging startup’s codebase, I hope there aren’t too many.

  • Of course the devil is in the details. What you say and the skills needed make sense. Unfortunately those are also the easiest aspects to dismiss, either under pressure, as there is often little immediate payoff, or because they're simply the hard part.

    My experience with LLMs in general is that, sadly, they're mostly good bullshitters. (Current Google search is the epitome of worthlessness: the AI summary tries so hard to make things balanced that it just dreams up and exaggerates pros and cons for most queries.) In the same way, platforms like Perplexity are worthless; they seem utterly unable to assign the proper value to the sources they gather.

    Of course that doesn't stop me from using LLMs where they're useful; it's nicer to give the architecture for a solution and let the LLM fill the gaps than to code the entire thing by hand. And code completion in general is a beautiful thing (sadly not where much focus is these days; most of it is on getting the LLM to create complete solutions, while I would be delighted by even better code completion).

    Still, all in all, the more I see LLMs used (or the more I see what I assume are well-meaning people copy/paste LLM-generated responses in place of handwritten ones) across so much of the internet - resulting in a huge decline in factualness and reproducibility (in the sense that original sources get obscured), but an increase in nice full sentences and proper grammar - the more I'm inclined to believe that in the foreseeable future LLMs aren't a net positive.

    (In a way it's also a perfect storm: over the last decade, education deprioritized teaching the skills that would matter especially for dealing with AI, and started educating for the use of specific tools instead of general principles. The product of education became labourers for a specific job, instead of people capable of higher-level abstract reasoning in a general area of expertise.)

  • What is the other part of your goal?

    • Sparking more conversations about practices that work for doing serious production-quality software development with LLMs, especially in larger teams and on larger projects.

      Having a good counter to people who use "vibe coding" as a dismissive term for anything where an LLM is used to help produce software.

      1 reply →

  • What about the accessibility of software development? It's completely vanishing for people who cannot afford to pay for these agents. It used to be a field where you could get a laptop from the scrapyard and learn from there. It feels pointless. Also, agents do not invent things; the creativity part is gone with them. They simply use what they've already seen, repeating the same mistakes a person made a few years ago. It's a dystopian way of working. Sure, it enables one to spew out slop that might make companies money, but there is no passion, sense of exploration, or personal growth. It's all just directing parrots with thumbs...

    • I feel your sentiment. However, anyone with an interest in computers now has access to an LLM, which to me feels like an upgrade over having access to a modem and a search engine. Knowledge is power, right?

      1 reply →

    • > What about the accessibility of software development? It's completely vanishing for people who cannot afford to pay for these agents.

      What do you actually mean by this? It's clearly untrue - anyone can get a laptop, install Linux on it and start bashing out code today, just as they could last week and last year and thirty years ago.

      Do you mean that you think at some point in the future tooling for humans to write code won't exist? Or that employers won't hire human programmers? Or that your pride is hurt? Or that you want your hobby to also be a well-paid job? Or something else?

      3 replies →

  • I need to read through this some more, but there has been another agentic coding paradigm, referred to as spec-driven development.

    I’ll find the link in the morning, but I kinda joke - it’s vibe coding for people who know how to define a problem and iterate on it.

    I’ve got a project reimplementing a service I want to make more uniform. Claude has produced a lot of stuff that would have taken me weeks to do.

    • GitHub's SpecKit is an example: https://github.com/github/spec-kit

      Spec-Driven Development treats the spec as the source of truth and the code as an artifact. As you develop, you modify/add to the spec and the codebase gets updated to reflect it.

      Personally I'm doubtful it can compete with traditional artisanal software engineering, as (IMHO) it boils down to "if only you can spec it precisely enough, it'll work" - and we've tried this with 5GLs and (to some extent) BDD, and it doesn't get you to 100%.

      I do think it's interesting enough to explore, and most of us could use a bit more detail in our Jira tickets.

      4 replies →

The "manage a fleet of massively parallelized agents" gets me uneasy too. It sounds uber powerful on its face. And where all the nerd interest lies.

It sounds stressful, like the ultimate manager job. Not what I signed up for.

But I also still hold onto this idea that shipping tons of iterations of "stuff" was never really the problem. Early in my dev experience I wanted to code everything all day every day. And I did and that's how I learned. And now in my second decade I switched to "why code anything?". In a business sense I mean, coding the thing is almost never the missing piece.

I joke in meetings that the answer is always "yes" whenever cross-functional teams ask "can we do this?". "How hard would x be?". For tech teams the answer _is_ always YES! I get that out of the way because that's never the right question to ask.

  • Absolutely this. LLM assistance means we can work faster, and that we can build things that previously weren't feasible given the available time and resources.

    Which makes the hardest problem in software even harder: what should we build? It doesn't matter how fast you can move if you're consistently solving the wrong problems.

    • > Which makes the hardest problem in software even harder: what should we build?

      You should build what’s personally fun and challenging to you and/or what is useful and solves a problem. Building for any other reason, including and especially the unfettered pursuit of profit, is what turns everything to shit.

      1 reply →

    • Absolutely!

      I've noticed that almost immediately after people discovered GPT could write code, this happened -- startups I worked with started rapidly expanding the scope of what they wanted to make. Suddenly all MVPs had to be multi-tenant with complex authorization, impersonation, microservices, monitoring; all the stuff that we used to build after we got users has now been pulled right up to the starting gate of development -- because AI makes it easy to build all that stuff quickly. But it doesn't tell us if we should.

    • Exactly. I think one of the reasons programmers are becoming so depressed over these AI agents is that they’re finally realizing that it was never really about the code, but about the outcome - and btw, this cold hard fact applies to the pre-LLM era too.

      This occurred to me years ago when I was talking to a friend’s wife, who is a very intelligent and accomplished attorney. She was legitimately surprised when I explained that there were multiple programming languages and technology stacks behind the software that she uses on a daily basis.

      Even my wife, a teacher who is very tech savvy (she’s the one who insisted I try ChatGPT after brushing it off) reminds me on the regular that she doesn’t care about how any of it works just that it doesn’t “glitch” when she’s in the middle of a class. Which has always been good for me to remember when I get off into the weeds yak shaving.

      1 reply →

    • "AI has made coding the easy part. The hard part now is product management", said Andrew Ng.

  • > The "manage a fleet of massively parallelized agents" gets me uneasy too

    It shouldn't. The agents are not good enough to be used in a fleet.

    I have Claude. It's fine, but I'm pretty confident that my low usage of Claude would out-compete a fleet of agents, because it feels like there's an inverse correlation between the number of tokens you spend and the quality of the resulting code (more tokens = more code to review, more bad code slips through).

    • That's basically my finding as well. Agent wrangling is herding cats. Working normally but tapping Claude for the smallest possible thing (look this up, finish this pseudocode, grab an example of this) feels like it works better all around—faster, safer, far fewer tokens, results in work that the team understands, aids flow rather than adding constant context switching...

      Maybe I'm wrong and the time will come to hang up my editor and go open an Italian restaurant or something. Until then I'm really inclined to believe my own eyes.

  • Yes. The first programmers used computers as a necessity to get things done. Difficult mathematical calculations, a fancy control system.

    This is where we should be. Using computers to solve problems. Not just "doing programming".

    Raise your head, look towards the horizon.

    • Yes, and forget about ownership of anything too. Only rental, only hardcore, because life is but an experience, spread your wings and fly, weeee, towards our hyperprofits and your prozac dreams!

      AI threads on HN reek of venture capital agendas so bad it's unbearable.

      2 replies →

I think people underestimate the degree to which fun matters when it comes to productivity. If something isn’t fun then I’ll likely put it off. A 15 minute task can become hours, maybe days long, because I’m going to procrastinate on doing it.

If managing a bunch of AI agents is a very un-fun way to spend time, then I don’t think it’s the future. If the new way of doing this is more work and more tedium, then why the hell have we collectively decided this is the new way to work when historically the approach has been to automate and abstract tedium so we can focus on what matters?

The people selling you the future of work don’t necessarily know better than you.

  • I think some people have more fun using LLM agents and generative AI tools. Not my case, but you can definitely read a bunch of comments from people using the tools and having fun/experiencing a state of flow like they have never had before.

    • >I think some people have more fun using LLM agents and generative AI tools

      I think I'm one of them

      The rate at which I can explore new paths, or revisit old ones with a new perspective, has _exploded_ and I love it

      But then I'm the kind of person who could spend hours on Wikipedia going from one page to the next, so that might have something to do with it

      There's just so much to learn, I'm in my element

      (Though I use agents mostly in Ask mode, or I manually review every line of code in Agent mode and never commit anything I don't understand)

    • I definitely agree with you there. I contracted with a company that had some older engineers who were in largely managerial roles who really liked using AI for personal projects, and honestly, I kind of get it. Their work flow was basically prompt, get results, prompt again with modifications, rinse and repeat, it's low effort and has a nice REPL-like loop. Paraphrasing a bit, but it basically re-kindled the joy of programming for them.

      Haven't gotten the chance to ask, but I imagine managing a team of AI agents would feel a little too much like their day job, and consequently, suck the fun out of it.

      That said, looking back, I think the reason generative AI is so fun for so many coders is that programming has become unnecessarily complex. I have to admit, programming nowadays for me feels like a bit of a slog at times because of the sheer effort it can sometimes take to implement the simplest things. It doesn't have to be that way, but I think LLM copy-paste machines are probably the wrong direction.

    • I think the majority of people I've worked with who have the title of "Software Engineer" do not like coding. They got into it for the money/career, and dream of eventually moving out of coding into management. I can count on one hand the number of coders I've met who like coding.

    • It's a different kind of fun for me.

      I've been enjoying seeing my agents produce code while I am otherwise too busy to program, or seeing refined prompts & context engineering get better results. The boring kinds of programming tasks that I would normally put off are now lower friction, and now there's an element of workflow tinkering with all these different AI tools that lets me have some fun with it.

      I also recently programmed for a few hours on a plane, with no LLM assistance whatsoever, and it was a refreshing way to reconnect with the joy of just internalizing a problem and fitting the pieces together in realtime. I am a bit sad that this kind of fun may no longer be lucrative in the near future, but I am thankful I got to experience it.

  • I’ll be that voice I guess - I have fun “vibe coding”.

    I’m a professional software engineer in Silicon Valley, and I’m fortunate to have been able to work on household-name consumer products across my career. I definitely know how to do “real” professional work “at scale” or whatever. Point is, I can do real work and understand things on my own, and I can generally review code and guide architecture and all that jazz. I became a software engineer because I love creating things that I and others could use, and I don’t care about “solving the puzzle” type satisfaction from writing code. In engineering school, software had the fastest turnaround time from idea in my head to something I could use, and that’s why I became a software engineer.

    LLM-assisted coding accelerates this trend. I can guide an LLM to help me create things quickly and easily. Things I can mostly create myself, of course, but I find it faster for a whole category of easy tasks like generating UIs. It really lowers the “activation energy” to experiment. I think of it like 3D printing, where I can prototype ideas in an afternoon instead of a long weekend or a few weeks.

    • >because I love creating things that I and others could use, and I don’t care about “solving the puzzle” type satisfaction from writing code.

      Please don't take offense to this, but it sounds like you just don't like building software? It seems like the end goal is what excites you, not the process.

      I think for many of us who prefer to write code ourselves, the relationship we have with building software is for the craft/intellectual stimulation. The working product is cool of course, but the real joy is knowing how to do something new.

      3 replies →

    • As a thought experiment, do you think it would be just as fun if you were given access to an infinite database of apps, and you were able to search through the database for an existing app that suit your needs, and then it gave it to you?

      Or would it no longer be fun, because it no longer feels like creating?

      2 replies →

I feel the same way. It also appears to be a lot more difficult to actually find jobs, though that's probably just the general state of the job market and less specifically AI related. All of it is thoroughly discouraging, demotivating, and every week this goes on the less I want to do it. So for me as well it might be time to try to look beyond software, which will also be difficult since software is what I've done for all my life, and everything else I can do I don't have any formal qualifications for, even if I am confident I have the relevant skills.

It's not even just that. Every single thing in tech right now seems to be AI this, AI that, and AI is great and all but I'm just so tired. So very tired. Somehow even despite the tools being impressive and getting more impressive by the day, I just can't find it in me to be excited about it all. Maybe it's just burnout I'm not sure, but it definitely feels like a struggle.

Keep your head up; the gravy train is not gonna run forever, and they will need serious engineers to untangle the piles of bullshit created in these past few years.

But also, yes, look into moving into a different field. Professional software engineering is gonna be infected with AI bullshit for a long while. Move into a field where hand-crafted code can make a difference, not one where you're paid by the lines committed or have to compete with "vibe coding" KPIs.

  • I don't really agree. The writing is on the wall, if not now then in 2 or 4 years. I arrive at this view not so much based on the capabilities of the tools right now, but based on the property of software being verifiable, which, like mathematics, makes it amenable to synthetic data pipelines, with only relatively small remaining details needing to be worked out (such as how to endow architectural taste). This is not nearly the first industry where 'artisans' have initially been augmented by innovation, only to eventually be replaced by it, which in my view will likely occur within my own career-span.

    • Software is verifiable, but not by other programs. Also, software is a solution to a problem, but the problem and the solution properties often don’t exist in the code.

      Software and data are a bit soup where the only thing you truly need is the Turing machine. Programming languages, file formats, protocols, and encodings are constructs that are useful because of their general applicability, not because of their intrinsic aspects.

      The domain expertise is still what’s important, and code craftsmanship was the ability to create something that matches it closely enough that the cost of changes stays minimal.

    • > property of software being verifiable

      Software is verifiable given a specific test oracle. There are however many problems where providing a correct test oracle is at least as hard as solving the problem itself.

      If you’ve ever worked on projects with “Model Based Systems Engineering” you’ll have felt this pain.

    • While afraid that we developers will eventually be automated away — as I have bills to pay — I only need to ask the JetBrains AI Assistant for help to understand why that won't happen in my ‘career-span’.

      It's not a diss on JetBrains; their assistant is good enough that I've paid for it for a few months. But ask it anything a tad more complex and it becomes a code review for a PR that you begin to question in its entirety. I'm not familiar with CSS Grid, as I stopped doing CSS when flex was becoming popular, but I have to say none of the models managed what I wanted. They kept proposing solutions with an arrogant confidence that this must work. When I pointed out that it didn't, they'd look at the codebase and find something else that was the problem. When I asked for help with a script for an Alpine box, it was very assertive that systemd-based solutions should work. How can you get that wrong?

      I imagine the code laundering will eventually get far enough that you can copy-paste someone else's project fully baked, and then the LLM will truly shine. But for building something piece by piece, I haven't gotten good results yet. The Assistant so far has been most useful for writing unit tests, HTML, or getting a decent web search within the IDE.

      I wonder if paying for Kagi wouldn't make for better search, and then I'd find some tool that writes unit tests based on your code. It really does feel like some people are being very generous about how magical these things are, because I'm not getting the magic at all.

  • Hand-crafted code? This isn't some rich downtown store to fool old rich people.

    Code is code. If it works, nobody gives a shit. The market will adapt to be fault-tolerant. Look at all the value created by JavaScript.

    Also, FYI, I am writing some of the most efficient code I've ever written using AI.

If you're genuinely already good at coding, use the LLM to go horizontal into other complementary verticals that were previously too expensive to enter. Do the same thing that the other professions would do unto yours.

As an example, I would never have considered learning to use Blender for 3D modeling in a game before having access to an LLM. The ability to quickly iterate through plausible 3D workflows and different design patterns is a revelation. Now, I can get through some reasonably complex art pipelines with a surprising amount of confidence. UV mapping I would never have learned without being able to annoy one of OAI's GPUs for a few hours. The sensation of solving a light-map baking artifact on a coplanar triangle, based upon principles developed from an LLM conversation, was one of the biggest wins I've had in a long time.

The speed with which you can build confidence in complementary skills is the real super power here. Clean integration of many complex things is what typically brings value. Obsession with mastery in just one area (e.g. code) seems like the ultimate anti-pattern when working with these tools. You can practically download how to fly a helicopter into your brain like it's the matrix now. You won't be the best pilot on earth, but it might be enough to get you to the next scene.

If it's any consolation, I do think the non-technical users have a bigger hill to climb than the coders in many areas. Art is hard, but it is also more accessible and robust to failure modes. A developer can put crappy art in a game and ship it to steam. An artist might struggle just to get the tooling or builds working in the first place. Even with LLM assistance there is a lot to swim through. Getting art from 5% to 80% is usually enough to ship. Large parts of the code need to be nearly 100% correct or nothing works.

  • Thanks for this, I like your idea about breaking into areas I don't have experience with. E.g. in my case I might make a mobile app, which I've never done before, and in theory it should be a lot easier with Claude than it would've been with just googling and reading documentation. Although I did kind of like that process of reading documentation and learning something new, you can't learn everything; you only have so much time on this planet.

    • > Although I did kind of like that process of reading documentation and learning something new, you can't learn everything; you only have so much time on this planet.

      I actually enjoy reading the documentation more these days, because I am laser focused on what I want to pull out of it after seeing the LLM make a suspicious move.

  • I can confirm this. My data point: I was mostly a web developer, but using this "vibe" tooling I am making my own hardware board and coding for embedded, which includes writing drivers from datasheets, doing SIMD optimizations, implementing newer research papers in my code, etc.

You've put how I feel about it into words quite well. On top of not really liking babysitting an AI, I'm also very afraid of the way this whole AI coding business normalizes needing an account with some nebulous evil empire to even be able to do your work. Brrr.

100%. Imagine how young people feel. They're still trying to figure things out, their parents and teachers are just as clueless as they are, and at the same time the expectations of them are infinitely higher. “You’re pretty good, but ChatGPT is still better. Try harder.”

Have you ever interacted with any "vibe"-based systems of agents in a production environment? Beyond just cool demos online?

My experience with them is they are fragile monstrosities, that are only permitted to exist at all because leadership is buying into the same hype that is irrationally propping up the companies running the models that make these things possible.

To be clear, my experience hasn't been that I don't like them, it's that they don't really work at all. They're constantly under development (often in the dark) and when a ray of light is cast on them they never successfully do the thing promised.

Cleaning up the messes left behind by these has my skills feeling more valuable than ever before.

I can relate. I love programming and building things; it gives a different kind of rush when you finally figure something out. I've done vibe coding and don't enjoy it at all. I always thought my love for coding gave me an edge over other engineers who just want to get the job done. Now it's holding me back, and I'm not sure if I should continue working in this field or start doing woodworking or something.

  • I still do all the stuff by hand, but I ask the AI to review my work, provide suggestions, and occasionally write the tests (especially if it points out a bug that I disagree with). It's really good at pointing out typos (accidentally using the wrong variable of the same type, and stuff like that) that are also traditionally hard to spot during review.
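
    For example, the kind of same-type mix-up it reliably flags looks like this (a hypothetical snippet, not from any real codebase):

    ```python
    # Both parameters are floats, so nothing in the type system complains and
    # the eye slides right past it in review - but an LLM reviewer will
    # usually catch that `height` is never used.
    def rect_area(width: float, height: float) -> float:
        return width * width  # bug: should be width * height
    ```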

Do not worry. I am mentoring a young engineer on my team, and it is painfully hard to get him to improve his code, because it works. It is badly structured, with lots of small "impedance mismatches" and lots of small security issues, all that in 3 Python files.

I have a team of 10 engineers, and the quality of the code they produce together with the LLM of the day correlates ever more strongly with their experience.

My impression over the past 6 months - before we had no "official" access to LLM, is that they increase the gap between junior and experienced developers.

Note that this is my limited impression from a team of 10 engineers. This matches with Simon's feeling in a good way for you!

You were never paid to type. You were paid to solve problems, and a big part of that is being able to ask the right questions and frame the problems well. The rest were just tools.

There are exceptions of course - where you need to squeeze wonders from the hardware - but the majority of dev work boils down to understanding the problem and finding the right answers.

  • You say this because you are on HN, very senior and/or living in a bubble.

    In the vast majority of programming jobs out there you are not paid to solve problems: you are told very clearly what to do, how to do it and what technology you have to use for the job.

    People don't hire analysts; they hire "Java programmers".

    • > how to do it

      If you've ever led a team, you know how much more valuable people are when they don't need to be told how to do things, and even more so when they don't need to be told what to do! Having to explain the "how" in detail can be a really big time sink, and it's only worth it if you are training someone to level up.

    • The thing is that the poster I responded to is also all three of those things. And I am just pointing out that his job was never to keep up with the frameworks.

People keep comparing LLMs to automated looms, but I find them more comparable to cruise control than autopilot.

I've been working on a character sheet application for a while, and decided to vibe-code it with Spec-kit to help me write up a specification, and for things I know it's been great. I tried using Claude to make it into a PWA (something I don't know very well) as an experiment, and I've found the nanosecond the model strays out of my experience and knowledge everything goes straight to Hell. It wraps my codebase around a tree as if I'm not paying attention while driving.

It's a tool you'll have to learn to use, but I can say with absolute confidence that it's no replacement for actual skills; if anything, it highlights the gulf between people who know what they're doing and people who don't, for better and worse. It sacrifices some of the 'code under your fingers' feeling for management tasks, which I personally really like, as I've always wanted to document/test/code-review/spec things out better, and I now understand the pain of people who'd rather not do that sort of thing.

https://github.com/github/spec-kit

  • The difference is that you can trust cruise control to do whatever limited job it knows how to do; you can't trust an LLM to do anything. That makes it, I think, hard to compare to anything we're used to (happily) working with.

  • Cruise control is a useful technology that, once you learn to use it, becomes automatic (somethingsomething pun something). LLMs, on the other hand... well, if you like playing chess with pieces and a board made out of smoke (to paraphrase Jerry Seinfeld), sure, you'll probably figure it out... some day...

  • I don't know... everywhere I look, I keep seeing people promising that agent-based tools can solve all these problems and handle full, project-level tasks.

    • Those same people have large equity stakes, or are in the surrounding network of companies that depend on AI being successful.

My approach is to just tune out whenever I hear about this stuff.

I don't want it, I don't use it, I carry on as if it never existed, and they still pay me a lot.

If I really need to use agents some day I will bite the bullet, but not today.

Literally all I use LLMs for is to ask ChatGPT about some dumb thing or two instead of asking StackOverflow as I did 5 years ago. Works for me.

It kinda feels like you turn from a software engineer into an offshoring manager.

Offshoring software development means letting lower-paid software developers from somewhere far away do the actual programming, but they have a very different culture from yours, and they typically don't share your work context and don't really have a feel for how the software is used, unless you provide that.

Now we're offshoring to non-sentient, mostly stateless instances of coding agents. You still have to learn how to deal with them, but instead of learning about a real human culture and mindset, you're learning about something that could totally change with the next release of the underlying model.

My rule of thumb, and it's likely not the industry standard, is that if I couldn't maintain the code should all AI disappear, I don't use the code. I am able to tackle the impostor syndrome that sometimes hits when I touch things that are new or unknown to me by asking an LLM to give me sources and reasons, and even explain it like I'm a five-year-old.

The LLM will not save you when everything is on fire and you need to fix things; the context window is simply not big enough. It could be your last change, or it could be a change from six months ago that is lost in the weeds.

Come to game dev. I've yet to see anyone make anything good with AI.

Like, where are all the amazing vibe-coded games we were promised? These guys should be eating my lunch, but they're not.

  • There are a ton of them in game dev already, but they produce unfun games, so you don’t hear about them. The hard part of game dev is designing actually fun experiences.

I can't even begin to imagine how a 12-year-old who discovered how empowering it is to bend the machine to your will through code feels when, over time, they realize that their dream career has been reduced to being an LLM middleman.

  • Now imagine a recent graduate, deep in debt, seeing all opportunities to pay off that debt vanishing before their eyes.

>I used to have this hard-to-get, in-demand skill that paid lots of money and felt like even though programming languages, libraries and web frameworks were always evolving I could always keep up because I'm smart.

Tools always empower those with knowledge more than those without.

I feel like the rug was pulled from under me.

I'm currently looking into other professions, but the future looks bleak for most kinds of knowledge work.

Don't worry, it's probably just impostor syndrome. Your development skills are still relevant. Think of agents as junior developers who assist you with coding tasks, and whom you constantly need to mentor, review, and correct.

  • Can we all agree that "mentoring" LLMs is actually a waste of time, please?

    The reason we invest this time in junior devs is so they improve. LLMs do not.

    • I had a fascinating conversation about this the other day. An engineer was telling me about his LLM process, which is effectively this:

      1. Collaborate on a detailed spec

      2. Have it implement that spec

      3. Spend a lot of time on review and QA - is the code good? Does the feature work well?

      4. Take lessons from that process and write them down for the LLM to use next time - using CLAUDE.md or similar

      That last step is the interesting one. You're right: humans improve, LLMs don't... but that means it's on us as their users to manage the improvement cycle, treating every feature iteration as an opportunity to improve how they work.

      I've heard similar things from a few people now: by constantly iterating on their CLAUDE.md - adding extra instructions every time the bot makes a mistake, telling it to do things like always write the tests first, run the linter, reuse the BaseView class when building a new application view, etc - they get wildly better results over time.
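
      For illustration, the kind of CLAUDE.md fragment that accumulates from this process might look something like the sketch below. The individual rules are invented examples for the sketch (the BaseView instruction is the one mentioned above), not anyone's real config:

          # CLAUDE.md (illustrative sketch)
          - Always write the tests first, then the implementation.
          - Run the linter and the full test suite before declaring a task done.
          - Reuse the BaseView class when building a new application view.
          - Never edit generated files by hand; rerun the generator instead.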

    • > Can we all agree that "mentoring" LLMs is actually a waste of time, please?

      Sorry, we can't. While it's true that you can't really modify the underlying model, updating your AGENTS.md (or whatever) with your expected coding style, best practices, common gotchas, etc. is a type of mentoring.

    • > LLMs do not

      Maybe not in the session you interact with. However, we are in a 'learning' phase now, and I'm confident that enough usage of AI coding agents is tracked and analyzed by their developers; this feedback cycle can, in theory, produce newer and better generations of AI coding agents.

    • "AI" has been so inconsistent. On one day it anticipates almost every line I am coding, the next day it's like we've never worked together before.

  • Junior developers, or perhaps even better, outsourced developers: there's a big segment of software engineering that involves writing requirements and checking the work of an external software development company, and many companies are heavily dependent on it (having outsourced part of their core business, e.g. mainframes, SAP, whatever).

  • You think they're still gonna be juniors 5 years from now? A couple of years ago they could barely even write a function.

    • No, I don't think they will always be junior developers. Obviously there will come a day when they surpass humans.

      However, the progress doesn't look linear with the current technology, and I don't expect to see the same big jump in the next 5 years as we've seen in the last 5 unless we discover a disruptive, new technology.

      This can also be observed by comparing models with ~3B, ~30B, and ~300B parameters. You can see a huge performance boost when going from 3B to 30B, but we don't see the same when going to 300B. Simply adding 10x more RAM and GPU power brings diminishing returns.

    • The gains seem to be leveling off to me, but I'm not using them as much as others are.

      It still seems like people are saying the same things they said when the first Claude came out.

      I can get it to do stuff if I'm very specific, stand over its shoulder, know exactly what I want, and break it down into small chunks.

      The thing for me is... at that point, writing the code is the least time-consuming part of the process half the time.

      For things like translating JS code with JSDoc annotations to TypeScript, I may give this a go. But for regular development work I'll probably skip it.

      That being said... no one lets me code anymore. It's just Confluence docs with Figma architecture diagrams these days. I'd probably just introduce SQL injection vulnerabilities if they let me near an editor now.

As for staying in a flow state while managing multiple things: I've found this is a useful skill to train even without AI, if you have a workplace with lots of interruptions and shifting priorities.

I've found two things useful:

1) Keep a personal work log where you, in short bullet points, document the progress of the task you were last working on and keep track of how many parallel tasks are currently going on. If you can match it with Jira tickets, all the better, but as this is for personal use only, you can also add tasks that are not tracked in Jira at all.

2) If you cannot avoid context switches, make them explicit: instead of trying to hold 3 tasks in your head at the same time, decide consciously whether you want to switch what you're currently working on. If yes, take a few minutes to "suspend" the current task by saving and committing everything (as WIP commits if necessary) and writing everything you need to remember into the worklog (see the sketch below).
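
For illustration, a "suspend" entry in such a worklog might look like this; the format, ticket numbers, and branch name are all made up for the sketch:

    2025-01-14
    - PROJ-123 rate limiting: middleware done, tests still red on the Redis mock.
      SUSPENDED: WIP commit on branch feat/rate-limit; resume at the failing test.
    - PROJ-98 invoice export: blocked, waiting on review.
    - (not in Jira) flaky CI upload job: narrowed down to a timeout.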

Your skillset will be even more in demand in a few years, when everybody will be looking for actual engineers to clean up the mess LLMs created.

Be mindful of the context these posts are created in. Don't take the current echo chamber to heart.

For decades now, we have been trying to lower the barrier to entry in software development. We made Python, web frameworks, and mobile development so accessible that you can become a software developer by completing a short online boot camp. There are a lot of software developers posting here now who, 20 years ago, would not even have considered this job because it would have been way beyond their abilities.

This forum is the equivalent of a forum about civil transportation that gathers both airline pilots and Uber drivers. Technically, they both do the same work. And just as the Uber drivers in that forum would outnumber the airline pilots and skew the topics toward their experience, here we get pushed topics about new frameworks and AI-assisted tools.

When I started working professionally 20 years ago, you could only get a job at big companies working on big projects; no one else could afford the cost of custom software. Today we have reduced development costs, and we have a huge pool of potential customers who can now afford the services of software developers. Web shops, gambling sites, porn sites... this is the majority of software development work today: boring, repetitive tasks of gluing imported modules together.

Serious development work didn't disappear. It is just not talked about here. There is still a need for people who know what they are doing.

My advice: if you want a satisfying development career, steer clear of the latest hypes and don't blindly follow the techbro lemmings. And most importantly, don't take career advice from anyone who finds his job so unsatisfying and tedious that he is trying to make AI do it for him. That's a major red flag.

I think the key is to remind yourself that an engineer is supposed to solve business problems, so use these new tools to be more effective at doing so. An analogy: people used to spend tons of time building out web server code, but something like Django added tremendously useful abstractions and patterns, which allowed people to more productively add business value.

LLMs are very much like the WYSIWYG web editors of the early 2000s, such as Dreamweaver. They provide a human interface layer that abstracts away the coding, and neither does a particularly good job of it. When WYSIWYG first became a thing, there was talk that it would upend the web development industry. It did not.

  • One of the main points of my article was meant to be that LLMs don't abstract away the code, at least not if you're trying to use them for professional software development work as opposed to vibe coded prototypes and toy projects.

You shouldn't be discouraged. Now is the best time to create software. You have an advantage that very few people have.

It's the industry's own fault that it is in the position it is in right now, and it will keep shifting and changing, so embrace it. I only wish I had your experience building software in a professional environment.

You can literally build anything right now if you have the experience. I personally can't tell when the models are hallucinating, given my lack of experience writing and understanding code. I always wanted to pivot into the industry but couldn't: hiring practices are brutal, internships are non-existent, junior roles are what I think senior roles used to be, and the whole HR process is, I don't know how to put it.

By using LLMs I can now build UIs, build functionality, iterate over design choices, learn about database design, etc. Hopefully I will escape tutorial hell and have my own working full-stack web app soon.

Pivot to creating and then selling your own product.

> It makes me want to move into something completely different like sales

I'm feeling the same. The moves I'm considering are:

1. Landscaping
2. Carpentry
3. Small-scale agriculture

(All made easier by a cushion of investments that are most of the way to passive income, so the new thing doesn't really have to make me that much money.)

  • My father runs a commercial landscaping company with 15 employees. His truck fleet insurance went up 35% just this year. Property taxes on the light-industrial facility he operates out of went up 55% last year. All of his commercial clients are cheaping out on all the little things that used to make extra money (pine straw, seasonal flowers, etc.). He’s having to deal with poorly educated staff who are constantly breaking equipment and doing stupid, dangerous things. He’s so burned out by it all, and by the fact that his actual salary is less than several of his top staff’s, that he’s thinking about just shutting it all down. When I was working as a software developer, my income was probably twice his, without any of the risk or headache.

    • I hear you, man. I didn't intend to paint a rosy image of hard-working landscapers. What I meant was: do it more like a hobby that pays a little on the side. It doesn't have to be much, just enough to pay some bills and let me do something I like. Because supervising a bunch of immature LLM agents is not something I'd like to do all day; I'd rather trim trees and plant flowers on the cheap.

    • No one is claiming that any of the alternatives are better jobs than software engineering has been for the last 20 years.

      We don't live in the last 20 years anymore and software engineering is either becoming a different (worse) job or simply vanishing.

Yes, and especially with new developments like "$Framework now has Signals!", my thought is "I don't really care, since in a few years it won't matter anyway". I don't see how I can build this lower-level knowledge while almost never actually using it. I don't even want to think about job interviews after a year+ of vibing and then being asked how RxJS works.

I'm preparing mentally for my day job to stop being fun (it still beats most other occupations, I guess), and I'm keeping my side/hobby projects strictly AI-free, to keep my sanity and prevent atrophy.

I just hope we'll get out of this weird limbo at some point, where AI is too good to ignore but too unreliable to be left alone. I don't want to deal with two pressures at work.

I feel the opposite. I get to sit down and think about problems, expressing them in words as best I can, and then review the code, make sure that I understand it and it works as it should, all in a fraction of the time it used to take me. I like that I don't have to google APIs as much as I did before, and instead I can get a working thing much faster.

I can focus on actually solving problems rather than on writing clever and/or cute-looking code, which ironically also gives me more time later to over-optimize stuff at my leisure.

  • I feel this way with some of the work I've pushed through an LLM, but part of the time I'm left wondering what kind of Mickey Mouse problems people are working on where they are able to form up tidy English descriptions in complicated domains to capture what they're trying to achieve.

    If I have a clear idea of some algorithm I am trying to write, I have a concise method for expressing it already, and it ain't English.

    I suppose the other thing I would say is that reading code and understanding it is definitely not the same, in depth of understanding, as writing it yourself, and I think this notion that reviewing the outputs ought to be enough fails to capture the understanding that comes with actually crafting the code. You may not think this matters, but I'm pretty sure it does.

I just wish AI made compilers smarter in a provably correct way instead of a lame attempt at making programmers smarter.

I want tools that are smarter, but still 100% correct at what they do.

Any tools/languages that address this gap?

  • Something I really appreciate about LLMs is that they make a whole bunch of much more sophisticated reliable tooling acceptable to me.

    I've always been fascinated by AST traversal and advanced refactoring tools - things like tree-sitter or Facebook's old codemod system https://github.com/facebookarchive/codemod

    I never used them, because their learning curve was steep enough that I never found the time to climb it to the point where I could start solving problems.

    Modern LLMs know all of these tools, which flattens that curve for me - I find it much easier to learn something like that if I can start from semi-working examples directly applicable to what I'm trying to do.
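
    For example, a minimal starting point with the Python tree-sitter bindings might look something like the rough, untested sketch below (assuming the tree-sitter and tree-sitter-python PyPI packages; the snippet and names are illustrative):

        # pip install tree-sitter tree-sitter-python   (py-tree-sitter >= 0.22)
        from tree_sitter import Language, Parser
        import tree_sitter_python as tspython

        # Build a Python parser from the bundled grammar.
        parser = Parser()
        parser.language = Language(tspython.language())

        # Parse a tiny snippet into a syntax tree.
        tree = parser.parse(b"def add(a, b):\n    return a + b\n")

        # Walk the tree and print the name of every function definition.
        def print_function_names(node):
            if node.type == "function_definition":
                name = node.child_by_field_name("name")
                print(name.text.decode("utf8"))
            for child in node.children:
                print_function_names(child)

        print_function_names(tree.root_node)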

I have a relative who's in her 70s and used to be a coder. She told me she gave up coding when people introduced computers with terminals. She was used to filling out punch cards and felt like the way she worked, although constantly evolving, was something she could keep up with. When the new stuff came, with virtual programs and you just typing on a computer and no way to properly debug by shuffling the cards around, she ended up moving to something completely different...

Don't worry about it. Don't let anyone else tell you how best to use AI; use AI in a way that suits YOU, and then it is so much fun. I would go crazy if I had multiple streams simultaneously working on stuff that needs constant supervision (it would be different if I could trust them to do 100% of what I intend), but AI is still very helpful in other ways (research, exploration, and writing tests).

This line really hit me. I used to think that mastering one advanced skill would be enough to rely on for life, but it seems that’s no longer the case.

I’m sorry you feel that way, but that’s a surprising experience; I find flow states easier to reach when managing agents than when actually coding. Each to their own, of course. Is it possible you were reaching the end of your tether in the coding space anyway? Feel free to slap that accusation down if it’s unfair.

I wonder how this will affect the burnout rate among IT workers in the long term, which was already quite high. I guess a lot of people force themselves (or are forced by their company) to use LLMs for fear of being left behind, even if they don't enjoy the process, but sooner or later the fatigue will catch up with them.

> It makes me want to move into something completely different like sales

Aaand that's startup founder life :)

Intense multitasking, needing to accept a lower engineering quality bar, and ignoring scale problems because you don't know if anyone will actually buy the thing you're building yet.

Engineering something that you know you'll redo in 1 month is very different from engineering something that you intend to last for 5+ years, but it's still a fun challenge picking the right tradeoffs and working under different constraints.

It's taken programming from waiting on full compilations, to being incrementally compiled and productive, and back to waiting on the compiler all over again.

The experience you have is something most youngsters won't ever get, because they won't have the time. You've become more valuable than you used to be, because you know exactly what works when and what doesn't. The hard part is finding the joy in making agents achieve what you want instead of building it yourself. I think it actually isn't too hard once you get up to speed with managing multiple agents; efficiently juggling them feels like an art performance sometimes.

This is going to sound harsh, but welcome to the real world, I guess. Being in IT is pretty much the only job I know of today that is stable, pays well, is enjoyable, feels like it affects the world, and is personally engaging and challenging. I'm not in IT myself (it's just a hobby of mine), and your comment sounds to me like "Well, I had absolutely everything, and I still do, but now it's not as fun anymore!"

Sales isn’t easy either!

  • Well put. Software engineering is so much better, assuming you are comfortable in the role, for types who want to punch a clock doing something they don't hate.

    Sales is the definition of high-pressure, and your output is always threatened by forces beyond your control. It doesn't consistently reward intelligence or any particular skill other than hustle.

    There's nothing like sw dev that lets you sit at your desk and ignore the outside world while getting paid for delivering biz-critical milestones. Even creatives don't have this kind of potential autonomy.

At least you (or your employer) won't have to keep paying a shit ton of money for AI subscriptions to stay productive after the AI bubble bursts.

To me, genAI feels like a neurally implanted exoskeleton.

It does awesome things in demos. It has real uses.

But

it requires a long training period, and when you make mistakes with it, they are big mistakes that take a long time to fix.

Honestly, based on what you've written, I don't think you would enjoy sales any more.

It is just a new way of coding. And it's exactly what the blog post said: if you are experienced, you will benefit the most, as the AI agent will make mistakes similar to a junior's and you will be able to recognize them.

But indeed, the fun part of coding a couple of routines is gone. That is history.