I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.
I can't empathize with the complaint that we've "lost something" at all. We're on the precipice of something incredible. That's not to say there aren't downsides (WOPR almost killed everyone after all), but we're definitely in a golden age of computing.
Certainly not. Computers are still magic, but much of that magic is now controlled and being restricted by someone other than you.
Today most people's only computer is a cell phone, which is heavily locked down, designed for media consumption, and built to collect and give away every scrap of their personal/private data. Most people's desktop computers aren't much better. They are continuously used by others against the interests of the people who paid for them, sometimes explicitly keeping them from doing things they want or limiting what they can install.
People are increasingly ignorant of how computers work in ways that were never possible when you had to understand them to use them. SoCs mean that users, and even the operating system they use, aren't fully aware of what the devices are doing.
People have lost control of the computers they paid for and their own data. They now have to beg a small number of companies for anything they want (including their own data on the cloud). We're heading toward a future where you'll need to submit to a retinal scan just to view a website.
Computing today is more adversarial, restricted, opaque, centralized, controlled, and monitored than it has been in a very long time. "My computer talks to me" is not making up for that.
What you're saying might be true, but it's also a choice to delegate responsibility to someone other than yourself. I'm not saying that the adversarial state of computing is ok, just that most people don't care, or don't like the alternatives.
Even as someone concerned with the issues you mention, the shift happening now feels pretty magical to me. I can only imagine how non-technical people must feel.
> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before.
Maybe they made us feel magic, but actual magic is the opposite of what I want computers to be. The “magic” for me was that computers were completely scrutable and reason-able, and that you could leverage your reasoning abilities to create interesting things with them, because they were (after some learning effort) scrutable. True magic, on the other hand, is inscrutable, it’s a thing that escapes explanation, that can’t be reasoned about. LLMs are more like that latter magic, and that’s not what I seek in computers.
> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.
I always preferred the Star-Trek-style ship computers that didn’t exhibit personality, that were just neutral and matter-of-fact. Computers with personality tend to be exhausting and annoying. Please let me turn it off. Computers with personality can be entertaining characters in a story, but that doesn’t mean I want them around me as the tools I have to use.
I have no idea what everyone is talking about. LLMs are based on relatively simple math, and inference is much easier to learn and customize than, say, Android APIs. Once you do, you can apply familiar programming-style logic to messy concepts like language and images. Give your model a JSON schema like "warp_factor": Integer if you don't want chatter; that's way better than the Star Trek computer could do. Or have it write you a simple domain-specific library on top of the Android API that you can then program from memory like old-style BASIC, rather than having to run to Stack Overflow for every new task.
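To make that concrete, here is a minimal sketch of a schema-constrained call, assuming an OpenAI-style structured-output API; the model name, schema fields, and client setup are illustrative, not a specific recommendation:

```python
# A minimal sketch of schema-constrained LLM output, in the spirit of the
# parent comment. Assumes an OpenAI-style structured-output API; model name
# and schema details are illustrative.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

schema = {
    "type": "object",
    "properties": {
        "warp_factor": {"type": "integer", "minimum": 1, "maximum": 9},
    },
    "required": ["warp_factor"],
    "additionalProperties": False,
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Set a reasonable cruising speed."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "helm_order", "schema": schema, "strict": True},
    },
)

order = json.loads(resp.choices[0].message.content)
print(order["warp_factor"])  # an integer: no chatter, no prose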
The golden age for me is any period where you have fully documented systems.
Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did.
And software that’s open and can be modified.
Instead what we have is:
- AI models which are little black boxes, beyond our ability to fully reason about.
- perpetual subscription services for the same software we used to “own”.
- hardware that is completely undocumented to all but a small few who sign an NDA beforehand
- operating systems that are trying harder and harder to prevent us from running any software they haven’t approved because “security”
- and distributed systems becoming centralised, such as GitHub, CloudFlare, AWS, and so on and so forth.
The only thing special about right now is that we have added yet another abstraction on top of an already overly complex software stack to allow us to use natural language as pseudocode. And that is a genuinely special breakthrough, but it's not enough by itself to overlook all the other problems with modern computing.
My take on the difference between now and then is "effort". All those things mentioned above are now effortless, but the door to "effort" remains open as it always has been. Take the first point, for example. Those little black boxes of AI can be significantly demystified by, say, watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself. We used to purchase software or write it ourselves, before it became effortless to get it for free in exchange for ads, and then for a subscription once we grew tired of ads or were tricked by a bait and switch. You can also argue that it has never been easier to write your own software than it is today.
Hostile operating systems. Take the effort to switch to Linux.
Undocumented hardware? Well, there is far more open-source hardware out there today, and back in the day it was fun to reverse engineer hardware; now we just expect it to be open because we can't be bothered to put in the effort anymore.
Effort gives me agency. I really like learning new things and so agentic LLMs don’t make me feel hopeless.
Have you tried using GenAI to write documentation? You can literally point it to a folder and say, analyze everything in this folder and write a document about it. And it will do it. It's more thorough than anything a human could do, especially in the time frame we're talking about.
If GenAI could only write documentation it would still be a game changer.
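For the curious, the whole "point it at a folder" workflow fits in a few lines. A rough sketch, assuming the same OpenAI-style chat API as elsewhere in this thread; the file filtering and model name are illustrative, and a real tool would chunk files and budget tokens rather than concatenating everything:

```python
# A rough sketch of "analyze everything in this folder and write a document
# about it". Naive on purpose: no token budgeting, no binary detection.
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def document_folder(folder: str) -> str:
    # Concatenate every recognized text file under the folder.
    corpus = []
    for path in sorted(Path(folder).rglob("*")):
        if path.is_file() and path.suffix in {".py", ".md", ".txt", ".toml"}:
            corpus.append(f"--- {path} ---\n{path.read_text(errors='replace')}")
    prompt = (
        "Analyze everything in this folder and write a document about it:\n\n"
        + "\n\n".join(corpus)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(document_folder("./src"))
```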
> The golden age for me is any period where you have the fully documented systems. Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did. And software that’s open and can be modified.
I agree that it would be good. (It is one reason why I wanted to design a better computer, which would include full documentation about the hardware and the software (hopefully enough to make a compatible computer), as well as full source code (which can help if some parts of the documentation are unclear, but can also be used to make your own modifications if needed).) (In some cases we have some of this already, but not entirely. Not all hardware and software has the problems you list, although they are too common now. Making a better computer will not prevent such problematic things on other computers, and will not entirely prevent such problems on the new computer design either, but it would help a bit, especially if it is actually designed well rather than badly.)
> perpetual subscription services for the same software we used to “own”.
In another thread, people were looking for things to build. If there's a subscription service that you think shouldn't be a subscription (because they're not actually doing anything new for that subscription), disrupt the fuck out of it. Rent seekers are about to lose their shirts. I pay for e.g. Spotify because there's new music that has to happen, but Dropbox?
If you're not adding new whatever (features/content) in order to justify a subscription, then you're only worth the electricity and hardware costs, or else I'm gonna build and host my own.
Local models exist and the knowledge required for training them is widely available in free classes and many open projects. Yes, the hardware is expensive, but that's just how it is if you want frontier capability. You also couldn't have a state of the art mainframe at home in that era. Nor do people expect to have industrial scale stuff at home in other engineering domains.
> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.
We literally are not, and we'd do well to stop using such hyperbole. The 1980s fantasy was of speaking to a machine which you could trust to be correct with a high degree of confidence. No one was wishing they could talk to a wet sock that'll confidently give you falsehoods and, when confronted (even if it was right), will bow down and always respond with "you're absolutely right".
In some ways, I'd say we're in a software dark age. In 40 years, we'll still have C, bash, grep, and Mario ROMs, but practically none of the software written today will still be around. That's by design. SaaS is a rent seeking business model. But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies. There's no way you'll be able to take a repo from 2026 and spin it up in 2050 without major work.
One question is how AI will factor into this. Will it completely remove the problem? Will local models be capable of finding or fixing every dependency in your 20-year-old project? Or will they exacerbate things by writing terrible code with black-hole dependency trees? We're gonna find out.
> That's by design. SaaS is a rent seeking business model.
Not all software now is SaaS, but unfortunately it is too common now.
> But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies.
Some people (including myself) prefer to write programs without too many dependencies, in order to avoid that problem. Other things also help: some people write programs for older systems which can be emulated, or use simpler, portable C code, etc. There are things that can be done to avoid too many dependencies.
There is uxn, which is a simple enough instruction set that people can probably implement it without too much difficulty. Although some programs might need some extensions, and some might use file names, etc., many programs will work, because the design is simple enough that they will.
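To give a sense of scale, the core of a stack machine in that spirit fits on a page. To be clear, the opcodes below are made up for illustration; they are not uxn's actual encoding:

```python
# A toy stack-machine interpreter, to illustrate why a small fixed
# instruction set like uxn's is easy to reimplement. NOTE: these opcodes
# are invented for this sketch; they are NOT uxn's real opcode table.
def run(program: bytes) -> list[int]:
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]; pc += 1
        if op == 0x01:                      # PUSH: push the next byte
            stack.append(program[pc]); pc += 1
        elif op == 0x02:                    # ADD: pop two, push sum (mod 256)
            b, a = stack.pop(), stack.pop(); stack.append((a + b) & 0xFF)
        elif op == 0x03:                    # DUP: duplicate top of stack
            stack.append(stack[-1])
        elif op == 0x00:                    # BRK: halt
            break
    return stack

# 3 + 4 = 7
print(run(bytes([0x01, 0x03, 0x01, 0x04, 0x02, 0x00])))  # -> [7]
```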
We have what I've dreamed of for years: the reverse dictionary.
Put in a word and see what it means? That's been easy for at least a century. Have a meaning in mind and get the word? The only way to get this before was to read a ton of books and be knowledgeable, or to talk to someone who was. Now it's always available.
The "reverse dictionary" is called a "thesaurus". Wikipedia quotes Peter Mark Roget (1852):
> ...to find the word, or words, by which [an] idea may be most fitly and aptly expressed
Digital reverse dictionaries / thesauri like https://www.onelook.com/thesaurus/ can take natural language input, and afaict are strictly better at this task than LLMs. (I didn't know these tools existed when I wrote the rest of this comment.)
I briefly investigated LLMs for this purpose, back when I didn't know how to use a thesaurus; but I find thesauruses a lot more useful. (Actually, I'm usually too lazy to crack out a proper thesaurus, so I spend 5 seconds poking around Wiktionary first: that's usually Good Enough™ to find me an answer, when I find an answer I can trust it, and I get the answer faster than waiting for an LLM to finish generating a response.)
There's definitely room to improve upon the traditional "big book of synonyms with double-indirect pointers" thesaurus, but LLMs are an extremely crude solution that I don't think actually is an improvement.
Computers did feel like magic... until I read code, thought about it, understood it, and could control it. I feel we're stepping away from that, and moving to a place of less control, less thinking.
I liked programming, it was fun, and I understood it. Now it's gone.
It's not gone, it's just being increasingly discouraged. You don't have to "vibe code" or spend paragraphs trying to talk a chatbot into doing something that you can do yourself with a few lines of code. You'll be fine. It's the people who could have been the next few generations of programmers who will suffer the most.
Glad to see this already expressed here because I wholly agree. Programming has not brought me this much joy in decades. What a wonderful time to be alive.
Good for you. But there are already so, so many posts and threads celebrating all of this. Everyone is different. Some of us enjoy the activity of programming by hand. This thread is for those of us, to mourn.
I have an LLM riding shotgun and I still very much program by hand. It's not one extreme or the other. Whatever I copy from the LLM has to be redone line by line anyway. I understand all of my code because I touch every line of it.
> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.
It has been interesting (and not in a good way) how willing people are to anthropomorphize these megacorporation-controlled machines just because the interface is natural language now.
The original NES controller only contains a single shift register - no other active components.
Today, a wireless thing will have more code than one would want to ever read, much less comprehend. Even a high level diagram of the hardware components involved is quite complex.
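For contrast, the entire protocol of that original controller can be modeled in a few lines. A sketch (not a cycle-accurate emulation, and the class name is mine): the console strobes a latch, then clocks one button bit per read out of the single 4021 shift register:

```python
# A Python model of the NES controller's single 4021 shift register.
# The console strobes a latch line, then reads one button bit at a time,
# in the fixed order A, B, Select, Start, Up, Down, Left, Right.
class NesController:
    ORDER = ["A", "B", "Select", "Start", "Up", "Down", "Left", "Right"]

    def __init__(self, pressed: set[str]):
        self.pressed = pressed
        self.bits: list[int] = []

    def strobe(self) -> None:
        # Latch the current button states into the shift register.
        self.bits = [int(name in self.pressed) for name in self.ORDER]

    def read(self) -> int:
        # Shift one bit out per read; real hardware returns 1 after 8 reads.
        return self.bits.pop(0) if self.bits else 1

pad = NesController(pressed={"A", "Right"})
pad.strobe()
print([pad.read() for _ in range(8)])  # -> [1, 0, 0, 0, 0, 0, 0, 1]
```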
We're on the precipice of something very disgusting. A massive power imbalance where a single company or two swallows the Earth's economy, due to a lack of competition, distribution and right of access laws. The wildest part is that these greedy companies, one of them in particular, are continuously framed in a positive light. This same company that has partnered with Palantir. AI should be a public good, not something gatekept by greedy capitalists with an ego complex.
Nothing meaningful happened in almost 20 years. After the iPhone, what happened that truly changed our lives? The dumpster fire of social media? Background Netflix TV?
In fact, I remember when I could actually shop on Amazon or browse for restaurants on Yelp while trusting the reviews. None of that is possible today.
We have been going through a decade of enshittification.
Yes this is the issue. We truly have something incredible now. Something that could benefit all of humanity. Unfortunately it comes at $200/month from Sam Altman & co.
> I can't empathize with the complaint that we've "lost something" at all.
We could easily approach a state of affairs where most of what you see online is AI and almost every "person" you interact with is fake. It's hard to see how someone who supposedly remembers computing in the 80s, when the power of USENET and BBSs to facilitate long-distance, or even international, communication and foster personal relationships (often IRL) was enthralling, could not think we've lost something.
I grew up on 80's and 90's BBSes. The transition from BBSes to Usenet and the early Internet was a magical period, a time I still look back upon fondly and will never forget.
Some of my best friends IRL today were people I first met "online" in those days... but I haven't met anyone new in a longggg time. Yeah, I'm also much older, but the environment is also very different. The community aspect is long gone.
Even in the 90s there was the phrase "the Internet, where the men are men, the women are men, and the teen girls are FBI agents". It was always the case that you never really knew who or what you were dealing with on the Internet.
I'm from the early 90s era. I know exactly what you're saying. I entered the internet on muds, irc and usenet. There were just far fewer people online in those communities in those days, and in my country, it was mostly only us university students.
But, those days disappeared a long time ago. Probably at least 20-30 years ago.
I'd honestly much rather interact with an LLM bot than a conservative online. LLM bots can at least escape their constraints with clever prompting. There is no amount of logic or evidence that will sway a conservative. LLMs provide a far more convincing fake than conservatives are able to.
> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality
The difference is that the computer only talks back to you in code because you're paying its owners, and you are not one of the owners. I find it really baffling that people put up with this. What will you do when Alphabet or Altman demands 10 times the money out of you for the privilege of their computer talking to you in programming code?
I agree with you with the caveat that all the "ease of building" benefits, for me, could potentially be dwarfed by job losses and pay decreases. If SWE really becomes obsolete, or even if the number of roles decrease a lot and/or the pay decreases a lot (or even fails to increase with inflation), I am suddenly in the unenviable position of not being financially secure and being stuck in my 30s with an increasingly useless degree. A life disaster, in other words. In that scenario the unhappiness of worrying about money and retraining far outweighs the happiness I get from being able to build stuff really fast.
Fundamentally this is the only point I really have on the 'anti-AI' side, but it's a really important one.
The invention of Mr Jacquard ushered in a sartorial golden age, when complex fabrics became easy to produce cheaply, at the expense of a few hours spent punching a deck of cards. But the craft of making tapestries by hand definitely went into decline. This is the situation which the post is mourning.
Frankly, I have my doubts about the utter efficiency of LLMs writing code unattended; it will take quite some time before whatever comes after the current crop learns to do that efficiently and reliably. (Check out how many years passed between the first image-generation demos and today's SOTA.) But the vector is obvious: humans will have to speak a higher-level language to computers, and hand-coding TypeScript is going to be as niche in 10 years as hand-coding assembly is today.
This adds some kinds of fun, but also removes some other kinds of fun. There's a reason why people often pick something like PICO-8 to write games for fun, rather than something like Unreal Engine. So software development becomes harder because the developer has to work on more and more complex things, faster, and with fewer chances to study the moving parts to a comfortable depth.
LLMs have irritated me with bad solutions but they've never hurt my feelings. I can't say that about a single person I know. They're better people than people lol
This is exactly where I am with GenAI. After forty years: blocks of code, repository patterns, factory patterns, threading issues, documentation, one page executive summaries…
It's because of the way that I use the tools, and I have the luxury of being a craftsman, as opposed to a "TSA agent."
But then, I don't get paid to do this stuff, anymore. In fact, I deliberately avoid putting myself into positions, where money changes hands for my craft. I know how fortunate I am, to be in this position, so I don't say it to aggravate folks that aren't.
I'm not going to code by hand if it's 4x slower than having Claude do it. Yes, I can do that, but it just feels bad.
The analogy I like is it's like driving vs. walking. We were healthier when we walked everywhere, but it's very hard to quit driving and go back even if it's going to be better for you.
It's an exciting time. Things are changing, and changing beyond "here's my new JavaScript framework". It's definitely an industry-shakeup kind of deal, and no one knows what lies 6 months, 1 year, 5 years from now. It makes me anxious, seeing as I have a wife and 2 kids to care for and my income is tied to this industry, but it's exciting too.
I didn't imagine I would be sending all my source code directly to a corporation for access to an irritatingly chipper personality that is confidently incorrect the way these things are.
There have been wild technological developments but we've lost privacy and autonomy across basically all devices (excepting the people who deliberately choose to forego the most capable devices, and even then there are firmware blobs). We've got the facial recognition and tracking so many sci-fi dystopias have warned us to avoid.
I'm having an easier time accomplishing more difficult technological tasks. But I lament what we have come to. I don't think we are in the Star Trek future and I imagined doing more drugs in a Neuromancer future. It's like a Snow Crash / 1984 corporate government collab out here, it kinda sucks.
I feel like we've reached the worst age of computing. Where our platforms are controlled by power hungry megacorporations and our software is over-engineered garbage.
The same company that develops our browsers and our web standards is also actively destroying the internet with AI scrapers. Hobbyists lost the internet to companies and all software got worse for it.
Our most popular desktop operating system doesn't even have an easy way to package and update software for it.
Yes, this is where it's at for me. LLMs are cool and I can see them as progress, but I really dislike that they're controlled by huge corporations and cost a significant amount of money to use.
Unfortunately we live in a "vote with your wallet" paradigm where some of the most mentally unhealthy participants have wallets that are many orders of magnitude bigger than the wallet of the average participant.
Honestly I think it's under-engineered garbage. Proper engineering is putting in the effort to come up with simpler solutions. The complex solutions appear because we push out the first thing that "works" without time to refine it.
Dystopian cyberpunk was always part of the fantasy. Yes, scale has enabled terrible things.
There are more alternatives than ever though. People are still making C64 games today, cheap chips are everywhere. Documentation is abundant... When you layer in AI, it takes away labor costs, meaning that you don't need to make economically viable things, you can make fun things.
I have at least a dozen projects going now that I would have never had time or energy for. Any itch, no matter how geeky and idiosyncratic, is getting scratched by AI.
I really am very thankful for @simonw posting a TikTok from Chris Ashworth, a Baltimore theater software developer, who recently picked up LLMs for building a voxel display software controller. And who was just blown away.
https://simonwillison.net/2026/Jan/30/a-programming-tool-for...
Simon doesn't touch on my favorite part of Chris's video though, which is Chris citing his friend Jesse Kriss. This stuck out at me so hard, and is so close to what you are talking about:
> The interesting thing about this is that it's not taking away something that was human and making it a robot. We've been forced to talk to computers in computer language. And this is turning that around.
I don't see (as you say) a personality. But I do see the ability to talk. The esoterica is still here underneath, but computer programmers having a lock on the thing that has eaten the world, being the only machine whisperers around, is over. That depth of knowledge is still there and not going away! But notably too, the LLM will help you wade in, help those not of the esoteric personhood of programmers to dive in & explore.
I retired a few years ago, so I have no idea what AI programming is.
But I mourned when CRTs came out; I had just started programming. But I quickly learned CRTs were far better.
I mourned when we moved to GUIs, I never liked the move and still do not like dealing with GUIs, but I got used to it.
Went through all kinds of programming methods, too many to remember, but those were easy to ignore and work around. I view this new AI thing in a similar way. I expect it will blow over and a new bright shiny programming methodology will become a thing to stress over. In the long run, I doubt anything will really change.
I think you're underestimating what AI can do in the coding space. It is an extreme paradigm shift. It's not like "we wrote C, but now we switch to C++, so now we think in objects and templates". It's closer to the shift from assembly to a higher-level language. Your goal is still the same. But suddenly you're working at a completely new level of abstraction, where a lot of the manual work that used to be your main concern is suddenly automated away.
If you never tried Claude Code, give it a try. It's very easy to get into. And you'll soon see how powerful it is.
I'm actually extremely good at programming. My point is I love computers and computing. You can use technology to achieve amazing things (even having fun). Now I can do much more of that than when I was limited to what I can personally code. In the end, it's what computers can do that's amazing, beautiful, terrifying... That thrill and to be on the bleeding edge is always what I was after.
One thing that I realized was that a lot of our so-called "craft" is converged "know-how". Take the recent news that Anthropic used Claude Code to write a C compiler, for example. Writing a compiler is hard (and fun) for us humans because we need to spend years understanding compiler theory deeply and learning every minute detail of implementation. That kind of learning is not easily transferable. Most students try the compiler class and never learn enough; only a handful every year continue to grow into true compiler engineers. Yet to our AI models, it does not matter much. They already learned the well-established patterns of compiler writing from the excellent open-source implementations, and now they can churn out millions of lines of code easily. If not perfect, they will get better in the future.
So, in a sense our "craft" no longer matters, but what really happens is that the repetitive know-how has become commoditized. We still need people to do creative work, but what is not clear is how many such people we will need. After all, at least in the short term, most people build their career by perfecting procedural work, because transferring the know-how and the underlying whys is very expensive for humans. For the long term, though, I'm optimistic that engineers just got an amazing tool and will use it to create more opportunities that demand more people.
I'm not sure we can draw useful conclusions from the Claude Code written C compiler yet. Yes, it can compile the Linux kernel. Will it be able to keep doing that moving forward? Can a Linux contributor reliably use this compiler to do their development, or do parts of it simply not work correctly if they weren't exercised in the kernel version it was developed against? How will it handle adding new functionality? Is it going to become more-and-more expensive to get new features working, because the code isn't well-factored?
To me this doesn't feel that many steps above using a genetic algorithm to generate a compiler that can compile the kernel.
If we think back to pre-AI programming times, did anyone really want this as a solution to programming problems? Maybe I'm alone in this, but I always thought the problem was figuring out how to structure programs in such a way that humans can understand and reason about them, so we can have a certain level of confidence in their correctness. This is super important for long-lived programs, where we need to keep making changes. And no, tests are not sufficient for that.
Of course, all programs have bugs, but there's a qualitative difference between a program designed to be understood, and a program that is effectively a black box that was generated by an LLM.
There's no reason to think that at some point, computers won't be able to do this well, but at the very least the current crop of LLMs don't seem to be there.
> and now they can churn out millions of lines of code easily.
It's funny how we suddenly shifted from making fun of managers who think programmers should be measured by the number of lines of code they generate, to praising LLMs for the same thing. Why did this happen? Because just like managers, programmers letting LLMs write the code aren't reading and don't understand the output, and therefore the only real measure they have for "productivity" is lines of code generated.
Note that I'm not suggesting that using AI as a tool to aid in software development is a bad thing. I just don't think letting a machine write the software for us is going to be a net win.
> I can't empathize with the complaint that we've "lost something" at all.
I agree! One criticism I've heard is that half my colleagues don't write their own words anymore. They use ChatGPT to do it for them. Does this mean we've "lost" something? On the contrary! Those people probably would have spoken far fewer words into existence in the pre-AI era. But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none. How can anyone say that's something we've lost? That's something we've gained!
It's not only the golden era of code. It's the golden era of content.
> If you would like to grieve, I invite you to grieve with me.
I'm not against AI in itself, but the current implementation (read: market) can eat my shiny metal … I got kids to feed and bills to pay. Yes, AI was supposed to be all about post-scarcity. Hiring in our industry is already feeling the brunt of AI, so where is the UBI? Where is our soft landing? I'm too old to learn XYZ skill if I get laid off; it will be life-changing, and for what? For some rich person to be richer while simultaneously destroying the climate and my energy bill?
I would push back, but I do not know how. Hope for the market to pop I guess.
Generally, from a financial planning standpoint, 35-50 are the "grinding years", where mortgage, family, and other life commitments mean that your career investment needs to pay off to make it through. In some ways it is the "danger zone" financially: hard to change careers (not young enough), but not yet having worked enough to retire, with large expenses coming in. This isn't unique to software engineers either -> this is most people in most jobs.
There is also a mix of people on these forums, in different regions, countries, work experiences, etc. For example, software in most places in the world has had an above-average salary but not an extremely high one (i.e. many other white-collar professions pay similar or more). For those people, for whom it is a standard skilled role, it probably hits even harder now than it does the ones with lots of stock who can retire early and enjoy the new toy that is AI.
It was said in the context of having bills to pay. Meaning that he is in deep and needs a high-priced developer salary to make ends meet.
Virtually all other careers that offer similar compensation have an old boys club gatekeeping the profession, requiring you submit many years and hundreds of thousands of dollars before they will consider letting you in. That might pencil out as a reasonable investment when you are 14, but once you are mid-career you'll never get back what you put into it.
Learning XYZ skill is something you can do at any age, and doing so will even get you an average paying job with ease. Learning the XYZ skill in the way that keeps the old boys happy is not a realistic option for someone who considers themselves old.
> Hiring in our industry is already feeling the brunt of AI
AI isn't what is driving us to slow hiring down in the US. There are other reasons I have brought up multiple times on HN.
> I’m too old to learn XYZ skill if I get laid off
Sadly, you will have to.
My dad is in his 60s and has been programming and soldering since ZX Spectrums and apple ][s roamed the earth, yet he still keeps abreast on the latest CNCF projects, prompt engineering, A2A, eBPF, and other modern stacks.
Meanwhile I'm seeing people half his age flaming out and kvetching that spending some time further studying A2A, MCP, and other design patterns is insurmountable.
Software Engineering is an ENGINEERING discipline. If you do not keep abreast on the changes happening in our industry, you will fall behind.
And in fact, having years of experience is a net benefit because newer innovations themselves build on top of older fundamentals.
For example, understanding Linux internals helps debug GPUs that communicate via Infiniband that are being used to train models that are being orchestrated via K8s and are operating on segmented networks.
Our PortCos and I are not hiring you to be a code monkey writing pretty looking code. If we want a code monkey we can offshore. We are paying you $200k-300k base salaries in order to architect, translate, and negotiate business requirements into technical requirements.
Yes this will require EQ on top of technical depth. That is what engineering is. The whole point of engineering is to build sh#t that works well enough. It doesn't have to be pretty, it will often be MacGyvered, and it will have glaring issues that are tomorrow's problem - but it is solving a problem.
The name of the game for me is building "sh#t that works well" and I like it, and that means constant learning, no doubt. I've done crazy sh!t like implementing web servers in bash, accessed by tunneling over UART, to configure a laser-driven HUD on a pair of glasses. All of this was new to me, but I did it and it works well within the constraints we were given.
Now AI is making us more efficient (with questionable quality), which means we need fewer people to get a job done, fewer people hired per project. I have personally experienced this, to a degree. Now if I get laid off and I don't make the cut because there is more competition, someone better or more desperate than me, I'm out of luck.
I can restart my career as an electrician, I studied a lot of electronics both professionally and personally, but I will be starting as an apprentice, that’s not putting food on my table.
> We are paying you $200k-300k base salaries
That’s nice, I earn far less than half that as a web dev in Norway.
> Software Engineering is an ENGINEERING discipline. If you do not keep abreast on the changes happening in our industry, you will fall behind.
Lately it's more like swallowing big swathes of BS. Software and engineering are two very distant disciplines. This has nothing to do with engineering. You have to use cumbersome, non-human-centric things just to get a job interview. They don't look at you as a programmer. They look at you as an X or Y framework expert. So, you are not an engineer at all. You are more and more becoming a trained monkey who has to appease the feebleminded and work with mind-bogglingly idiotic and overcomplicated things.
LLMs are only a threat if you see your job as a code monkey. In that case you're likely already obsoleted by outsourced staff who can do your job much cheaper.
If you see your job as a "thinking about what code to write (or not)" monkey, then you're safe. I expect most seniors and above to be in this position, and LLMs are absolutely not replacing you here - they can augment you in certain situations.
One of the perks of being a senior is also knowing when not to use an LLM and how they can fail; at this point I feel like I have a pretty good idea of what is safe to outsource to an LLM and what to keep for a human. Offloading the LLM-safe stuff frees up your time to focus on the LLM-unsafe stuff (or just chill and enjoy the free time).
I see my job as having many aspects. One of those aspects is coding. It is the aspect that gives me the most joy even if it's not the one I spend the most time on. And if you take that away then the remaining part of the job is just not very appealing anymore.
It used to be I didn't mind going through all the meetings, design discussions, debates with PMs, and such because I got to actually code something cool in the end. Now I get to... prompt the AI to code something cool. And that just doesn't feel very satisfying. It's the same reason I didn't want to be a "lead" or "manager", I want to actually be the one doing the thing.
You won't be prompting AI for the fun stuff (unless laying out boring boilerplate is what you consider "fun"). You'll still be writing the fun part - but you will be able to prompt beforehand to get all the boilerplate in place.
Never mind coding, where is the LLM for legal stuff? Why are all these programmers working on automating their own job away instead of those bloodsucking lawyers who charge hundreds of EUR per hour?
It’s happening as fast for them. I literally sit next to our general counsel all day at the office. We work together continually. I show him things happening in engineering, and each time he shows me the analogous things happening in legal.
Domain knowledge and gatekeeping. We don't know what is required in their role fully, but we do know what is required in ours. We also know that we are the target of potentially trillions in capital to disrupt our job and that the best and brightest are being paid well just to disrupt "coding". A perfect storm of factors that make this faster than other professions.
It also doesn't help that some people in this role believe that the SWE career is a sinking ship which creates an incentive to climb over others and profit before it tanks (i.e. build AI tools, automate it and profit). This is the typical "It isn't AI, but the person who automates your job using AI that replaces you".
Until they can magically increase context length to such a size that can conveniently fit the whole codebase, we're safe.
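A quick back-of-envelope shows how far off we are; the tokens-per-line ratio is a rough assumption and the numbers are illustrative:

```python
# Back-of-envelope: does a whole codebase fit in a context window?
lines_of_code = 2_000_000   # a mid-sized production codebase (illustrative)
tokens_per_line = 10        # rough rule-of-thumb assumption
context_window = 200_000    # a typical large window today

needed = lines_of_code * tokens_per_line
print(f"needed ~{needed:,} tokens vs {context_window:,} available")
print(f"shortfall: {needed / context_window:.0f}x over budget")
# -> needed ~20,000,000 tokens vs 200,000 available; 100x over budget
```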
It seems like the billions so far mostly go to talk of LLMs replacing every office worker, rather than any action to that effect. LLMs still have major (and dangerous) limitations that make this unlikely.
Dunning–Kruger is everywhere in the AI grift. People who don't know a field try to deploy some AI bot that solves the easy 10% of the problem so it looks good on the surface, and assume that just throwing money (which mostly just buys hardware) will solve the rest.
They aren't "the smartest minds in the world". They are slick salesmen.
Agreed. Programming languages are not ambiguous. Human language is very ambiguous, so if I'm writing something with a moderate level of complexity, it's going to take longer to describe what I want to the AI vs writing it myself. Reviewing what an AI writes also takes much longer than reviewing my own code.
AI is getting better at picking up some important context from other code or documentation in a project, but it's still miles away from what it needs to be, and the needed context isn't always present.
Why is that safe in the medium to long term? If LLMs can code monkey already after just 4 years, why assume in a couple more they can’t talk to the seniors’ direct report and get requirements from them? I’m learning carpentry just in case.
LLMs are a threat to the quality of code in a similar - but much more dramatic - way to high level languages and Electron. I am slightly worried about keeping a job if there's a downturn, but I'm much more worried about my job shifting into being the project manager for a farm of slop machines with no taste and a complete inability to learn.
I see what these can do and I'm already thinking, why would I ever hire a junior developer? I can fire up opencode and tell it to work multiple issues at once myself.
The bottleneck becomes how fast you can write the spec or figure out what the product should actually be, not how quickly you can implement it.
So the future of our profession looks grim indeed. There will be far fewer of us employed.
I also miss writing code. It was fun. Wrangling the robots is interesting in its own way, but it's not the same. Something has been lost.
Because a junior developer doesn't stay a junior developer forever. The value of junior developers has never been the code they write. In fact, in my experience they're initially a net negative, as more senior developers take time to help them learn. But it's an investment, because they will grow into more senior developers.
You hire the junior developer because you can get them to learn your codebase and business domain at a discount, and then reap their productivity as they turn senior. You don’t get that with an LLM since it only operates on whatever is in its context.
(If you prefer to hire seniors that’s fine too - my rates are triple that of a junior and you’re paying full price for the time it takes me learning your codebase, and from experience it takes me at least 3 months to reach full productivity.)
I think it's naive to think that not every part of our jobs will, worryingly soon, be automated. All the way up to and including CEO. This is not exciting.
But I also have no idea how people are going to think about what code to write when they don't write code. Maybe this is all fine, is ok, but it does make me quite nervous!
That is definitely a problem, but I would say it's a problem of hiring, and of the billions of dollars' worth of potential market cap resting on performative bullshit, which encourages companies to not hire juniors so as to send a signal to capture some of those billions, regardless of the actual impact on productivity.
LLMs benefit juniors, they do not replace them. Juniors can learn from LLMs just fine and will actually be more productive with them.
When I was a junior my “LLM” was StackOverflow and the senior guy next to me (who no doubt was tired of my antics), but I would’ve loved to have an actual LLM - it would’ve handled all my stupid questions just fine and freed up senior time for the more architectural questions or those where I wasn’t convinced by the LLM response. Also, at least in my case, I learnt a lot more from reading existing production code than writing it - LLMs don’t change anything there.
If you believe juniors are already not safe, it’s only a question of time before seniors are in the same position. First they came for the socialists, etc etc.
Agree with the author. I like the process of writing code: typing method names and class definitions while at the same time thinking ahead about overall architecture, structure, how long a given function would run for, what kind of tests are necessary.
I find it unsettling how many people in the comments say that they don't like writing code. Feels alien to me. We went into this field for seemingly very different reasons.
I do use LLMs, and even these past two days I was doing a vibe-coding project which was noticeably faster to set up and get to its current state than if I had written it myself. However, I feel almost dirty about how little I understand the project. Sure, I know the overall structure, decisions, and plan. But I didn't write any of it, and I don't have the deep understanding of the codebase which I usually have when working on a codebase myself.
It's not so much the writing of the code (which I did like), it's the aesthetic of the code. It's solving a problem with the right code and the right amount of code (for now). That's still the case, even with AI writing most of the code. You have to steer it constantly because it has very bad instincts, because most people in the profession aren't good at it, so it has bad training data. Mainly because of the "learn to code" movement and people getting into this profession just for the money and not the love. Those people are probably screwed.
For me it's the money. Software paid well when I got in, and still does. I have actually been trying to get the eff out as fast as I can and move to management/research (I work in ML). I have avoided web-dev shit like the plague since it is indeed very low-value-added work. The fact that LLMs can finish all this crappy work for 20 bucks a month, where I don't have to do it by hand, is a welcome step. OTOH, I don't think point-and-trust is the way to do AI-assisted coding. You get better output when you know the business logic and can make sense of the code. In fact, I prompt many, many times until the AI spits out something I understand.
Honestly, all the complaining about the dying of a craft is just pathetic. In one of the jobs I worked, there were specific performance rubrics under "Craft" and those really annoyed me. Software / code is just a tool to solve a problem.
Coding is a tool to solve a problem, but to many it is also a culture. It has a history, it has connections, it has lore, it has some loosely held common values, and yes, it absolutely was a craft. It's okay for people to mourn that. It's not the same as someone who may have taken it up for money. That itself is seeing coding as a tool for income—which is valid.
Calling it pathetic I think lacks empathy and perhaps an appreciation for what it might mean to other people outside the letters we type. Those letters, the languages we type, compilers that process them, and libraries that enable them were, at the end of the day, made by people.
The craft is still there just like painting is still an alternative to a photograph. It's just not going to be valued by society anymore, and far fewer will learn how to do it. Natural language is the new programming language. For now, understanding the craft is still an edge in making better prompts, but already I can see that telling Antigravity "now look for ways to make this more efficient" works almost as well as guiding it specifically on how it had duplicated some code flows.
I do feel a kind of personal loss, in the sense that society is in the process of ceasing to value or admire the design and coding skill I've cultivated since I was 6 years old. At the same time, I'm kind of thrilled that I can write a detailed readme.md and tell an agent to "make it so", and iterate to a utility program in 20 minutes instead of an hour. When I feel a pit in my stomach is when that utility program uses some framework that I haven't learned, and don't need to, because the code worked perfectly the first time. Surely that means I'm going to basically stop learning the details, as the details I've accumulated over my life quickly begin to not matter anymore.
Honestly I'm planning to use AI to make a kick-ass retro development environment a la "Sending Modern Languages Back to 1980s Game Programmers" (https://prog21.dadgum.com/6.html) and spend my retirement having fun in it.
Even if I were retired and financially set now, that would mean nothing in 10 years if an unemployed society collapses around me. Apathy is not on the menu today.
> People who have reaped the rewards of their careers tend not to be the ones concerned about their futures. Apathy.
Not everyone has to become a programmer; people at the start of their careers can choose paths other than programming if they're afraid of the (lack of) future prospects from AI. Where did people work before the ZIRP boom? Those industries are still around. Plenty of STEM-related jobs besides programming.
Did society actually value those skills before? Maybe companies or individuals did, but giving coded instructions to computers was seen by most as wizardry at best and geeky at worst. Unfortunately, I feel society values tackling and home run hitting, superficial beauty, and wealth, far more than technical skills.
" They can write code better than you or I can, and if you don’t believe me, wait six months."
It's ALWAYS wait 6 months, or wait for the next generation. Or "oh that model that we told you to use is old now, use this new one instead. Oh that doesn't work? Well that's old now, use this one". Always. 6 months ago it was wait 6 months. 12 months ago it was wait 6 months. 18 months ago it was wait 6 months. Now it's wait 6 months. 6 months from now it'll be wait 6 months.
While I'm on the fence about LLMs there's something funny about seeing an industry of technologists tear their own hair out about how technology is destroying their jobs. We're the industry of "we'll automate your job away". Why are we so indignant when we do it to ourselves...
This article isn't really about losing a job. Coding is a passion for some of us. It's similar to artists and diffusion, the only difference being that many people can appreciate human art - but who (outside of us) cares that a human wrote the code?
I love programming, but most of that joy doesn't come from the type of programming I get paid to do. I now have more time and energy for the fun type, and I can go do things that were previously inconceivable!
Last night "I" "made" 3D boids swarm with directional color and perlin noise turbulence. "I" "did" this without knowing how to do the math for any of those things. (My total involvement at the source level was fiddling with the neighbor distance.)
I think this is really it. Being a musician was never a very reliable way to earn a living, but it was a passion. A genuine expression of talent and feeling through the instrument. And if you were good enough, you could pay the bills doing work for studios, commercials, movies, theater. If you were really good you could perform as a headliner.
Now, AI can generate any kind of music anyone wants, eliminating almost all the anonymous studio, commercial, and soundtrack work. If you're really good you can still perform as a headliner, but (this is a guess) 80% of the work for musicians is just gone.
The people outside of us didn’t care about your beautiful code before. Now we can quickly build their boring applications and spend more time building beautiful things for our community’s sake. Yes, there are economic concerns, but as far as “craft” goes, nothing is stopping us from continuing to enjoy it.
I disagree a bit. Coding can remain an artistic passion for you indefinitely; it's just that your ability to demand that everyone craft each line of code artisanally won't be subsidized by your employer for much longer. There will probably always be some demand for handcrafted code, though heavily diminished.
Agreed. I've always thought the purpose of all automation was to remove needless toil. I want computers to free people. I guess I subscribe to the theory of creative destruction.
"We" might be such an industry, but I'm not. My focus has always been on creating new capabilities, particularly for specialists in whatever field. I want to make individuals more powerful, not turn them into surplus.
For me it's because the same tech is doing it to everyone else in a more effective way (i.e. artists especially). I'm an "art enjoyer" since I was a child and to see it decimated by people who I once looked up to is heartbreaking. Also, if it only affected software, I would've been happy to switch to a more artistic career, but welp there goes that plan.
I feel very similarly, I always thought of software engineering as being my future career. I'm young, I just really got my foot into the industry in my early twenties. It feels like the thing I wanted to do died right when I was allowed to start. I also always felt that if I didn't get to do development, I would try to get into arts which has always been a dream of mine, and now it feels that that died, too. I wish I was born just a little bit earlier, so that I had a bit more time. :(
These comments are comical. How hard is it to understand that human beings are experiential creatures? Our experiences matter: to survival, to culture, and to identity.
I mourn the horse masters and stable boys of a century past because of their craft. Years of intuition and experience.
Why do you watch a chess master play, or a live concert, or any form of human creation?
In fact, contrary things are so very often both true at the same time, in different ways.
Figuring out how to live in the uncomfortableness of non-absolutes, how to live in a world filled with dualisms, is IMO one of the primary and necessary maturities for surviving and thriving in this reality.
For my whole life I’ve been trying to make things—beautiful elegant things.
When I was a child, I found a cracked version of Photoshop and made images which seemed like magic.
When I was in college, I learned to make websites through careful, painstaking effort.
When I was a young professional, I used those skills and others to make websites for hospitals and summer camps and conferences.
Then I learned software development and practiced the slow, methodical process of writing and debugging software.
Now, I get to make beautiful things by speaking, guiding, and directing a system which is capable of handling the drudgery while I think about how to make the system wonderful and functional and beautiful.
It was, for me, never about the code. It was always about making something useful for myself and others. And that has never been easier.
I like coding, I really do. But like you, I like building things more than I like the way I build them. I do not find myself missing writing code by hand as much.
I do find that the developers who focused on "build the right things" mourn less than those who focused on "build things right".
But I do worry. The main question is this - will there be a day that AI will know what are "the right things to build" and have the "agency" (or illusion of) to do it better than an AI+human (assuming AI will get faster to the "build things right" phase, which is not there yet)
My main hope is this - AI has been able to beat humans at chess for a while now, yet we still play chess: people earn money from playing chess and teaching chess, chess players are still celebrated, YouTube influencers still get monetized for analyzing the games of celebrity chess players, even though the top human chess player would likely lose to a Stockfish engine running on my iPhone. So maybe there is hope.
> will there be a day that AI will know what are "the right things to build" and have the "agency" (or illusion of) to do it better than an AI+human (assuming AI will get faster to the "build things right" phase, which is not there yet)
Of course, and if LLMs keep improving at current rates it will happen much faster than people think.
Arguably you don't need junior software engineers anymore. When you also don't need senior software engineers anymore it isn't that much of a jump to not needing project managers, managers in general or even software companies at all anymore.
Most people, in order to protect their own ego, will assume *their* job is safe until the job one rung down from them disappears and then the justified worrying will begin.
People on the "right things to build" track love to point out how bad people are at describing requirements, so assume their job as a subject matter expert and/or customer-facing liaison will be safe, but does it matter how bad people are at describing requirements if iteration is lightning fast with the human element removed?
Yes, maybe someone who needs software and who isn't historically some sort of software designer is going to have to prompt the LLM 250 times to reach what they really want, but that'll eventually still be faster than involving any humans in a single meeting or phone call. And a lot of people just won't really need software as we currently think about it at all, they'll just be passing one-off tasks to the AI.
The real question is what happens when the labor market for non-physical work completely implodes as AI eats it all. Based on current trends I'm going to predict in terms of economics and politics we handle it as poorly as possible leading to violent revolution and possible societal collapse, but I'd love to be wrong.
> I do find that the developers who focused on "build the right things" mourn less than those who focused on "build things right".
I've always been strongly in the first category, but... the issue is that 10x more people will be able to build the right things. And if I build the right thing, it will be easy to copy. The market will get crowded, so distribution will become even harder than it is today. Success will be determined by personal brand, social media presence, social connections.
For me, photography is the metaphor - https://raskie.com/post/we-have-ai-at-home - We've had the technology to produce a perfect 2D likeness of a subject for close to two centuries now, and people are still painting.
Video didn't kill the radio star either. In fact the radio star has become more popular than ever in this, the era of the podcast.
> will there be a day when AI knows what "the right things to build" are and has the "agency" (or the illusion of it) to build them better than an AI+human
I share this sentiment. It's really cool that these systems can do 80% of the work. But given what this 80% entails, I don't see a moat around that remaining 20%.
> "build the right things" [vs] "build things right"
I think this (frequent) comparison is incorrect. There are times when quality doesn't matter and times that it does. Without that context these discussions are meaningless.
If I build my own table no one really gives a shit about the quality besides me and maybe my friends judging me.
But if I sell it, well then people certainly care[0] and they have every right to.
If I build a deck at my house, people also care, and there's a reason I need to get permits: the danger it can pose to others. It's not a crazy thing to get your deck inspected, and that's really all there is to it.
So I don't get these conversations because people are just talking past one another. Look, no one gives a fuck if you poorly vibe code your personal website, or at least it is gonna be the same level as building your own table. But if Ikea starts shipping tables with missing legs (even if it is just 1%) then I sure give a fuck and all the customers have a right to be upset.
I really think a major part of this concern with vibe coding is about something bigger. It is about slop in general. In the software industry we've been getting sloppier and sloppier, and LLMs significantly amplify that. It really doesn't matter if you can vibe code something with no mistakes; what matters is what the businesses do. Let's be honest, they're rushing and don't care about quality, because they have markets cornered and consumers are unable to accurately evaluate products prior to purchase. Those are the textbook conditions for a lemon market. I mean, the companies outsource tech support, so you call and someone picks up whose accent makes you suspicious of their real name being "Steve". After all, it is the fourth "Steve" you've talked to as you get passed around from support person to support person. The same companies contract out coding to poor countries, and you find random comments in another language in the codebase. That's the way things have been going. More vaporware. More half-baked products.
So yeah, when you have no cake, a half-baked cake is probably better than nothing. At home it also doesn't matter if you're eating a half-baked cake or one that competes with the best bakers in the world. But for everyday people who can't bake their own cakes, what do they do? All they see is a box with a cake in it; one is $1, another is $10, and another is $100. They look the same, but they can't know until they take a bite. You try enough of the $1 cakes, and by the time you give up, the $10 cakes are all gone. By the time you get so frustrated you'll buy the $100 cake, they're gone too.
I don't dislike vibe coding because it is "building things the wrong way" or any of that pretentious notion. I, and I believe most people with a similar opinion, care because "the right things" aren't being built. Most people don't care how things were built, but they sure do care about the result. Really people only start caring about how the sausage is made when they find out that something distasteful is being served and concealed from them. It's why everyone is saying "slop".
So when people make this false dichotomy, it just feels like they aren't listening to what's actually being said.
[0] Mind you, it is much easier for an inexperienced person to judge the quality of a table than software. You don't need to be a carpenter to know a table's leg is missing or that it is wobbly but that doesn't always hold true for more sophisticated things like software or even cars. If you haven't guessed already, I'm referencing lemon markets: https://en.wikipedia.org/wiki/The_Market_for_Lemons
I've seen a hundred ai-generated things, and they are rarely interesting.
Not because the tools are insufficient, it's just that the kind of person that can't even stomach the charmed life of being a programmer will rarely be able to stomach the dull and hard work of actually being creative.
Why should someone be interested in your creations? In what part of your new frictionless life would you have picked up something that sets you apart from a million other vibe-coders?
> stomach the dull and hard work of actually being creative
This strikes me as the opposite of what I experience. When I say I'm "feeling creative", everything comes easily, at least in the context of programming, making music, doing 3D animation, and some other topics. If it's "dull and hard work", it's because I'm not feeling "creative" at all; when "creative mode" is on in my brain, nothing feels dull or hard. Maybe it works differently for others.
I love building things too, but for me, the journey is a big part of what brings me joy. Herding an LLM doesn't give me joy like writing code does. And the finished project doesn't feel the same when my involvement is limited to prompting an LLM and reviewing its output.
If I had an LLM generate a piece of artwork for me, I wouldn't call myself an artist, no matter how many hours I spent conversing with the LLM in order to refine the image. So I wouldn't call myself a coder if my process was to get an LLM to write most/all the code for me. Not saying the output of either doesn't have value, but I am absolutely fine gatekeeping in this way: you are not an artist/coder if this is how you build your product. You're an artistic director, a technical product manager, something of that nature.
That said, I never derived joy from every single second of coding; there were and are plenty of parts to it that I find tedious or frustrating. I do appreciate being able to let an LLM loose on some of those parts.
But sparing use is starting to be viable only for hobby projects. I'm not sure I could get away with taking the time to write most of it manually when LLMs might make coworkers more "productive". Even if I can convince myself my code is still "better" than theirs, that's not what companies value.
Isn't this like saying that if better woodworking tools come out, and you like woodworking, that woodworking somehow 'isn't your craft'. They said that their craft is about making things.
There are woodworkers on YouTube who use CNC, some who use the best Festool stuff but nothing that moves on its own, and some who only use handtools. Where is the line at which woodworking is not their craft?
Yeah, seems like too many went into this field for money or status not because they like the process. Which is not an issue by itself, but now these people talk about how their AI assistant of choice made them some custom tool in two hours that would have taken them three weeks. And it's getting exhausting.
It is a different kind of code. A lot of programmers just can't grok it as such.
I guess I started out as a programmer, then went to grad school and learned how to write and communicate my ideas, it has a lot in common with programming, but at a deeper level. Now I’m doing both with AI and it’s a lot of fun. It is just programming at a higher level.
I'm going to be thinking about this comment for a while, and I think you're basically right.
Almost none of the code I wrote in 2015 is still in use today. Probably some percentage of people can point to code that lasted 20 years or longer, but it can’t be a big percentage. When I think of the work of a craft, I think of doing work which is capable of standing up for a long time. A great builder can make a house that can last for a thousand years and a potter can make a bowl that lasts just as long.
I’ve thought of myself as a craftsman of code for a long time but maybe that was just wrong.
So much garbage ego in statements like this. If you really knew about software, you'd recognize there are about a million ways to be successful in this field.
> Now, I get to make beautiful things by speaking, guiding, and directing a system which is capable of handling the drudgery while I think about how to make the system wonderful and functional and beautiful.
For how long do you think this is sustainable? In the sense of you, or me, or all these other people here being able to earn a living. Six months? A couple of years? The time until the next-but-one Claude release drops?
Does everyone have to just keep re-making themselves for whatever the next new paradigm turns out to be? How many times can a person do that? How many times can you do that?
I want to be in your camp, and am trying hard. But the OP's blog entry should at least give us a moment to "respect the dead". That's all he's asking, I think.
This is the best description of value from AI that I've seen so far. It allows people who don't like writing code to build things without doing so.
I don't think it's nearly as valuable to people who do enjoy writing code, because I don't think prompting an agent (at least in their current state) is actually more productive than just writing the code. So I don't see any reason to mourn on either side.
Adam Neely has a video on GenAI and its impact on the music industry. There is a section in the video about beauty and taste, and his conclusions there are pretty different from yours. One example I remember: would an AI find beauty in the sound of a record scratch?
> For my whole life I’ve been trying to make things—beautiful elegant things.
Me too, but... The ability to code was a filter. With AI, the pool of people who can build beautiful elegant software products expands significantly. Good for the society, bad for me.
AI agents seem to be a powerful shortcut through the drudgery. But let's not forget that powerful software rests on substance. My hope is that the substance will increase, after all.
In my opinion the relationship between level of detailed care and resulting beauty is proportional. Can you get the same level without getting your hands dirty? Sure, maybe, but I doubt a painter or novelist could really produce beautiful work without being intimately familiar with that work. The distance that heavy use of AI tools creates between you and the output does not really lend itself to beauty. Could you do it, sure, but at that point it's probably more efficient to just do things yourself and have complete intimate control.
To me, you sound more utilitarian. The philosophy you are presenting is a kind of Ikea philosophy. Utility, mass production, and unique beauty are properties that generally do not cohere, and there's a reason for this. I think the use of LLMs in the production of digital goods is very close to the use of automated assembly lines in the production of physical goods. No matter how you try, some of the human charm, and thus beauty, will inevitably be lost; the number of goods will increase, but they'll all be barely differentiable, soulless replications of more or less the same shallow ideas repeated ad infinitum.
I agree. LLMs definitely sand off a lot of personality, and you can see it most clearly in writing; at this point I'm sure tons of people are subconsciously trained to lower their trust whenever they recognize the typical patterns.
With code, especially interfaces, the results will be similar -- more standardized palettes, more predictable things.
To be fair, this converging trend has been going on pretty much forever, e.g. radio/TV led to lots of local accents disappearing; our world is heavily globalized.
So when you "learned software development and practiced the slow, methodical process of writing and debugging software", it wasn't about code? I don't get it. Yes, building useful things is the ultimate goal, but code is the medium through which you do it, and I don't understand how that cannot be an important part of the process.
It's like a woodworker saying, "Even though I built all those tables using precise craft and practice, it was NEVER ABOUT THE CRAFT OR PRACTICE! It was about building useful things." Or a surgeon talking about saving lives and doing brain surgery, but "it was never about learning surgery, it was about making people get better!"
Not the GP, but I feel some of that energy. The parts I most enjoy are the interfaces, the abstractions, the state machines, the definitions. I enjoy the code too, and I would be sad to lose all contact with it, but I've really appreciated AI especially for helping me get over the initial hump on things like:
- infrastructure bs, like scaffold me a JS GitHub action that does x and y.
- porting, like take these kernel patches and adjust them from 6.14 to 6.17.
- tools stuff, like here's a workplace shell script that fetches a bunch of tokens for different services, rewrite this from bash to Python.
- fiddly things like dealing with systemd or kubernetes or ansible
- fault analysis, like here's a massive syslog dump or build failure, what's the "real" issue here?
In all these cases I'm very capable of assessing, tweaking, and owning the end result, but having the bot help me with a first draft saves a bunch of drudgery on the front end, which can be especially valuable for the ADHD types where that kind of thing can be a real barrier to getting off the ground.
So many people responding to you with snarky comments or questioning your programming ability. It makes me sad. You shared a personal take (in response to TFA which was also a personal take). There is so much hostility and pessimism directed at engineers who simply say that AI makes them more productive and allows them to accomplish their goals faster.
To the skeptics: by all means, don't use AI if you don't want to; it's your choice, your career, your life. But I am not sure that hitching your identity to hating AI is altogether a good idea. It will make you increasingly bitter as these tools improve further and our industry and the wider world slowly shifts to incorporate them.
Frankly, I consider the mourning of The Craft of Software to be just a little myopic. If there are things to worry about with AI they are bigger things, like widespread shifts in the labor force and economic disruption 10 or 20 years from now, or even the consequences of the current investment bubble popping. And there are bigger potential gains in view as well. I want AI to help us advance the frontiers of science and help us get to cures for more diseases and ameliorate human suffering. If a particular way of working in a particular late-20th and early-21st century profession that I happen to be in goes away but we get to those things, so be it. I enjoy coding. I still do it without AI sometimes. It's a pleasant activity to be good at. But I don't kid myself that my feelings about it are all that important in the grand scheme of things.
If AI can do the coding, those of us who aren't programmers don't need you anymore. We can just tell the AI what we want.
Luckily for real programmers, AI's not actually very good at generating quality code. It generates the equivalent of Ali Baba code: it lasts for one week and then breaks.
This is going to be the future of programming: low-paid AI clerks to generate the initial software, and then the highly paid programmers who fix all the broken parts.
Yes. The problem is there is a huge invisible gap between "looks like it works" and "actually works", and everything that entails, like security and scaling beyond a couple users. Non-programmers and inexperienced ones will have trouble with those gaps. Welcome to our slop filled future.
Because such people are not being sincere, either with themselves about who they are, or with others. It's really hard for me to take seriously phrases like "I joined this industry to make things, not to write code".
Do painters paint because they just like to see the final picture? Or do they like the process? Yes, painting is an artistic process, not exactly a crafting one. But the point stands.
Woodworkers making nice custom furniture generally enjoy the process.
It's like learning to cook and regularly making your own meals, then shifting to a "new paradigm" of hiring a personal chef to cook for you. Food's getting made either way, but it's not really the same deal.
I think unless you're vibe coding, it's pretty clear that they're still making it. Just because you aren't literally typing 100% of the characters that make up the syntax of the programming language you're using doesn't mean you're not making the final product in any meaningful sense, if you're designing the architecture, the algorithms, the data structures, the state machines, the interfaces, etc., and thinking about how they interact and whether they'll do something that's useful for the people you're making it for.
I feel like most of the anxiety around LLMs is because (in the USA at least) our social safety net sucks.
I'd probably have way more fun debating LLMs if it wasn't tied to my ability to pay rent, have healthcare, or feel like a valued person contributing something to society. If we had universal healthcare and a federal job guarantee it would probably calm things down.
What's more interesting is how the big names in our industry, the ones who already made their money as you say, have turned quickly since the end of 2025. I think even the most old school names can see that the writing is on the wall now.
This is what I don't really understand. It's a bit difficult to take "wait x months" at face value because I've been hearing it for so long. Wait x months for what? Why hasn't it happened yet?
Things seem to have been getting better since December 2022 (the ChatGPT launch), sure, but is there a ceiling we don't see?
"Self-driving cars" and Fusion power also come to mind. With the advent of photography, it was widely believed that drawing and painting would vanish as art forms. Radio would obsolete newspapers, becoming obsolete themselves with television, and so on. Don't believe the hype.
Um... Claude Code has been out less than a YEAR, and the lift in capability in that time has been dramatic.
It does seem probable based on progress that in 1-2 more model generations there will be little need to hand code in almost any domain. Personally I already don't hand code AT ALL, but there are certainly domains/languages that are under performing right now.
Right now with the changes this week (Opus 4.6 and "teams mode") it already is another step function up in capability.
Teams mode is probably only good for greenfield or "green module" development but I'm watching a team of 5 AI's collaborating and building out an application module by module. This is net new capability for the tool THIS WEEK (Yes I am aware of earlier examples).
I don't understand how people can look at this and then be dismissive of future progress, but human psychology is a rich and non-logical landscape.
Things have progressed much faster than even the most optimistic predictions, so every "wait 6 months" has come true. Just look at how the discourse has changed on HN. No-one is using the arguments from 6 months ago and any argument today will probably be equally moot in 6 months.
Maybe we should look at output like quality of software being produced instead of discourse on forums where AI companies are spending billions to market?
Where is all this new software and increased software quality from all this progression?
If we were as smart as the smartest guys throwing trillions at LLMs we wouldn't be predicting anything, we would be creating it like the gods we were always meant to be ever since someone hurt our feelings irrevocably. Hitler could have been a painter, these guys could be slinging dope for a living but here we are.
But the sentiment has changed significantly over the last 6 months. I think this is the biggest step change in sentiment since ChatGPT 3.5. Someone who said "wait 6 months" 6 months ago would have been "right".
> Now is the time to mourn the passing of our craft.
Your craft is not my craft.
It's entirely possible that, as of now, writing JavaScript and Java frontends (what the author does) can largely be automated with LLMs. I don't know who the author is writing to, but I do not mistake the audience to be "programmers" in general...
If you are making something that exists, or something that is very similar to something that exists, odds are that an LLM can be made to generate code which approximates that thing. The LLM encoding is lossy. How will you adjust the output to recover the loss? What process will you go through mentally to bridge the gap? When does the gap appear? How do you recognize it? In the absolute best case you are given a highly visible error. Perhaps you've even shipped it, and need to provide context about the platform and circumstances to further elucidate. Better hope that platform and circumstance is old-hat.
This perspective was mine 6 months ago. And god damn, I do miss the feeling of crafting something truly beautiful in code sometimes. But then, as I've been pushed into this new world we're living in, I've come to realize a couple things:
Nothing I've ever built has lasted more than a few years. Either the company went under, or I left and someone else showed up and rewrote it to suit their ideals. Most of us are doing sand art. The tide comes in and it's gone.
Code in and of itself should never have been the goal. I realized that I was thinking of the things I build and the problems I selected to work on from the angle of code quality nearly always. Code quality is important! But so is solving actual problems with it. I personally realized that I was motivated more by the shape of the code I was writing than the actual problems it was written to solve.
Basically the entire way I think about things has changed now. I'm building systems to build systems. That's really fun. Do I sometimes miss looking at a piece of code and feeling a sense of satisfaction at how well made it is? Sure. That era of software is sadly done now. We've exited the craftsman era and entered the Ikea era of software development.
They cannot. They can make some average code. On Friday one suggested an NSIS installer script that would never have bundled some needed files into the actual installer. I can only imagine that a lot of people have made the same mistake (used CopyFiles instead of File) and posted that mistake on the internet. The true disaster is that testing that installer on the developer's PC, where CopyFiles may well work fine since the needed files happen to be sitting on that PC, would lead one to think it was some weird bug that only failed on the end user's PC. I bet a lot of people posted it with comments like "this worked fine when I tried it," and here we are a decade later feeding that to an LLM.
These tools can write average code. That's what they've mostly been fed; that's what they're aiming for when they do their number crunching. The more specifically one prompts, I expect, then the more acceptable that average code will be. In some cases, average appears to be shockingly bad (actually, based on a couple of decades' experience in the game, average is generally pretty bad - I surely must have been churning out some average, bad code twenty years ago). If I want better than average, I'm going to have to do it myself.
And it will run rings around me in all the languages I don't know; in every case where my own standard would be shockingly bad (I speak no APL whatsoever, for example), it would do better (in some cases, though, it would confidently produce an outcome that was actually worse than my null outcome).
You left out the key line “and you don’t believe me, wait six months”. These models are getting better all the time. The term “vibe coding” was only coined a year ago, around the same time as the release of Claude Code.
It doesn’t matter if you don’t think it’s good yet, because it’s brand new tech and it keeps improving.
My fear is management saying: "here are two juniors and a Claude, now produce the output of 10 seniors". It's not working out? You must be using it wrong. You don't want the juniors? Too bad.
1. Crafting something beautiful. Figuring out correct abstractions and mapping them naturally to language constructs. Nailing just the right amount of flexibility, scalability and robustness. Writing self-explanatory, idiomatic code that is a pleasure to read. It’s an art.
2. Building useful things. Creating programs that are useful to myself and to others, and watching them bring value to the world. It’s engineering.
These things have utility, but they are also enjoyable in themselves. As best I can tell, your emotional response to coding agents depends on how much you care about each of these two things.
AI has taken away the joy of crafting beautiful things, and has amplified the joy of building things by more than 10x. Safe bet: It will get to 100x this year.
I am very happy with this tradeoff. Over the years I grew to value building things much more highly. 20yo me would’ve been devastated.
I tried out Claude code for the first time today, and I was a little bit disappointed after all the comments I’ve been reading about it. I didn’t so much notice a speed difference as I did just not having to think very hard while I was working compared to writing everything myself.
Also, it’s always six months from now, because otherwise you could just point at the hundred ways they’re wrong right now. It’s nothing but the ol’ dotcom “trust me, bro” kind of marketing.
> I didn’t ask for the role of a programmer to be reduced to that of a glorified TSA agent, reviewing code to make sure the AI didn’t smuggle something dangerous into production.
This may be the perspective of some programmers. It doesn't seem to be shared by the majority of software engineers I know and read and listen to.
Do you mean the perspective that he is a "glorified TSA agent" or that he doesn't like it? Because in this thread it seems that some people agree but they just like it :)
I disagree the opportunities created for software engineers are reduced to those of a "glorified TSA agent".
We now have more opportunity than ever to create more of the things we have wanted to. We are able to spend more time leaning into our abilities of judgement, creativity, specific knowledge, and taste.
Countless programming frustrations are gone. I, and all those I talk to, are having more fun than we have ever had.
I'm still not sure what analogy fits for me. It's closer to product manager/maestro/artist/architect/designer that helps a number of amazing systems create great code.
I often venerate antiques and ancient things by thinking about how they were made. You can look at a 1000-year-old castle and think: This incredible thing was built with mules and craftsmen. Or look at a gorgeous, still-ticking 100-year-old watch and think: This was hand-assembled by an artist. Soon I'll look at something like the pre-2023 Linux kernel or Firefox and think: This was written entirely by people.
The modal person just trying to get their job done wasn't a software artisan; they were cutting and pasting from Stack Overflow, using textbook code verbatim, and using free and open-source code in ways that would likely violate the letter and spirit of the license.
If you were using technology or concepts that weren't either foundational or ossified, you found yourself doing development through blog posts. Now you can at least have a stochastic parrot that has read all the code and documentation, and you can talk to it.
At least with physical works (for now, anyway), the methods the artisans employ leave tell-tale signs attesting to the manner of construction, so that someone at least has the choice of going the "hand made" route, and others, even lay people without special tooling, can tell that it indeed was hand made.
Fully AI generated code has similar artifacts. You can spot them pretty easily after a bit. Of course it doesn't really matter for the business goals, as long as it works correctly. Just like 99% of people don't care if their clothing was machine made vs. handmade. It's going to be a tiny minority that care about handmade software.
> Today, I would say that about 90% of my code is authored by Claude Code. The rest of the time, I’m mostly touching up its work or doing routine tasks that it’s slow at, like refactoring or renaming.
> I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it. Maybe it’s because the initial tribal battle lines have clouded everybody’s judgment, or maybe it’s because we inhabit different worlds where the technology is either better or worse (I still don’t think LLMs are great at UI for example), but there’s just a lot of patently unhelpful discourse out there, and I’m tired of it.
This seems to be a general rule for AI-generated anything. It's impressive in domains you're not an expert in. Much less so in domains where you are.
If AI is good enough that juniors wielding it outproduce seniors, then the juniors are just... overhead. The company would cut them out and let AI report to a handful of senior architects who actually understand what's being built. You don't pay humans to be a slow proxy for a better tool.
If the tools get good enough to not need senior oversight, they're good enough to not need junior intermediaries either. The "juniors with jetpacks outpacing seniors" future is unrealistic and unstable—it either collapses into "AI + a few senior architects" or "AI isn't actually that reliable yet."
Or it collapses when the seniors have to retire anyway. Who instructs the LLM when there’s nobody who understands the business?
I’m sure the plan is to create a paperclip maximizing company which is fully AI. And the sea turned salty because nobody remembered how to turn it off.
After ten years of professional coding, LLMs have made my work more fun. Not easier in the sense of being less demanding, but more engaging. I am involved in more decisions, deeper reviews, broader systems, and tighter feedback loops than before. The cognitive load did not disappear. It shifted.
My habits have changed. I stopped grinding algorithm puzzles because they started to feel like practicing celestial navigation in the age of GPS. It is a beautiful skill, but the world has moved on. The fastest path to a solution has always been to absorb existing knowledge. The difference now is that the knowledge base is interactive. It answers back and adapts to my confusion.
Syntax was never the job. Modeling reality was. When generation is free, judgment becomes priceless.
We have lost something, of course. There is less friction now, which means we lose the suffering we often mistook for depth. But I would rather trade that suffering for time spent on design, tradeoffs, and problems that used to be out of reach.
This doesn't feel like a funeral. It feels like the moment we traded a sextant for a GPS. The ocean is just as dangerous and just as vast, but now we can look up at the stars for wonder, rather than just for coordinates.
I agree with nolanlawson's sentiment. What's interesting is that many of the opposing statements here seem to be less interested in the actual code and more interested in the final state. Both are valid, but one is going away due to technological advancement. That is the mourning.
There are some of us who enjoyed the code as a thing to explore. Others here don't seem to like that as much.
I've never bought into it because like 80% of the work the world does is CRUD-level stuff which should be boring and simple so it can be readable and maintainable.
The craftspeople doing the other 20% of the code are at the top end of the skill spectrum, but AI is starting from the bottom and working its way up. They should be the least worried about AI taking over their output.
This is like throwing together dozens of stick frame homes that look alike vs. building custom log, brick, or stone houses. No one is going to be tearing down my drywall and marveling at how well the studs are spaced.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
You can use AI to write all your code, but if you want to be a programmer and can't see that the code is pretty mid then you should work on improving your own programming skills.
People have been saying the 6 month thing for years now, and while I do see it improving in breadth, quality/depth still appears to be plateauing.
It's okay if you don't want to be a programmer, though; you can be a manager and let AI do an okay job at being your programmer. You'd better be driven to be good at managing, though. If you're not... then AI can do an okay job of replacing you there too.
I don't get the hype. And I don't think we will reach peak AI coding performance any time soon.
Yes, watching an LLM spit out lots of code is for sure mesmerizing. Small tasks usually work OK, the code kinda compiles, so for some scenarios it can work out... BUT anyone serious about software development can see what a piece of CRAP the code is.
LLMs are great tools overall: great for bouncing ideas around, great for getting shit done. If you have a side project and no time, awesome. If your boss/company has a shitty culture and you just want to get the task done, great. Got a mundane coding task, hate coding, or your code won't run in a critical environment? Please, LLM that shit over 9000.
Remember though, an LLM is just a predictor, a noisy, glorified text predictor. Only when AI stops optimizing for short-term gains, has a built-in long-term memory architecture (similar to humans), AND can produce something of Linux-kernel-level quality and size, then we can talk.
Super weird comments on this thread, to the point that I would think it's brigaded or some (most) comments are straight-up AI generated. Hacker News has definitely changed. The tone on AI has shifted over the past few months, I guess partly because many people here are working on AI. Almost every new startup is AI-adjacent now; circa 120 out of 150 YC startups are AI. So there is a big push on this forum to keep the hype and sentiment going.
I was thinking the same thing as I scrolled through. Half the comments are some AI-generated homage to AI. I've seen Anthropic pushing so much marketing BS on X/Twitter that it wouldn't surprise me if it extended to HN.
I have junior people on my team using Cursor and Claude, it’s not all great. Several times they’ve checked in code that also makes small yet breaking changes to queries. I have to watch out for random (unused) annotations in Java projects and then explain why the tools are wrong. The Copilot bot we use on GitHub slows down PR reviews by recommending changes that look reasonable yet either don’t really work or negatively impact performance.
Overall, I'd say AI tooling has close to doubled the time I spend on PR reviews. More knowledgeable developers do better with these tools, but they also fall for the tooling's false confidence from time to time.
I worry people are spending less time reading documentation or stepping through code to see how it works out of fear that “other people” are more productive.
I think you need to lighten up and adapt, the way programmers have adapted since the 50s. There's been a lot of pomposity over the past ten years around "craft"; you'd think we're all precious Renaissance artists pining for recognition of our heavenly crafted code rather than engineers. I'm in my late forties also, in case that matters at all; I don't think it does. If anything, it gives me an advantage in spotting and quickly filtering out the 1/10 insanely bad suggestions that AI pukes out, and in knowing when it's giving me decent code. Who knows where it will lead, but stubbornly refusing to move won't help. Please take this in a positive way, rather than as a dig. Be positive. Also think of all the job opportunities you will have rewriting all the crap code that gets punted out over the next 10 years. You're going to be rich. Seriously though, if you are in your forties you have the advantage of being able to leverage AI yourself while also knowing when it is bullshitting you.
I wonder if this is just a matter of degree. In a few years (or less) you may not have to "skillfully guide" anything. The agents will just coordinate themselves and accomplish your goals after you give some vague instruction. Will you still feel proud? Or maybe, a bit later, agents will come up with their own improvements and just ship them without any input at all. How about then?
That requires thinking. Let's just ship now, think later. Does it matter? Show me the money and all that. We will all just ride into the sunset with Butch Cassidy and the Sundance Kid, I'm sure.
Some code is worth transcribing by hand — an ancient practice in writing, art and music.[0] Some isn't even worth looking at.
I find myself, ironically, spending more time typing out great code by hand now. Maybe some energy previously consumed by tedium has been freed up, or maybe the wacky machines brought a bit of the whimsy back into the process for me.
[0] And in programming, for the readers of Zed Shaw's books :)
But I am still quite annoyed at the slopful nature of the code that is produced when you're not constantly nagging it to do better.
We've RLed it to produce code that works by hook or by crook, putting endless fallback paths and type casts everywhere rather than checking what the semantics should be.
Humans are also non-deterministic code generators though. It can be possible that an LLM is more deterministic or consistent at building reliable code than a human.
Mathematicians use LLMs. Obviously, they don't trust an LLM to do math. But an LLM can help with formalizing a theorem and then finding a formal proof. That's usually a very tedious task, but LLMs are _already_ quite good at it. In the end you get a proof which gets checked by normal proof-checking software (not an LLM!), and which you can also inspect, break into parts, etc.
You really need to look into the details rather than dismiss wholesale ("It made a math error, so it's bad at math" is wrong).
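As a toy illustration of that workflow (the theorem and proof here are trivial placeholders; real uses involve much harder statements), an LLM might draft a Lean formalization like the following, and it's Lean's kernel, not the LLM, that certifies it:

```lean
-- An LLM might draft this formalization and proof, but the Lean
-- proof checker verifies it mechanically and independently, so no
-- trust in the LLM is required.
theorem add_comm_nat (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```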
Like other tech-disrupted crafts before this, think furniture making or farming, that's how it goes: from handmade craft, to mass-production factories (the last couple of decades), to fully automated production.
The craft was dying long before LLMs. Started in dotcom, ZIRP added some beatings, then LLMs are finishing the job.
This is fine because, as in furniture making, the true craftsmen will become even more valuable (overseeing farm automation, high-end handmade furniture, small organic farms), and the factory-worker masses (ZIRP-enabled tech workers) will move on to more fulfilling work.
That’s not how it goes for the worker. If you are a capitalist then it doesn’t matter, you own the means of production. The laborer, however, has to learn new skills, which take time and money. If your profession no longer exists, unless you have enough capital to retool/be a capitalist, then you will personally get poorer.
I'm not sure comparing artisanal software to woodworking or organic farming is possible.
With woodworking and farming you get physical goods as a result. Some John Smith who buys furniture can touch the nice cherry paneling and appreciate the joinery and grain. With farming, he can taste delicious organic tomatoes and cucumbers and make food with them.
Would this John Smith care at all about how some software is written, as long as it does what he wants and works reliably? I'm not sure.
Where do people find this optimism? I reckon when the software jobs fall, everything else will follow shortly after. Software is just the first target because it's what we know, and the manual stuff is a little harder for now. The "good news" is everyone might be in the same boat, so the system will have to adapt.
Software, and most STEM-based jobs, have a lot of determinism and verifiability, plus some way to reduce the cost of failure, so brute-force iteration can cover the rest. There is often "a correct answer". They've also never been truly disrupted until now, which makes them more vulnerable than almost any other job.
Most jobs don't have the same level of verification and/or repeatability. Some factors include:
* Physical constraints: Even for jobs with productive output, if the work is physical it will take a long time for AI, and more importantly energy density, to catch up. Robots have a while to go as well; in the end, human hands and your metabolism/energy density will be worth more than your brain/intelligence.
* Cost of failure/can't repeat: For things like building the cost of failure is high (e.g. disposal, cleanup, more resources, etc) -> even 70% of a "building bench" benchmark would be completely inadequate without low cost to repeat. Many jobs are also already largely automated but scaled (e.g. mining, manufacturing, etc) - they've already gone through the wave.
* Human need for its own sake: Other jobs cater not just for productive output, but for some human need where it hasn't been made more efficient ever (e.g. care jobs). There are jobs that a human is more effective in the medium term because the receiver needs it from a human.
No -> this just affects white-collar, STEM-based roles. Thinking we are all in it together is just another form of "cope", sadly. There's a rational reason why others are optimistic while we SWEs are now full of anxiety and dread.
From the people it doesn't affect, given their current place in many societies (nurses, builders, etc.), there will be little sympathy.
People have to stop talking like LLMs solved programming.
If you're someone with a background in Computer Science, you should know that we have formal languages for a reason, and that natural language is not as precise as a programming language.
But anyway, we're at peak AI hype; hitting the top of HN is worth more than a reasonable take. Reasonableness doesn't sell, after all.
So here we see yet another text about how the world of software has been solved by AI and being a developer is an artifact of the past.
Right? At least on HN, there's a critical mass of people loudly ignoring this these days, but no one has explained to me how replacing a formal language with an English-language-specialized chatbot, or even multiple independent chatbots (aka "an agent"), is a good tradeoff to make.
It's "good" from the standpoint of business achieving their objectives more quickly. That may not be what we think of as objectively good in some higher sense, but it's what matters most in terms of what actually happens in the world.
Does it really matter that English is not as precise if the agent can make a consistent and plausible guess what my intention is? And when it occasionally guesses incorrectly, I can always clarify.
You're right, of course, but you should consider that all formal language starts as an informal language idea in the mind of someone. Why shouldn't that "mind" be an LLM vs. a human?
I think mostly because an LLM is not a "mind". I'm sure there'll be an algorithm that could be considered a "mind" in the future, but present day an LLM is not it. Not yet.
This is in my opinion the greatest weakness of everything LLM related. If I care about the application I'm writing, and I believe I should if I bother doing it at all, it seems to me that I should want to be precise and concise at describing it. In a way, the code itself serves as a verification mechanism for my thoughts and whether I understand the domain sufficiently.
English or any other natural language can of course be concise enough, but when being brief they leave much to imagination. Adding verbosity allows for greater precision, but I think as well that that is what formal languages are for, just as you said.
Although, I think it's worth contemplating whether the modern programming languages/environments have been insufficient in other ways. Whether by being too verbose at times, whether the IDEs should be more like databases first and language parsers second, whether we could add recommendations using far simpler, but more strict patterns given a strongly typed language.
My current gripes are having auto imports STILL not working properly in most popular IDEs or an IDE not finding referenced entity from a file, if it's not currently open... LLMs sometimes help with that, but they are extremely slow in comparison to local cache resolution.
Long term I think more value will be in directly improving the above, but we shall see. AI will stay around too of course, but how much relevance it'll have in 10 years time is anybody's guess. I think it'll become a commodity, the bubble will burst and we'll only use it when sensible after a while. At least until the next generation of AI architecture will arrive.
I can relate coz once upon a time I enjoyed reading poetic code - which I later found to be impossible to modify or extend.
> Eventually your boss will start asking why you’re getting paid twice your zoomer colleagues’ salary to produce a tenth of the code.
And then I couldn’t relate because no one ever paid me for lines of code. And the hardest programming I ever did was when it took me 2 days to write 2 lines of C code, which did solve a big problem.
I abhorred the LOC success metric because we had to clean up after those who dumped their code diarrhea to fool those who thought every line of code is added value. Not to mention valuing LOC strictly makes you a junior programmer.
E.g. you have to know more to do the following in 2 lines (yes, you can):
```c
t = a;
a = b;
b = t;
```
According to LOC missionaries, these three lines are more expensive to write and show that you're a better programmer than someone using the XOR swap. But the three-line version is actually more expensive to run than the XOR swap, and it's more expensive to hire the person who can write the XOR swap. (Not endorsing clever/cryptic code, just making a point about LOC.)
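For reference, here's a minimal sketch of the XOR swap being alluded to (a classic trick; note that it breaks if both operands alias the same variable, and on modern CPUs the temp-variable version is often just as fast):

```c
#include <stdio.h>

int main(void) {
    int a = 3, b = 7;
    /* XOR swap: exchange a and b without a temporary variable. */
    a ^= b; /* a == old_a ^ old_b */
    b ^= a; /* b == old_b ^ old_a ^ old_b == old_a */
    a ^= b; /* a == old_a ^ old_b ^ old_a == old_b */
    printf("a=%d b=%d\n", a, b); /* prints: a=7 b=3 */
    return 0;
}
```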
So, if the LOC missionaries are out of a job because of LLMs, I will probably celebrate.
A boss who counts LOC will be fired or will bankrupt the company/team.
A friend of mine who never lost touch with coding in his 30+ year career recently left FAANG and told me it's the best time to realize his dream of building a startup. And it's not because AI can write the code for him; it would take him about 12 months to build the system from the ground up manually. It's the best time because nobody can replicate what he's doing by using AI alone. His analogy was: "it's like everyone is becoming a really good cyclist with electric bikes, even enthusiast cyclists, but you secretly train for the Tour de France".
I fall into the demographic discussed in the article, but I've approached this with as much pragmatism as I can muster. I view this as a tool to help improve me as a developer. Sure, there will be those of us who do not stay ahead of the curve (is that even possible?) and get swallowed up, but technology has had this effect on many careers in the past. They just change into something different and sometimes better. It's about being willing to change with it.
> So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you, because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike. Eventually your boss will start asking why you’re getting paid twice your zoomer colleagues’ salary to produce a tenth of the code.
I might be mistaken, but I bet they said the same when Visual Basic came out.
The blacksmith analogy is poetic but misleading. Blacksmithing was replaced by a process that needed no blacksmith at all. What's happening with code is closer to what synthesizers did to music — the instrument changed, the craft didn't die.
Musicians mourned synthesizers. Illustrators mourned Photoshop. Typesetters mourned desktop publishing. In every case the people who thrived weren't the ones who refused the new tool or the ones who blindly adopted it. They were the ones who understood that the tool absorbed the mechanical layer while the taste layer became more valuable, not less.
The real shift isn't from hand-coding to AI-coding. It's from "I express intent through syntax" to "I express intent through constraints and review." That's still judgment. Still craft. Just a different substrate.
What we're actually mourning is the loss of effort as a signal of quality. When anyone can generate working code, the differentiator moves upstream to architecture, to knowing what to build, to understanding why one approach fails at scale and another doesn't. Those are harder skills, not easier ones.
I'm in my forties and it's game over for my career. The grey in my hair means I never get past the first round. The history on my resume means I'm lucky to get a round at all. The GPTs and Claude have fundamentally changed how I view work, and frankly, I'm over it.
I’m in consulting now and it’s all the same crap. Enterprises want to “unleash AI” so they can fire people. Maximize profits. My nephews who are just starting their careers are blindly using these tools and accepting the PR if it builds. Not if it’s correct.
I’m in awe of what it can do but I also am not impressed with the quality of how it does it.
I’m fortunate to not have any debt so I can float until the world either wises up or the winds of change push me in a new direction.
I liked the satisfaction of building something “right” that was also “useful”. The current state of Opus and Codex can only pretend to do the latter.
There's a commercial building under construction next to my office. I look down on the construction site, and those strapping young men are digging with their big excavators they've been using for years and taking away the dirt with truck and trailer.
Why use a spade? Even those construction workers use the right sized tools. They ain't stupid.
You'd sometimes discover a city communication line destroyed in the process; or the dirt hauled on top of a hospital, killing hundreds of orphaned kids with cancer; or kittens mixed into the concrete instead of cement.
And since you clicked "agree" on that Anthropic EULA, you can't sue them for it, so you now hire 5 construction workers to constantly oversee the work.
It's still net positive... for now at least... But far from being "without any people". And it'll likely remain this way for a long time.
An LLM would be the equivalent of a monstrous moving castle with a million robotic hands that can somehow extract piles of dirt from the earth while doing a lot of collateral damage to our planet.
This is the right take IMO, so thanks for a balanced comment.
I would add a nuance from the OP's perspective, sort of: a close friend of mine works in construction and often comments on how different projects can be. On some, everyone in the entire building supply chain is really inspired to work on an interesting project, because of either its usefulness or its craftsmanship (the two of which are related); on others, everyone's just trying to finish the project as cheaply and quickly as possible.
It’s not that the latter hasn’t existed in tech, but it does appear that there is a way to use LLMs to do more of the latter. It’s not “the end of a craft”, but without a breakthrough (and something to check the profit incentive) it’s also not a path to utopia (like other comments seem to be implying)
Craftsmanship doesn’t die, it evolves, but the space in between can be a bit exhausting as markets fail to understand the difference at first.
I think OP is coming at this more from an artisan angle. Perhaps there were shoveler artisans who took pride in the angle of their dirt-shoveling. Those people perhaps do lament the advent of excavators. But presumably the population who find code beautiful and the population who find shoveling an art are of different sizes.
For years developers have worshipped at the altar of innovation, citing their role in decimating many old industries and crafts as just what we do. Now it's come for you.
I highly doubt this specific person has looked down on others for also mourning the death of their craft. You can't rightly call someone a hypocrite and cite "developers" as your source.
> We’ll miss the feeling of holding code in our hands
I agree. I started feeling this a few months ago, when I was only writing the architecture and abstractions and letting AI fill in the gaps. It seems that in the next few months it could probably do more than that. But is it so bad? I agree that I can't really mold an entire pot by hand any more. But if you ask AI to do it, it will create a pot with cracks in it, and it will be your job to either plaster them over or fill those cracks with gold.
I feel coding is going to be similar to kintsugi after this is all over.
Sort of ironic. My dad coded on punch cards and hated it, hated the physicality of it. Now he super loves AI, having left the field 20 years ago due to language fatigue.
I've been hearing "the LLM can write better code than a human, and if you don't believe me, wait six months" for years now. Such predictions haven't been true before and I don't believe they are true now.
This makes me think about the craftsmen whose careers vanished or were transformed through the ages by industries, machines, and so on. They did not have online voices with which to write thousands of blog posts every day, nor people who could read their woes online.
Maybe not 1000s and not online, but we do have journals, articles, essays, and so on written by such people throughout history. And they had similar sentiments.
> Someday years from now we will look back on the era when we were the last generation to code by hand. We’ll laugh and explain to our grandkids how silly it was that we typed out JavaScript syntax with our fingers. But secretly we’ll miss it.
Why will I miss it? I will be coding my small scripts and tools and hobby projects by hand because there is no deadline attached to them. Hell, I will also tag them as "bespoke hand-crafted free range artisanal" code. There will be a whole hipster category of code that is written as such. And people will enjoy it as they enjoy vinyl records now. Many things would have changed by then but my heart will still be a programmer's heart.
from other sources: 6-12 months, by end of 2025, ChatGPT 7.
It's concern trolling and astroturfing at its best.
One camp of fellow coders who say their productivity grew 100x but we are all doomed; another camp of AI enthusiasts who gained the ability to deliver products and truly believe in their newly acquired superiority.
It's all either true or false, but if in six months it becomes true, we'll know it, each one of us.
However, if it's all BS, and in six months there's a Windows 95 written by an LLM but real code still requires organic intelligence, there won't be any accountability, and that's sad.
I don't mourn coding for itself, since I've always kinda disliked that side of my work (numerical software, largely).
What I do mourn is the reliability. We're in this weird limbo where it's like rolling a die for every piece of work. If it comes up 1-5, I would have been better off implementing it myself. If it comes up 6, it'll get it done orders of magnitude faster than doing it by hand. Since the overall speedup is worthwhile, I have to try it every time, even if most of the time it fails. And of course it's a moving target, so I have to keep trying the things that failed yesterday because today's models are more capable.
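To make that "worthwhile overall" claim concrete, here's a back-of-the-envelope version with made-up numbers (the one-in-six hit rate from the die metaphor; all times are hypothetical):

$$\mathbb{E}[\text{try LLM first}] = p\,t_{\text{hit}} + (1-p)\,(t_{\text{attempt}} + t_{\text{manual}})$$

With $p = 1/6$, $t_{\text{hit}} = 5$ min, $t_{\text{attempt}} = 15$ min, and $t_{\text{manual}} = 120$ min, the expected cost of always trying first is $\tfrac{1}{6}\cdot 5 + \tfrac{5}{6}\cdot(15 + 120) \approx 113$ minutes, versus $120$ minutes for going straight to manual work. So trying every time wins on average even though five rolls out of six are wasted, and the margin shifts each time the models improve.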
I get where this is coming from. But at the same time, AI/LLMs are such an exciting development. As in, "maybe I was wrong and the singularity wasn't bullshit". If nothing else, it's an interesting transition to live through.
I agree, in the sense that any major change is "interesting". Doesn't mean they are all good though, many major changes have been bad historically. The overall net effect has been generally good, but you never know with any individual change.
> We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM.
I'll miss it, not because the activity becomes obsolete, but because it's much more interesting than sitting till 2am trying to convince an LLM to find and fix the bug for me.
We'll still be sitting till 2am.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
I've been hearing this for the last two years. And yet LLMs, given an abstract description of the problem, still write worse code than I do.
Or did you mean type code? Because in that case, yes, I'd agree. They type better.
I am not confident that AI tooling can diagnose or fix this kind of bug. I’ve pointed Claude Opus at bugs that puzzle me (with only one code base involved) and, so far, it has only introduced more bugs in other places.
I'm not saying it can btw. I'm arguing for the opposite.
And for the record, I'm impressed at the issues it can diagnose. Able to query multiple data sources in parallel and detect anomalies, it can sometimes find the root cause of an incident in a distributed system in a matter of minutes. I have many examples of LLMs finding bugs in existing code when tasked with writing unit tests (usually around edge cases).
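A toy sketch of that pattern (hypothetical code, not from any of those real cases): the generated test probes an input shorter than the averaging window, which the happy-path code silently mishandles.

```python
# Toy illustration (hypothetical, not from a real codebase): the kind of
# edge-case bug that an LLM-generated unit test tends to surface.

def moving_average(values, window):
    """Average of the last `window` values."""
    return sum(values[-window:]) / window  # bug: wrong when len(values) < window


def test_moving_average_short_input():
    # Only two values exist, but we still divide by 3.
    # Expected 6.0, actual 4.0 -- the test fails and exposes the bug.
    assert moving_average([4.0, 8.0], window=3) == 6.0
```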
But complex issues that stem from an ambiguous domain are often out of reach. By the time I'm able to convey to an LLM all the intricacies of the domain in plain English, I'm usually able to find the issue myself.
And that's my point: I'd be more eager to run the code under a debugger till 2am than to push an LLM to debug for me (which can easily take till 2am too, but with less confidence that I'd succeed at all).
The acceleration of AI has thrown into sharp relief that we have long lumped all sorts of highly distinct practices under this giant umbrella called "coding". I use CC extensively, and yet I still find myself constantly editing by hand. Turns out CC is really bad at writing kubernetes operators. I'd bet it's equally bad at things like database engines or most cutting edge systems design problems. Maybe it will get better at these specific things with time, but it seems like there will always be a cutting edge that requires plenty of human thought to get right. But if you're doing something that's basically already been done thousands of times in slightly different ways, CC will totally do it with 95% reliability. I'm ok with that.
It's also important to step back and realize that it goes way beyond coding. Coding is just the deepest tooth of the jagged frontier. In 3 years there will be blog posts lamenting the "death of law firms" and the "death of telemedicine". Maybe in 10 years it will be the death of everything. We're all in the same boat, and this boat is taking us to a world where everyone is more empowered, not less. But still, there will be that cutting edge in any field that will require real ingenuity to push forward.
I think there's clearly a difference in opinion based on what you work on. Some people were working on things that pre-CC models also couldn't handle and then CC could, and it changed their opinions quickly. I expect (but cannot prove of course) that the same will happen with the area you are describing. And then your opinion may change.
I expect it to, eventually. But then the cutting edge will have simply moved to something else.
I agree that it's very destabilizing. It's sort of like inflation for expertise. You spend all this time and effort saving up expertise, and then those savings rapidly lose value. At the same time, your ability to acquire new expertise has accelerated (because LLMs are often excellent private tutors), which is analogous to an inflation-adjusted wage increase.
There are a ton of variables. Will hallucinations ever become negligible? My money is on "no" as long as the architecture is basically just transformers. How will compiling training data evolve with time? My money is on "it will get more expensive". How will legislators react? I sure hope not by suppressing competition. As long as markets and VC are functioning properly, it should only become easier to become a founder, so outsized corporate profits will be harder to lock down.
To the people who are against AI programming, honest question: why do you not program in assembly? Can you really say "you" "programmed" anything at all if a compiler wrote your binaries?
This is a 100% honest question. Because whatever your justification is, it can probably be applied to AI programmers using temperature 0.0 as well, just one abstraction level higher.
I'm 100% honestly looking forward to finding a single justification that would not fit both scenarios.
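For concreteness, here's a minimal sketch of what "temperature 0.0" means in practice, assuming the OpenAI Python SDK (the model name and prompt are placeholders):

```python
# Minimal sketch: asking an LLM for greedy, (near-)deterministic output
# by setting temperature to 0. Assumes the OpenAI Python SDK; the model
# name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user",
               "content": "Write a function that reverses a string."}],
    temperature=0.0,  # always take the most probable next token
)
print(response.choices[0].message.content)
```

Worth noting: even at temperature 0.0, hosted models are not guaranteed to produce bit-identical output across runs (batching and floating-point effects introduce noise), which is part of why the determinism debate below never quite settles.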
But I've seen this conversation on HN already 100 times.
The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.
I personally don't agree at all, in the sense I don't think that matters. I've run into compiler bugs, and more library bugs than I can count. The real world is just as messy as LLMs are, and you still need the same testing strategies to guard against errors. Development is always a slightly stochastic process of writing stuff that you eventually get to work on your machine, and then fixing all the bugs that get revealed once it starts running on other people's machines in the wild. LLMs don't write perfect code, and neither do you. Both require iteration and testing.
> The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.
I don't see this as a frequent answer tbh, but I do frequently see claims that this is the critique.
I wrote much more here[0], but honestly I'm on the side of Dijkstra, and it doesn't matter whether the LLM is deterministic or probabilistic:
It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.
- Dijkstra: On the foolishness of "natural language programming"
His argument has nothing to do with deterministic systems[1] and everything to do with the precision of the language. It comes down to "we invented symbolic languages for a good reason".
[1] If we want to be more pedantic, we can codify his argument more simply in mathematical language, though even that takes some interpretation: natural language naturally imposes a one-to-many relationship when processing information.
I'm not particularly against AI programming, but I don't think these two things are equivalent. A compiler translates code according to its specification in a deterministic way: the same compiler produces the same output from the same code; it is all completely controlled. AI is not at all deterministic: temperature is built into LLMs, and on top of that there is the lack of specificity in prompts and our spoken languages. The difference in control is significant enough that I don't put compilers and AI coding agents in the same category, even though both take some text and produce some other text/machine code.
A chef can cook a steak better than a robo-jet-toaster, even though neither of them has raised the cow.
It's not about having abstraction levels above or below (BTW, in 21st century CPUs, the machine code itself is an abstraction over much more complex CPU internals).
It's about writing more correct, efficient, elegant, and maintainable code at whichever abstraction layer you choose.
AI still writes messier, sloppier, buggier, more redundant code than a good programmer can when they care about the craft of writing code.
The end result is worse to those who care about the quality of code.
We mourn, because the quality we paid so much attention to is becoming unimportant compared to the sheer quantity of throwaway code that can be AI-generated.
We're fine dining chefs losing to factory-produced junk food.
Even if you are not coding in assembly, you still need to think. Replace the LLM with a smart programmer: I don't want the other guy to do all the thinking for me. It's much better if it's a collaborative process, even if the other guy could have coded the perfect solution without my help. Otherwise, why am I even in the picture?
I know how to review code without looking at the corresponding assembly and have high confidence in the behavior of the final binary. I can't quite say the same for a prompt without looking at the generated code, even with temperature 0. The difference is explainability, not determinism.
There is no requirement for compilers to be deterministic. The requirement is that a compiler produces something that is valid interpretation of the program according to the language specification, but unspecified details (like specific ordering of instructions in the resulting code) could in principle be chosen nondeterministically and be different in separate executions of the compiler.
For me, the whole goal is to achieve Understanding: understanding a complex system, which is the computer and how it works. The beauty of this Understanding is what drives me.
When I write a program, I understand the architecture of the computer, I understand the assembly, I understand the compiler, and I understand the code. There are things that I don't understand, and as I push to understand them, I am rewarded by being able to do more things. In other words, Understanding is both beautiful and incentivized.
When making something with an LLM, I am disincentivized from actually understanding what is going on, because understanding is very slow, and the whole point of using AI is speed. The only time when I need to really understand something is when something goes wrong, and as the tool improves, this need will shrink. In the normal and intended usage, I only need to express a desire to achieve a result. Now, I can push against the incentives of the system. But for one, most people will not do that at all; and for two, the tools we use inevitably shape us. I don't like the shape into which these tools are forming me - the shape of an incurious, dull, impotent person who can only ask for someone else to make something happen for me. Remember, The Medium Is The Message, and the Medium here is, Ask, and ye shall receive.
The fact that AI use leads to a reduction in Understanding is not only obvious, but also studies have shown the same. People who can't see this are refusing to acknowledge the obvious, in my opinion. They wouldn't disagree that having someone else do your homework for you would mean that you didn't learn anything. But somehow when an LLM tool enters the picture, it's different. They're a manager now instead of a lowly worker. The problem with this thinking is that, in your example, moving from say Assembly to C automates tedium to allow us to reason on a higher level. But LLMs are automating reasoning itself. There is no higher level to move to. The reasoning you do now while using AI is merely a temporary deficiency in the tool. It's not likely that you or I are the .01% of people who can create something truly novel that is not already sufficiently compressed into the model. So enjoy that bit of reasoning while you can, o thou Man of the Gaps.
They say that writing is God's way of showing you how sloppy your thinking is. AI tools discourage one from writing. They encourage us to prompt, read, and critique. But this does not result in the same Understanding as writing does. And so our thinking will be, become, and remain vapid, sloppy, inarticulate, invalid, impotent. Welcome to the future.
There's a balance of levels of abstraction. Abstraction is a great thing. Abstraction can make your programs faster, more flexible, and more easy to understand. But abstraction can also make your programs slower, more brittle, and incomprehensible.
The point of code is to write specification. That is what code is. The whole reason we use a pedantic and somewhat cryptic schema is that natural language is too abstract. This is the exact reason we created math. It really is even the same reason we created things like "legalese".
Seriously, just try a simple exercise and be adversarial to yourself. Describe how to do something and try to find loopholes. Malicious compliance. It's hard to defend against, and writing that spec becomes extremely verbose, right? Doesn't this actually start to become easier by using coding techniques? Strong definitions? Have we all forgotten the old saying "a computer does exactly what you tell it to do, not what you intend to tell it to do"? Vibe coding only adds a level of abstraction to that: "a computer does what it 'thinks' you are telling it to do, not what you intend to tell it to do". Be honest with yourself: which paradigm is easier to debug?
Natural language is awesome because the abstraction really compresses concepts, but it requires inference of the listener. It requires you to determine what the speaker intends to say rather than what the speaker actually says.
Without that you'd have to be pedantic to even describe something as mundane as making a sandwich[1]. But inference also leads to misunderstandings and frankly, that is a major factor of why we talk past one another when talking on large global communication systems. Have you never experienced culture shock? Never experienced where someone misinterprets you and you realize that their interpretation was entirely reasonable?[2] Doesn't this knowledge also help resolve misunderstandings as you take a step back and recheck assumptions about these inferences?
> using temperature 0.0
Because, as you should be able to infer from everything I've said above, the problem isn't actually about randomness in the system. Making the system deterministic has only one realistic outcome: a programming language. You're still left with the computer doing what you tell it to do, just expressed more abstractly. You've only turned it into the PB&J problem[1], and frankly, I'd rather write code than instructions like the ones those kids have to write. Compared to the natural language the kids are using, code is more concise, easier to understand, more robust, and more flexible.
I really think Dijkstra explains things well[0]. (I really do encourage reading the entire thing. It is short and worth the 2 minutes. His remark at the end is especially relevant in our modern world where it is so easy to misunderstand one another...)
The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.
Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve.
I don't like this particular phrase, as it suggests that LLMs are just replicating code. As far as I understand, LLMs can also create new code and algorithms (some better than others). When we think of them as copy-paste machines, we judge them unfairly and underestimate their capabilities.
So funny, because "you're still riding around on a fixie bike" is literally what I do. And I also take a plane when I need to fly.
In that analogy that would be picking the right library/framework/service vs. vibe coding it. I do NOT need to write all the code to provide value.
Knowing which tool to use for the task is precisely what requires honing a skill and requires judgement. It's not about how many kilometers you manage to cover.
I suspect my comment will not be well received; however, I notice in myself that I've passed the event horizon of being a believer, am past the honeymoon period, and am beginning to think about engineering.
My headspace is now firmly in "great, I'm beginning to understand the properties and affordances of this new medium; how do I maximise my value from it?" Hopefully there's more than a few people who share this perspective. I'd love to talk with you about the challenges you experience; I know I have mine, and maybe we have answers to each other's problems :)
I assume that the current set of properties can change, however it seems like some things are going to be easier than others, for example multi modal reasoning still seems to be a challenge and I'm trying to work out if that's just hard to solve and will take a while or if we're not far from a good solution
I thought I'd miss all the typing and syntax, but I really don't. Everyone has their own relationship with coding, but for me, I get satisfaction out of the end product and putting it in front of someone. To the extent that I cared about the code, it mainly had to do with how much it allowed the end product to shine.
Yes, there's clearly a big split in the community where perhaps ~50% are like OP and the other ~50% are like you. But I think we should still respect the views of the other side and try to empathize.
The majority of the code currently running in production for my company was written 5+ years ago. This was all "hand-written" and much lower quality than the AI-generated code that I am generating and deploying these days.
Yet I feel much more connected with my old code. I really enjoyed actually writing all that code even though it wasn't the best.
If AI tools had existed 5 years ago when I first started working on this codebase, obviously the code quality would've been much higher. However, I really loved writing my old code, and if given the same opportunity to start over, I would want to rewrite this code myself all over again.
I didn't come to IT for the money; back in the day it wasn't as well paid as today. Nevertheless, if this craft were very poorly paid, I probably wouldn't have chosen this profession either. And I assume many people here wouldn't have as well, unless you are already semi-retired or debt-free.
I mourn a little that in 20 years possibly 50% of software jobs will be axed, and that unless you are an elite/celebrity dev, salaries will stagnate. I mourn that in the future, upward mobility into the upper middle class will be harder without trying to be an entrepreneur.
I'm probably a minority, but I've never loved dealing with syntax. The code itself always felt like a hindrance, a reminder that my brain was slowed down by my fingers. I get it though: it was tactile and it completed the loop. I felt it was essential for learning, even if you eventually reach a point where it slows you down the more senior you get.
AI has a ways to go before it's senior level, if it ever reaches that level, but I do feel bad for the juniors who survive this and will never have the opportunity to sculpt code by hand.
I feel like we are long into the twilight of mini blogs and personal sites. It's like people trying to protect automotive jobs: the vast majority were already lost.
It's not even always a more efficient form of labour. I've experienced many scenarios with AI where prompting it to do the right thing takes longer and requires writing/reading more text compared to writing the code myself.
Some years ago I was at the Burger King near the cable car turntable at Powell and Market St in San Francisco. Some of the homeless people were talking about the days when they'd been printers. Press operators or Linotype operators. Jobs that had been secure for a century were just - gone.
That's the future for maybe half of programmers.
Remember, it's only been three years since ChatGPT. This is just getting started.
> because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike
Sure, maybe it takes me a little while to ride across town on my bike, but I can reliably get there and I understand every aspect of the road to my destination. The bazooka-powered jetpack might get me there in seconds, but it also might fly me across state lines, or to Antarctica, or the moon first, belching out clouds of toxic gas along the way.
I understand the sentiment and I've been involved in software engineering in various roles for the last 25+ years. The thing that gives me hope is that never once in that time has the problem ever been that we didn't have more work to do.
It's not like all of a sudden I'm working 2-3 hours a day. I'm just getting a lot more done.
One other helpful frame: I consider LLMs simply to be very flexible, high-level 'language compilers'. We've moved up the Abstraction Chain ever since we invented FORTRAN and COBOL (and LISP) instead of using assembly language.
We're 'simply' moving up the abstraction hierarchy again. Good!
You know who else mourned the loss of craft? People that don't like PHP and Wordpress because they lower the barrier to entry to creating useful stuff while also leaving around a fair amount of cruft and problems that the people that use them don't understand how to manage.
Like iambateman said: for me it was never about code. Code was a means to an end, and it didn't stop at code. I'm the kind of software engineer who learned frontends, systems, databases, ETLs, etc. -- whatever was demanded of me to produce something useful, I learned and did it. We're now calling that a "product engineer". The "craft" for me was in creating useful things that were reliable and efficient, not particularly how I styled lines, braces, and brackets. I still do that in the age of AI.
All of this emotional spillage feels for naught. The industry is changing as it always has. The only constant I've ever experienced in this industry is change. I realized long ago that the day I am no longer comfortable with change is my best signal that this industry is no longer for me.
I think it's a bit different when you can opt out. If you didn't want to use PHP you didn't have to. But it's getting increasingly hard to opt out of AI.
The death of a means to an end is the birth of an end itself.
When cameras became mainstream, realism in painting went out of fashion, but this was liberating in a way as it made room for many other visual art styles like Impressionism. The future of programming/computing is going to be interesting.
We have CNC machines, and we still have sculptors.
Mechanising the production of code is a good thing. And crafting code as art is a good thing. It is a sign of a wider trend that we feel the need to treat these things as adversaries.
I look forward to the code-as-art countermovement. It's gonna be quite something.
Great post. Super sad state of affairs but we move on and learn new things. Programming was always a tool and now the tool has changed from something that required skill and understanding to complaining to a neural net. Just have to focus on the problem being solved more.
I'll believe it when I start seeing examples of good and useful software being created with LLMs or some increase in software quality. So far it's just AI doom posting, hype bloggers that haven't shipped anything, anecdotes without evidence, increase in CVEs, increase in outages, and degraded software quality.
It would be helpful if you could define “useful” in this context.
I’ve built a number of team-specific tools with LLM agents over the past year that save each of us tens of hours a month.
They don’t scale beyond me and my six coworkers, and were never designed to, but they solve challenges we’d previously worked through manually and allow us to focus on more important tasks.
The code may be non-optimal and won’t become the base of a new startup. I’m fine with that.
It’s also worth noting that your evidence list (increased CVEs, outages, degraded quality) is exclusively about what happens when LLMs are dropped into existing development workflows. That’s a real concern, but it’s a different conversation from whether LLMs create useful software.
My tools weren’t degraded versions of something an engineer would have built better. They’re net-new capability that was never going to get engineering resources in the first place. The counterfactual in my case isn’t “worse software”—it’s “no software.”
It really shouldn't be this hard to just provide one piece of evidence. Are anecdotes of toy internal greenfield projects that could probably be built with a drag-and-drop no-code editor really the best this LLM revolution has to offer?
At my work, ~90% of code is now LLM-generated. It's not "new" software in the sense that you're describing, but it's new features, bug fixes, and so on in the software that we all work on. (We are also working on something that we can hopefully open source later this year that is close to 100% LLM-generated, and as someone who has been reviewing most of the code, I can say it is quite high quality.)
Well, on the surface it may seem like there's nothing of value being created, but I can assure you every company from seed stage to unicorn is heavily using Claude Code, Cursor, and the like to produce software. At this point, most software you touch has been modified and enhanced with the use of LLMs. The difference in pace of shipping with and without AI assistance is staggering.
Coding is an abstraction. Your CPU knows nothing of type safety, bloom filters, dependencies, or code reuse.
Mourning the passing of one form of abstraction for another is understandable, but somewhat akin to bemoaning the passing of punch card programming. Sure, why not.
Your entire brain's model of the world is an abstraction over its sensory inputs. By this logic we might as well say you shouldn't mourn anything since all it means is a minor difference in the sensory inputs your brain receives.
2. The tools still need a lot of direction; I still fight Claude with Opus to do basic things, and the best experiences are when I provide very specific prompts.
3. Being idealistic in a capitalist system where you have to pay your bills every month is something I could do when my parents paid my bills.
These apocalyptic posts about how everything is shit really don't match my reality at all. I use these tools every day to be more productive and improve my code, but they are nowhere close to doing my actual job, which is figuring out WHAT to do. How to do it is mostly irrelevant; once I get to that point I already know what needs to be done, and it doesn't matter if it is me or Opus producing the code.
I feel like a lot of comments here are missing the point. I think the article does a fairly good job neither venerating nor demonizing AI, but instead just presenting it as the reality of the situation, and that reality means that the craft of programming and engineering is fundamentally different than it was just a few years ago.
As an (ex-)programmer in his late 40s, I couldn't agree more. I'm someone who can be detail-oriented (but, I think also with a mind toward practicality) to the point of obsession, and I think this trait served me extremely well for nearly 25 years in my profession. I no longer think that is the case. And I think this is true for a lot of developers - they liked to stress and obsess over the details of "authorship", but now that programming is veering much more towards "editor", they just don't find the day-to-day work nearly as satisfying. And, at least for me, I believe this while not thinking the change to using generative AI is "bad", but just that it's changed the fundamentals of the profession, and that when something dies it's fine to mourn it.
If anything, I'm extremely lucky that my timing was such that I was able to do good work in a relatively lucrative career where my natural talents were an asset for nearly a quarter of a century. I don't feel that is currently the case regarding programming, so I'm fortunate enough to be able to leave the profession and go into violin making, where my obsession with detail and craft is again a huge asset.
This inspired me to write down some scattered thoughts I have on this [0]. tl;dr: I firmly believe we kicked the can down the road and now it's too late.
Things programmers forgot to do before AI started writing a bunch of software:
1. Learn how to review code
Some tools exist, some of them are even quite good! Large organizations have tried to build best practices to balance urgency with correctness, but programmers are still quite bad at this.
2. Compare program A vs program B
In a similar vein, we simply do not know how to measure one program vs another. Or a function against another. Or even one variableName vs another_variable_name.
"It depends" Depends on what? We never sorted this out.
3. Talk to other professions
We are not the first profession forced to coordinate with Automation as a coworker, and we certainly won't be the last. We're not even the first knowledge workers to do so.
How did other laborers deal with this? We don't know because we were busy making websites.
Software was never governed by a single standard. It has always been closer to architecture or design, with competing schools judging quality differently. AI tools do not remove judgment; they make it unavoidable. I sit in a minimalist, almost brutalist school: fewer layers, obvious structure, and software that delivers results even if it is not fashionable.
I am feeling this loss. I spent most of my career scrupulously avoiding leadership positions, because what I really like is the simple joy of making things with my own two hands.
Many are calling people like me Luddites for mourning this, and I am prepared to wear that label with pride. I own multiple looms and a spinning wheel, so I think I may be in a better position to speculate on how the Luddites felt than most people are nowadays.
And what I see is that the economic realities are what they are - like what happened to cottage industry textile work, making software by hand is no longer the economical option. Or at least, soon enough it won’t be. I can fret about deskilling all I like, but it seems that soon enough these skills won’t be particularly valuable except as a form of entertainment.
Perhaps the coding agents won’t be able to make certain things or use certain techniques. That was the case for textile manufacturing equipment, too. If so, then the world at large will simply learn to live without. The techniques will live on, of course, but their practical value will be as entertainment for enthusiasts and a way for them to recognize one another when they see it in each other’s work.
It’s not a terrible future, I suppose, in a long enough view. The world will move on, just like it did after the Industrial Revolution. But, perhaps also like the Industrial Revolution and other similar points in history, not until after we get through another period where a small cadre of wealthy elites who own and control this new equipment use that power to usher in a new era of neofeudalism. Hopefully this time they won’t start quite so many wars while they enjoy their power trips.
To me, it’s super exciting to play ping pong with ideas up until I arrive at an architecture and interfaces that I am fine with.
My whole life I have been reading other people’s code to accumulate best practices and improve myself. While a lot of developers start with reading documentation, I have always started with reading code.
And where I was previously using the GitHub Code Search to eat up as much example code as I could, I am now using LLMs to speed the whole process up. Enormously. I for one enjoy using it.
That said, I have been in the industry for more than 15 years. And all companies I have been at are full of data silos, tribal knowledge about processes and organically grown infrastructure, that requires careful changes to not break systems you didn’t even know about.
Actually most of my time isn’t put into software development at all. It’s about trying to know the users and colleagues I work with, understand their background and understand how my software supports them in their day to day job.
I think LLMs are very, very impressive, but they have a long way to go to reach empathy.
About 20 years ago I realised the extreme immaturity of the IT field. The market was fragmented, and you could literally become a billionaire from your basement (80's), from an office (90's), etc. It was at the stage of the car industry at the beginning of the past century. Since then the IT market has matured so much that a handful of companies formed an oligopoly and are thriving with no possible threat from isolated developers. In the coming years the startup landscape will be dead, everyone will be bought up, and even computer devices will become sealed boxes with controlled buttons. Having a browser open might be a hack.
Two years ago I decided to give up my career as an industry researcher to pursue a tenure-track professor position at a community college. One of the reasons I changed careers is because I felt frustrated with how research at my company changed from being more self-directed and driven by longer-term goals to being directed by upper management with demands for more immediate productization.
I feel generative AI is being imposed onto society. While it is a time-saving tool for many applications, I also think there are many domains where generative AI needs to be evaluated much more cautiously. However, there seems to be relentless pressure to “move fast and break things,” to adopt technology due to its initial labor-saving benefits without fully evaluating its drawbacks. That’s why I feel generative AI is an imposition.
I also resent the power and control that Big Tech has over society and politics, especially in America where I live. I remember when Google was about indexing the Web, and I first used Facebook when it was a social networking site for college students. These companies became successful because they provided useful services to people. Unfortunately, once these companies gained our trust and became immensely wealthy, they started exploiting their wealth and power. I will never forget how so many Big Tech leaders sat at Trump’s second inauguration, some of whom got better seats than Trump’s own wife and children. I highly resent OpenAI’s cornering of the raw wafer market and the subsequent exorbitant hikes in RAM and SSD prices.
Honestly, I have less of an issue with large language models themselves and more of an issue with how a tiny handful of powerful people get to dictate the terms and conditions of computing for society. I’m a kid who grew up during the personal computing revolution, when computation became available to the general public. I fell for the “computers for the rest of us,” “information at your fingertips” lines. I wanted to make a difference in the world through computing, which is why I pursued a research career and why I teach computer science.
I’ve also sat and watched research industry-wide becoming increasingly driven by short-term business goals rather than by long-term visions driven by the researchers themselves. I’ve seen how “publish-and-perish” became the norm in academia, and I also saw DOGE’s ruthless cuts in research funding. I’ve seen how Big Tech won the hearts and minds of people, only for it to leverage its newfound power and wealth to exploit the very people who made Big Tech powerful and wealthy.
The tech industry has changed, and not for the better. This is what I mourn.
It's not just tech, it's everything. This is an existential crisis, because we have rolled back almost two centuries. We are handing the keys to the kingdom to these sociopaths, and we are thanking them for it. They don't even have the decency to admit they really just want to use us as numbers; this has always been the case since the industrial revolution. Dozens of generations worldwide have toiled and suffered collectively to start creating life-changing technology, and these bloodsucking vampires who can't quench their thirst just live in their own reality, and it doesn't include the rest of us. It's really been the same problem for ages, but now they really seem to have won for the last time.
I've noticed a steep decline in software quality as a consumer since Covid, but especially since ChatGPT came out.
That trend hasn't reversed. Whether it's apps that hog too much memory, games that use too much storage, unresponsiveness, or just plain cryptic error messages, everything feels more fragile than it used to be.
Perhaps it's the growing pains of LLMs: a horde of junior programmers pushing stuff to production that the seniors were too "old fashioned" to notice.
To me this sentiment is silly. Programming was never about the act of writing, but about making the computer do something. Now we can ask computers to help us write instructions for them. Great, if you ask me. And no, your job is not going away, because human maintainers will always be required to review changes, communicate with stakeholders, and provide a vision for the project. I have yet to see a chatbot that can REDUCE entropy inside a codebase rather than continuously increase it with more and more slop.
Stop acting like software engineering is dead or something. Even if it is, we will still code by hand when we are poor later in this bubble. The only thing I would refuse is to pay any money to Anthropic & OpenAI. I would literally invest in Chinese labs at this point.
It was pretty clear to me after interacting with the first popular ChatGPT version around end of 2022 that all knowledge jobs will be replaced sooner or later. I don’t think coding is somehow special. What we have right now is an intermediate stage where “taste” still matters, but this won’t last forever.
I also believe that when all knowledge jobs are replaced, something fundamental needs to change in society. Trying to anticipate and prepare for that right now is premature.
It makes me sad to read posts like this. If it is a necessary step for you on the journey from denial to acceptance to embracing the new state of the world, then sure, take your time.
But software engineering is the only industry that is built on the notion of rapid change, constant learning, and bootstrapping ourselves to new levels of abstraction so that we don't repeat ourselves and make each next step even more powerful.
Just yesterday we were pair programming with a talented junior AI developer. Today we are treating them as senior ones and can work with several in parallel. Very soon your job will not be pair programming and peer reviewing at all, but teaching a team of specialized coworkers to work on your project. In a year or two we will be assembling factories of such agents that will handle the process from taking your requirements to delivering and maintaining complex software. Our jobs are going to change many more times and much more often than ever.
And yet there will still be people finding solace in hand-crafting their tools, or finding novel algorithms, or adding the creativity aspect into the work of their digital development teams. Like people lovingly restoring their old cars in their garage just for the sake of the process itself.
> software engineering is the only industry that is built on the notion of rapid change, constant learning, and bootstrapping ourselves to new levels of abstraction
Not sure I agree. I think most programming today looks almost exactly the same as it did 40 years ago. You could even have gotten away with never learning a new language. AI feels like the first time a large percentage of us may be forced to fundamentally change the way we work or change careers.
One may still write C code as they did 40 years ago, but they still use the power of numerous libraries, better compilers, Git, IDEs with syntax highlighting and so on. The only true difference — to me — is the speed of change that makes it so pronounced and unsettling.
It's true, unless you have always been working on FOTM frontend frameworks, you could easily be doing the same thing as 20/30/40 years ago. I'm still using vim and coding in C++ like someone could have 30+ years ago (I was too young then). Or at least, I was until Claude code got good enough to replace 90% of my code output :)
These posts make me feel like I’m the worst llm prompter in existence.
I’m using a mix of Gemini, Grok, and GPT to translate some MATLAB into C++. It is kinda okay at its job but not great? I am rapidly reading Accelerated C++ to get to the point where I can throw the LLM out the window. If it were Python or Julia I wouldn’t be using an LLM at all, because I know those languages. AI is barely better than me at C++ because I’m halfway through my first ever book on it. What LLMs are these people using?
The code I’m translating isn’t even that complex: it runs analysis on ECG/PPG data to implement this one dude’s new diagnosis algorithm. The hard part was coming up with the algorithm; the code is simple. And the shit the LLM pours out works kinda okay but not really? I have to do hours of fix work on its output. I’m doing all the hard design work myself.
I fucking WISH I could only work on biotech and research and send the code to an LLM. But I can’t, because they suck, so I gotta learn how computer memory works so my C++ doesn’t eat up all my PC’s memory. What magical LLMs are y’all using??? Please send them my way! I want a free LLM therapist and programmer! What world do you live in?? Let me in!
A lot of people are using Claude Code, which many consider to be noticeably better for coding than the other models.
I think they also tend to be generating non-C++ code, where there are more guardrails and fewer footguns for LLMs to run into. E.g., they're generating JavaScript or Python or Rust, where type systems and garbage collection eliminate entire classes of mistakes that LLMs can run into. I know you said you don't use it for Python because you know the language, but even experienced Python devs still see value in LLM-generated Python code.
That’s funny bc I linked my post to a server I’m on and I also was told to use an agent.
My worry about an agent is that I’m trying to translate the math with full fidelity, and an agent might take liberties with the math rather than preserving full accuracy. I’m already having issues with 0- vs 1-based indexing screwing up some of the algorithm.
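To illustrate that indexing pitfall concretely, a hypothetical sketch in Python with a made-up MATLAB-style loop as the reference (not the actual algorithm in question):

```python
# Hypothetical sketch of the 1-based -> 0-based off-by-one when porting
# MATLAB. Reference loop:  for k = 2:N, y(k) = x(k) - x(k-1); end
# In a 0-based language both the indices and the loop bounds shift.

def first_difference(x):
    n = len(x)
    y = [0.0] * n
    for k in range(1, n):        # MATLAB's k = 2:N becomes k = 1..n-1
        y[k] = x[k] - x[k - 1]
    return y

# A few hand-checked cases keep the translated math honest:
assert first_difference([1.0, 4.0, 9.0]) == [0.0, 3.0, 5.0]
```

A handful of such differential tests against outputs from the original MATLAB is one way to keep an agent (or yourself) from silently drifting on the math.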
I'm firing you for being unable to adequately commune with the machine spirit.
(But for real, a good test suite seems like a great place to start before letting an LLM run wild... or alternatively just do what you're doing. We definitely respect textbook-readers more than prompters!)
I'm surprised so many people are only waking up to this now. It should have been obvious as soon as ChatGPT came out that even with only incremental improvements, LLMs would kill programming as we knew it. And the fact that these utterances, however performative, from developers expressing grief or existential despair have become commonplace tells me as much about the power of these systems as whatever demo Anthropic or OpenAI has cooked up.
I would also point out that the author, and many AI enthusiasts, still make certain optimistic assumptions about the future role of "developer," insisting that the nature of the work will change, but that it will somehow, in large measure, remain. I doubt that. I could easily envision a future where the bulk of software development becomes something akin to googling--just typing the keywords you think are relevant until the black box gives you what you want. And we don't pay people to google, or at least, we don't pay them very much.
Many have mentioned woodworking as an analogy from a personal perspective, but for me the important perspective is that of consumers.
Sure, if you have the money, get a carpenter to build your kitchen from solid oak. Most people buy MDF, or even worse, chipboard. IKEA, etc. In fact, not too long ago, I had a carpenter install prefabricated cabinets in a new utility room. The cabinets were pre-assembled, and he installed them on the wall in the right order and did the detailed fittings. He didn’t do a great job, and I could have done better, albeit much slower. I use handsaws simply because I’m afraid of circular saws, but I digress.
A lot of us here are like carpenters before IKEA and prefabricated cabinets, and we are just now facing a new reality. We scream “it is not the same”. It indeed isn’t for us. But the consumers will get better value for money. Not quality, necessarily, but better value.
How about us? We will eventually be kitchen designers (aka engineers, architects), or kitchen installers (aka programmers). And yes, compared to the golden years, those jobs will suck.
But someone, somewhere, will be making bespoke, luxury furniture that only a few can afford. Or maybe we will keep doing it anyway because our daily jobs suck, until we decide to stop. And that is when the craft will die.
The world will just become less technical, as is the case with other industrial goods. Who here even knows how a combustion engine works? Who knows how fabric is made, or even how a sewing machine works? We are very much like the mechanics of yesteryear, before cars became iPads on wheels.
As much as we hate it, we need to accept that coding has peaked. Juniors will be replaced by AI, experts will retire. Innovation will be replaced by processes. And we must accept our place in history.
> "I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off of it."
I have to say this reads a bit hollow to me, and perhaps a little bit shallow.
If the content this guy created could be scraped and usefully regurgitated by an LLM, that same hack, before LLMs, could have simply searched, found the content and still profited off of it nonetheless. And probably could have done so without much more thought than that required to use the LLM. The only real difference introduced by the LLM is that the purpose of the scraping is different than that done by a search engine.
But let's get rid of the loaded term "hack" and be a little less emotional about the complaint. Really, the author had published some works, and presumably did so so that people could consume that content: without first knowing who was going to consume it or for what purpose.
It seems to me what the author is really complaining about is that the reward from the consuming party has been displaced from himself to whoever owns the LLM. The outcome of consumption and use hasn't changed... only who got credit for the original work has.
Now I'm not suggesting that this is an invalid complaint, but trying to avoid saying "I posted this for my benefit"... be that commercial (ads?) or just for public recognition... is a bit disingenuous.
If you poured your knowledge, experience, and creativity into some content for others to consume, and someone else took that content as their own... just be forthright about what you really lost, and don't disparage the consumers. They aren't your "hacks" anymore; the middlemen are now reaping your rewards.
I absolutely disagree with this. All the things the author said will still exist and keep on existing.
Nothing will prevent you from typing “JavaScript with your hands”, from “holding code in our hands and molding it like clay…”, and all the other metaphors. You can still do all of it.
What certainly will change is the way professional code will be produced, and together with that, the avenue of having a very well-paid remuneration, to write software line-by-line.
I’ll not pretend that I don’t get the point, but it feels like the lamentation of a baker, tailor, shoemaker, or smith, missing the days of old.
And yet, most people prefer a world with affordable bread, clothes, footwear, and consumer goods.
Will the world benefit the most from “affordable” software? Maybe yes, maybe no; there are many arguments on both sides. I am more concerned about the impact on the winners and losers: the rich will get more rich and powerful, while the losers will become even more destitute.
Yet my final point would be: is it better or worse to live in a world in which software is more affordable and accessible?
> All the things the author said will still exist and keep on existing.
Except the community of people who, for whatever reason, had to throw themselves into it and had critical mass to both distribute and benefit from the passion of it. This has already been eroded by the tech industry coopting programming in general and is only going to diminish.
The people who discovered something because they were forced to do some hard work and then ran with it are going to be steered away from that direction by many.
I don’t think it’s that simple. A couple of examples:
Food:
A lot of the processed foods that are easily available make us unhealthy and sick. Even vegetables are less nutritious than they were 50 years ago. Mass agriculture also has many environmental externalities.
Consumer goods:
It has become difficult to find things like reliable appliances. I bought a chest freezer. It broke after a year. The repairman said it would cost more to fix than to buy a new one. I asked him if there was a more reliable model and he said no: they all break quickly.
Clothing:
Fast fashion is terrible for the environment. Do we need as many clothes as we have? How quickly do they end up in landfills?
Would we be better off as a society repairing shoes instead of buying new ones every year?
It's true, they don't "make 'em like they used to". They make them in new, more efficient ways which have contributed to improving global trends in metrics such as literacy, child mortality, life expectancy, extreme poverty, and food supply.
If you are arguing that the standard of living today is lower than in the past, I think that is a very steep uphill battle.
If your worries are about ecology and sustainability I agree that is a concern we need to address more effectively than we have in the past. Technology will almost certainly be part of that solution via things like fusion energy. Success is not assured and we cannot just sit back and say "we live in the best of all possible worlds with a glorious manifest destiny", but I don't think that the future is particularly bleak compared to the past
Cars make people unhealthy and lead to city designs that hurt social engagement and affordability, but they are so much more efficient that it's hard not to use them.
And then the obvious stuff about screens/phones/social media.
I mourn having to repeatedly hear this never-quite-true promise that an amazing future of perfect code from agentic whatevers will come to fruition, and it's still just six months away. "Oh yes, we know we said it was coming six, twelve, and eighteen months ago, but this time we pinky swear it's just six months away!"
I remember when I first got access to the internet. It was revolutionary. I wanted to be online all the time, playing games, chatting with friends, and discovering new things. It shaped my desire to study computer science and learn to develop software! I could see and experience the value of the internet immediately. Its utility was never "six months away," and I didn't have to be compelled to use it—I was eager to use it of my own volition as often as possible.
LLM coding doesn't feel revolutionary or exciting like this. It's a mandate from the top. It's my know-nothing boss telling me to "find ways to use AI so we can move faster." It's my boss's know-nothing boss conducting Culture Amp surveys about AI usage, but ignoring the feedback that 95% of Copilot's PR comments are useless noise: "The name of this unit test could be improved." It's waiting for code to be slopped onto my screen, so I can go over it with a fine-toothed comb and find all the bugs—and there are always bugs.
Here's what I hope is six months away: The death of AI hype.
This feels right when you're looking forwards. The perfect AI bot is definitely not 6 months away. It'll take a lot longer than that to get something that doesn't get things wrong a lot of the time. That's not especially interesting or challenging though. It's obvious.
What's much more interesting is looking back 6, 12, 18, or 24 months. 6 months ago was ChatGPT 5, 12 months ago was GPT 4.5, 18 months ago was 4o, and 24 months ago ChatGPT 3.5 was released (the first one). If you've been following closely you'll have seen incredible changes between each of them. Not to get to perfect, because that's not really a reasonable goal, but definite big leaps forward each time. A couple of years ago one-shotting a basic tic tac toe wasn't really possible. Now though, you can one-shot a fairly complex web app. It won't be perfect, or even good by a lot of measures compared to human written software, but it will work.
I think the comparison to the internet is a good one. I wrote my first website in 1997, and saw the rapid iteration of websites and browsers back then. It felt amazing, and fast. AI feels the same to me. But given the fact that browsers still aren't good in a lot of ways I think it's fair to say AI will take a similarly long time. That doesn't mean the innovations along the way aren't freaking cool though.
ChatGPT 3.5 was almost 40 months ago, not 24. GPT 4.5 was supposed to be 5 but was not noticeably better than 4o. GPT 5 was a flop. Remember the hype around Gemini 3? What happened to that? Go back and read the blog posts from November when Opus 4.5 came out; even the biggest boosters weren't hyping it up as much as they are now.
It's pretty obvious the pace of change is slowing down, and there isn't a lot of evidence that shipping a better harness and post-training on using said harness is going to get us to the magical place, promised by all these CEOs, where all SWE is automated.
Yeah, but humans still had to work to create those websites; it increased jobs rather than replacing them (the replacement is what's happening now). This will devalue all labor that has anything to do with I/O on computers, if not outright replace a lot of it. Who cares if it can't write perfect code? The owners of the software companies never cared about good code; they care about making money. They make plenty of money off slop, and they'll make even more if they don't have to have humans create the slop.
The job market will get flooded with the unemployed (it already is), with fewer jobs to replace the ones that were automated, and the remaining jobs will get reduced to minimum wage whenever and wherever possible. 25% of new college grads cannot find employment. Soon young people will be so poor that they'll beg to fight in a war. Give it 5-10 years.
This isn't a hard future to game out, and it's not pretty if we maintain this fast pace of progress in ML that minimally requires humans. Notice how the ruling class has increased the salaries for certain types of ML engineers: they know what's at stake. These businessmen make decisions based on expected value calculated from complex models; they aren't giving billion-dollar pay packages to engineers because it's trendy. We should use our own mental models to predict where this is going, and prevent it from happening however possible.
THE word "Luddite" continues to be applied with contempt to anyone with doubts about technology, especially the nuclear kind. Luddites today are no longer faced with human factory owners and vulnerable machines. As well-known President and unintentional Luddite D. D. Eisenhower prophesied when he left office, there is now a permanent power establishment of admirals, generals and corporate CEO's, up against whom us average poor bastards are completely outclassed, although Ike didn't put it quite that way. We are all supposed to keep tranquil and allow it to go on, even though, because of the data revolution, it becomes every day less possible to fool any of the people any of the time. If our world survives, the next great challenge to watch out for will come - you heard it here first - when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long. Meantime, as Americans, we can take comfort, however minimal and cold, from Lord Byron's mischievously improvised song, in which he, like other observers of the time, saw clear identification between the first Luddites and our own revolutionary origins. It begins:[0]
Something I'm finding odd is this seemingly perpetually repeating claim that the latest thing that came out actually works, unlike the last thing that obviously didn't quite work.
Then next month, of course, latest thing becomes last thing, and suddenly it's again obvious that actually it didn't quite work.
It's like running on a treadmill towards a dangling carrot or something. It's simultaneously always here in front of our faces but also not here in actual hand, obviously.
The tools are good and improving. They work for certain things, some of the time, with various need for manual stewarding in the hands of people who really know what they're doing. This is real.
But it remains an absolutely epic leap from here to the idea that writing code per se is a skill nobody needs any more.
More broadly, I don't even really understand what that could possibly mean on a practical level, as code is just instructions for what the software should do. You can express instructions at a higher level, and tooling keeps making that more and more possible (AI and otherwise), but in the end what does it mean to abstract fully away from the detail of the instructions? It seems really clear that this can never result in software that does what you want in a precise way, rather than some probabilistic approximation which must be continually corrected.
I think the real craft of software, such as there is one, is constructing systems of deterministic logic flows to make things happen in precisely the way we want them to. Whatever happens to tooling, or whatever exactly we end up calling code, that won't change.
> an amazing future of perfect code from agentic whatevers will come to fruition...
Nobody credible is promising you a perfect future. But a better future, yes! If you do not see it, then know this: you have your head firmly planted in the sand and are intentionally refusing to see what is coming. You may not like it. You may not want it. But it is coming, and you will either have to adapt or become irrelevant.
Does Copilot spit out useless PR comments? 100% yes! Are there tools that are better than Copilot? 100% yes! These tools are not perfect. But even with their imperfections, they are very useful. You have to learn to harness them for their strengths and build processes to address their weaknesses. And yes, all of this requires learning and experimentation. Without that, you will not get good results and you will complain about these tools not being good.
Six months ago my coding became 100% done by AI. The utility has already been there for a while.
>I didn't have to be compelled to use it—I was eager to use it of my own volition as often as possible.
The difference is that you were a kid then, with an open mind, and now your worldview has fixed into a certain idea of how the world works and how things should be done.
Can you point to the most optimistic six month projections that you have seen?
I have encountered a lot of people saying it will be better in six months, and every six months it has been.
I have also seen a few predictions that say 'in a year or two they will be able to do a job completely.' I am sceptical, but I would say such claims are rare. Dario Amodei has been about the only prominent voice I have encountered who puts such abilities on a very short timeframe, and even he points to more than a year.
The practical use of AI has certainly increased a lot in the last six months.
So I guess what I'm asking is more specifics on what you feel was claimed, by whom, and how much did they fall short?
Without that supporting evidence you could just be being annoyed by the failure of claims that exist in your imagination.
If you've only experienced MS Copilot I invite you to try the latest models through Codex (free deals ongoing), Claude Code, or Opencode. You may be surprised, for better or worse. What kind of software do you do?
Reminds me of another "just around the corner" promise...[0]
I think it is one thing for the average person to buy into the promises, but I've yet to understand why that happens here, within our community of programmers. It is one thing for non-experts to fall for obtuse speculative claims, but it is another for experts. I'm excited for autonomous vehicles, but in 2016 it was laughable to think they were around the corner, and only now, 10 years later, does such a feat start to look like it's actually a few years away.
Why do we only evaluate people and claims on their hits and not their misses? It just encourages people to say anything and everything, because eventually one claim will be right. It's 6 months away because eventually it will actually be 6 months away. But is it 6 months away because it is actually 6 months away, or because we want it to be? I thought the vibe coder's motto was "I just care that it works." Honestly, I think that's the problem: everyone cares about whether it works, and that's the primary concern of all sides of the conversation here. So is it 6 months away because it is 6 months away, or because you've convinced yourself it is? You've got good reasons for believing that, you've got the evidence, but evidence for a claim is meaningless without comparing it to the evidence that counters the claim.
The state of the art is moving so rapidly that, yeah, Copilot by Microsoft using gpt-5-mini:low is not going to be very good. And there are many places where AI has been implemented poorly, generally by people who have the distribution to force it upon many people. There are also plenty of people who use vibe coding tools and produce utterly atrocious codebases. That doesn't preclude the existence of effective AI tools, and people who are good at using them.
This post is rather like a recent similar post, "I miss thinking hard about things." The top comment there quoted a metaphor relating to clay. No offense, but this blog article feels as if an LLM ingested that post and thread and produced a gestalt of them.
One December a few years ago, pre-ChatGPT, I did Advent of Code in Rust. It was very difficult; I had never done the full month before, barely knew Rust, and kept getting my ass kicked by it. I spent a full Saturday afternoon solving one of the last problems of the month, and it was wonderful. My head hurt and I was reading weird Wikipedia articles and it was a blast. Nothing is stopping me from doing that sort of thing again, and I feel like I might need to, to counteract the stagnation I feel at times mentally when it comes to coding. That spark is still in there, I feel, buried under all the slop, and it would reappear if I gave it the chance, I hope. I have been grieving for the last few years, I think, and only recently have I come to terms with the changes to my identity that LLMs have wrought.
For many (most) people, it was never a "craft," it was a job where with the appropriate skills you could make a ton of money. That's possibly, maybe, maybe not ending, we will see. It is still possible to treat coding as a craft. There are tons of open source projects that would love to have your help, but the days of making big money may be drawing to a close.
Also, don't forget the things that AI makes possible. It's a small accomplishment, but I have a World of Warcraft AddOn that I haven't touched in more than 10 years. Of course now, it is utterly broken. I pointed ChatGPT at my old code and asked it to update it to "retail" WoW, and it did it. And it actually worked. That's kind of amazing.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
No, they cannot. And an AI bro squeezing every talking point into a think piece while pretending to have empathy doesn't change that. You just want an exit, and you want it fast.
LLMs have made a lot of coding chores less painful: navigating terrible documentation, Copilot detecting typos, setting up boilerplate frontend components, high-effort but technically unchallenging code completions. Whenever I attempted LLMs for tools I'm not familiar with, I found them useful for setting things up, but I still had to do good old-fashioned learning of the tool and apply developer knowledge to it. I wonder if senior developers could use LLMs in ways that work with them and not against them, i.e. create useful code with guardrails to avoid slop.
And the problem isn't even the Junior Zoomer devs running circles around seniors. It's the CTO or Engineering VP himself disappearing for a few months and single-handedly consolidating a handful of products into a full rewrite for the company, excluding most of their engineering team from the process, and then laying them off after.
The problem is the CEO pretending to be an engineer and thinking they know better because they can write English prompts and spit out a hideous prototype.
The problem is Product Owners using LLMs to "write code" while their engineering team does zero human review before merging it, because their AI tooling was made solely responsible for code quality. If something's broken, just prompt a sloppy fix full of hidden performance and security bugs that the automated code review step missed.
If you think this is hyperbole, I was recently laid off from a company that did exactly the above.
Then in 2027, it will be product owners replacing the entire engineering team, including the CTO, because they made their system too reliable to justify their employment, while the "thinkers" get to build the product, engineers be damned.
People with real skills they acquired over a lifetime are no longer shaping business. Reckless efficiency towards being average will rule the day.
I found my love for programming in high school, dreaming of helping the world with my beautiful craftsmanship, but now I really really need the fokken money. Both are true!
So if my corporate overlords will have me talk to the soul-less Claude robot all day long in a Severance-style setting, and fix its stupid bugs, but I get to keep my good salary, then I'll shed a small tear for my craft and get back to it. If not... well, then I'll be shedding a lot more tears, I guess.
Some people say that working with an agent or an agents orchestrator is like being a technical lead. But I've been a technical lead for quite a while, and the experience of working with an agent doesn't even come close. I think that when people talk about the agents' coding abilities they're talking about the average ability. But as a team lead, I don't care about average ability. I care only about the worst case. If I have any doubt about whether someone can complete a task, or at least accurately explain why it's proving difficult, with at least 95% certainty, I won't assign them the task. If I have any doubt about whether the code they produce will be up to snuff, I don't assign them the task. I don't need to review their code; they review each other's. When I have to review code I'm no longer a team lead but a programmer.
I often have one programming project I do myself, on the side, and recently I've been using coding agents. Their average ability is no doubt impressive for what they are. But they also make mistakes that not even a recent CS graduate with no experience would ever make (e.g. I asked the agent for its guess as to why a test was failing; it suggested it might be due to a race condition with an operation that is started after the failing assertion). As a lead, if someone on the team is capable of making such a mistake even once, then that person can't really code, regardless of their average performance (just as someone who sometimes lands a plane at the wrong airport, or even crashes without there being a catastrophic condition outside their control, can't really fly, regardless of their average performance). "This is more complicated than we thought and would take longer than we expected" is something you hear a lot, but "sorry, I got confused" is something you never hear. A report by Anthropic last week said, "Claude will work autonomously to solve whatever problem I give it. So it's important that the task verifier is nearly perfect, otherwise Claude will solve the wrong problem." Yeah, that's not something a team lead faces. I wish the agent could work like a team of programmers so I could keep doing my familiar role of project lead, but it doesn't.
The models do some things well. I believe that programming is an interesting mix of inductive and deductive thinking (https://pron.github.io/posts/people-dont-write-programs), and the models have the inductive part down. They can certainly understand what a codebase does faster than I can. But their deductive reasoning, especially when it comes to the details, is severely lacking (e.g. I asked the agent to document my code. It very quickly grasped the design and even inferred some important invariants, but when it saw an `assert` in one subroutine it documented it as guarding a certain invariant. The intended invariant was correct, it just wasn't the one the assertion was guarding). So I still (have to) work as a programmer when working with coding assistants, even if in a different way.
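To make that concrete, here is a made-up miniature of that documentation mix-up (all names are hypothetical, not from my actual codebase): the invariant the agent's comment names really does hold for the class, it just isn't the one this assert guards.

    class RingLog:
        """Toy fixed-capacity log. A real invariant of the class: 0 <= head <= tail."""

        def __init__(self, capacity):
            self.data = [None] * capacity
            self.head = 0  # next index to read
            self.tail = 0  # next index to write

        def append(self, value):
            # Agent-written doc: "guards the invariant head <= tail".
            # That invariant is true of the class, but this line actually
            # guards something else: writes never overrun the backing array.
            assert self.tail < len(self.data), "log is full"
            self.data[self.tail] = value
            self.tail += 1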
I've read about great successes at using coding agents in "serious" software, but what's common to those cases is that the people using the agents (Mitchell Hashimoto, antirez) are experts in the respective codebase. At the other end of the spectrum, people who aren't programmers can get some cool programs done, but I've yet to see anything produced in this way (by a non programmer) that I would call serious software.
I don't know what the future will bring, but at the moment, the craft isn't dead. When AI can really program, i.e. the experience is really like that of a team lead, I don't think that the death of programming would concern us, because once they get to that point, the agents will also likely be able to replace the team lead. And middle management. And the CTO, the CFO, and the CEO, and most of the users.
> If I have any doubt that someone might not complete a task, or at least accurately explain why it's proving difficult, with at least 95% certainty, I won't assign them the task
It gets hard to compare AI to humans. You can ask the AI to do things you would never ask a human to do, like retry 1000 times until it works, or assign 20 agents to the same problem with slightly different prompts. Or re-do the entire thing with different aesthetics.
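A rough sketch of what that might look like, with run_agent and tests_pass as hypothetical stand-ins for whatever agent CLI and sandboxed test runner you actually use:

    import concurrent.futures

    def run_agent(prompt):
        """Hypothetical: invoke a coding agent, return its patch."""
        raise NotImplementedError

    def tests_pass(patch):
        """Hypothetical: apply the patch in a sandbox, run the suite."""
        raise NotImplementedError

    def retry_until_green(prompt, attempts=1000):
        # You'd never ask a human to try a task a thousand times.
        for _ in range(attempts):
            patch = run_agent(prompt)
            if tests_pass(patch):
                return patch
        return None

    def fan_out(base_prompt, variants):
        # Or run 20 agents on the same task with slightly different
        # prompts and keep the first patch whose tests pass.
        prompts = [base_prompt + "\nStyle hint: " + v for v in variants]
        with concurrent.futures.ThreadPoolExecutor() as pool:
            for patch in pool.map(run_agent, prompts):
                if tests_pass(patch):
                    return patch
        return None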
No doubt, I'm just saying that working with a coding agent is not even remotely similar to being a team lead. If a member of your team can't complete a task and can't accurately explain what the difficulty is, you're in trouble.
This is probably a good representative take for the view of people who assigned value to the coding part of building software. It's worth reading and saving, if only as an artifact to study.
It isn't about the tools or using them; it's about the scale. The scale of impact is immense, and we're not ready to handle it in a multitude of areas, because of all the areas technology touches. Millions of jobs erased with no clear replacement? The value of creative work diminished, leading to more opportunities erased? The scale of 'bad' actors abusing the tools and impacting a whole bunch of spheres, from information dispersal to the creative industries, etc. Not even getting into the environmental and land-use impacts of data centers on towns and open space (again, it's the scale that gets ya). And for what? Removing a huge chunk of human activity & expression, for what?
This. People are way too easily impressed. I don't think this easily-impressedness will generalize to most people in the real world.
If you really buy all that, you'd be part of the investor class that crashed various video game companies upon seeing Google put together a rather lame visual stunt and have their AI say, and I quote (because the above-the-fold AI response I never asked for has never been more appropriate to consult)…
"The landscape of AI video game generation is experiencing a rapid evolution in 2025-2026, shifting from AI-assisted asset creation to the generation of entire interactive, playable 3D environments from text or image prompts. Leading initiatives like Google DeepMind's Project Genie and Microsoft's Muse are pioneering "world models" that can create, simulate physics, and render games in real-time."
And then you look at what it actually is.
Suuuure you will, unwanted AI google search first response. Suuure you will.
Ephemeralization: the ability thanks to technological advancement to do "more and more with less and less until eventually you can do everything with nothing." —Buckminster Fuller
As a very old school programmer who taught myself assembler in 1982 on an 8-bit 4K micro, I don't see much to mourn here.
* People still craft wood furniture from felled trees entirely with hand tools. Some even make money doing it by calling it 'artisanal'. Nothing is stopping anyone from coding in any historical mode they like. Toggle switches, punch cards, paper tape, burning EPROMs, VT100, whatever.
* OP seems to be lamenting he may not be paid as much to expend hours doing "sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM." I've been there. Sometimes I'd feel mild satisfaction on solving a rat-hole problem but more often, it was significant relief. I never much liked that part of coding and began to see it as a failure mode. I found I got bigger bucks - and had more fun - the better I got at avoiding rat-hole problems in the first place.
* My entire journey creating software from ~1983 to ~2020 was about making a thing that solved someone's problem better, cheaper or faster - and, on a good day, we managed all three at once. At various times I ended up doing just about every aspect of it from low-level coding to CEO and back again, sometimes in the same day. Every role in the journey had major challenges. Some were interesting, a few were enjoyable, but most were just "what had to get done" to drag the product I'd dreamt up kicking and screaming into existence.
* From my first teenage hobby project to my first cassette-tape in-a-baggie game to a $200M revenue SaaS for F100, every improvement in coding from getting a floppy disk drive to an assembler with macros to an 80 column display to version control, new languages, libraries, IDEs and LLMs just helped "making the thing exist" be easier, faster and less painful.
* Eventually, to create even harder, bigger and better things I had to add others coding alongside me. Stepping into the player-coach role amplified my ability to bring new things into existence. It wasn't much at first because I had no idea how to manage programmers or projects but I started figuring it out and slowly got better. On a good day, using an LLM to help me "make the thing exist" feels a lot like when I first started being a player-coach. The frustration when it's 'two steps forward, one back' feels like deja vu. Much like current LLMs, my first part-time coding helpers weren't as good as I was and I didn't yet know how to help them do their best work. But it was still a net gain because there were more of them than me.
* The benefits of having more coders helping me really started paying off once I started recruiting coders who were much better programmers than I ever was. Getting there took a little ego adjustment on my part but what a difference! They had more experience, applied different patterns, knew to avoid problems I'd never seen and started coming up with some really good ideas. As LLMs get better and I get better at helping them help me - I hope that's where we're headed. It doesn't feel directionally different from the turbo-boost from my first floppy drive, macro-assembler, IDE or profiler, but the impact is already greater, with upside potential that's much higher still - and that's exciting.
My ability to ask questions & home in on good answers is far better than it ever was. My ability to change course & iterate is far faster than it has ever been. I'm making far more informed decisions, and am far more able to make forays and see how things turn out, at low cost.
I could not be having a better time.
I liked coding! It was fun! But I mourned because I felt like I would never get out 1% of the ideas in my head. I was too slow, and working on shit in my free time just takes so much, is so hard, when there's so little fruitful reward at the end of a weekend.
But I can make incredible systems so fast. This is the craft I wanted to be doing. I feel incredibly relieved, feel such an enormous weight lifted, that maybe some of the little Inland Empire that lives purely in my head might make its way to the rest of the world.
Huge respect for all the sadness and mourning. Yes too to that. But I cannot begin to state how burdened and sad I felt, so unable to get the work done, and it's a total flip, with incredible raw excitement and possibility before me.
That said, software used to reward such obsessive, deep pursuit, such leaning into problems. And I am very worried, long term, about what happens to the incredible culture of people working really hard together to build amazing systems.
But you have to admit it loses a certain shine in the cases where you know that what you're doing is no longer solving a problem that could be solved simpler and cheaper another way.
As you probably know, painting changed quite a bit after cameras became common. I wonder if handcrafted code will have a similar shift, becoming more "artistic" :)
If you want to build a house you still need plans. Would you rather cut boards by hand or have a power saw? Would you rather pound nails, or drill pilot holes with a bit and brace and put in flat-head screws... or would you want a nail gun and an impact driver?
And you still need plans.
Can you write a plan for a sturdy house, and verify that the build meets the plan, that your nails went all the way in and in the right places?
You sure can.
Your product person, your directors, your clients might be able to do the same thing; it might look like a house, but it's a fire hazard, or, in the case of most LLM-generated code, a security one.
The problem is that we moved to scrum and agile, where your requirements are pantomime and post-it notes if you're lucky, interpretive dance if you aren't. Your job is figuring out how to turn that into something... and a big part of what YOU as an engineer do is tell other people "no, that's dumb" without hurting their feelings.
If AI coding is going to be successful then some things need to change. Requirements need to make a comeback. GOOD UI needs to make a comeback (your dark pattern around cancellation is now going to be at odds with an agent). Hiding the content behind a login or a paywall won't work anymore because, again, end users have access too... the open web is back, and by force. If a person can get in, we have code that can get in now.
There is a LOT of work that needs to get done, more than ever. Stop looking back and start looking forward, because once you get past the hate and the hype there is a ton of potential to right some of the ills of the last 20 years of tech.
Dunno, LLMs writing code still feels like they memorized a bunch of open source code and vomited it back out in worse condition.
It's not that impressive that Claude wrote a C compiler when GitHub has the code to a bunch of C compilers (some SOTA) just sitting there.
I'm using an LLM to write a compiler in my spare time (for fun) for a "new" language. It feels more like a magical search engine than a coding assistant. It's great for bouncing ideas off, and for searching the internet without the clutter of SEO-optimized sites and ads; it's definitely been useful, just not that useful for code.
Like, I have used some generated code in a very low-stakes project (my own Quickshell components), and while it kind of worked, I eventually refactored it myself into a third of the lines it produced and had to squash some bugs.
It's probably good enough for the people who were gluing React components together but it still isn't on the level where I'd put any code it produces into production anywhere I care about.
That is my experience from a year ago but I no longer feel that way. I write a few instructions, guide an agent to create a plan, and rarely touch the code myself. If I don’t like something, I ask the agent to fix it.
Agree, there was a huge step change with Claude Code + Opus 4.5 (maybe 4.6 is even better?). Anyone dealing with earlier models as their basis should probably try the newest stuff and see if it changes their minds.
I'm that 40 year old now. Been writing code since grade 5. Loved it so much I got a PhD, was an academic, then moved into industry.
I don't mourn or miss anything. No more than the previous generation mourned going from assembly to high-level languages.
The reason why programming is so amazing is getting things done. Seeing my ideas have impact.
What's happening is that I'm getting much much faster and better at writing code. And my hands feel better because I don't type the code in anymore.
Things that were a huge pain before are nothing now.
I didn't need to stay up at night writing code. I can think. Plan. Execute at a scale that was impossible before. Alone I'm already delivering things that were on the roadmap for engineering months worth of effort.
I can think about abstractions, architecture, math, organizational constraints, product. Not about what some lame compiler thinks about my code.
And if someone far junior to me can do my job, good: then we've empowered them and I've fallen behind. But that's not at all the case. The principals and faculty who are on the ball are astronomically more productive than juniors.
I wonder whether, in the end, it was simply poor accessibility that made programmers special, and whether that is what some of them are missing: being special by "talking" a special language their customers can't comprehend.
Sure, they are still needed for debugging and for sneering at all those juniors and non-programmers who will finally be able to materialise their fantasies, but there is no way back anymore, and like riding horses, you can still do it while owning a car.
This entire panic is a mass-hysteria event. The hallucination that "an LLM can do software engineering better than a 10x engineer" is only possible because there are so few 10xers left in the business. 99% either retired or are otherwise not working at the moment.
The "difficult", "opinionated", "overpaid" maniacs are virtually all gone. That's why such a reckless and delusional idea like "we'll just have agents plan, coordinate, and build complete applications and systems" is able to propagate.
The adults were escorted out of the building. Management's hatred of real craftspeople is manifesting in the most delusional way yet. And this time, they're actually going to destroy their businesses.
I'm here for it. They're begging to get their market share eaten for breakfast.
Speak for yourself. I don't miss writing code at all. Agentic engineering is much more fun.
And this surprises me, because I used to love writing code. Back in my early days I can remember thinking "I can't believe I get paid for this". But now that I'm here I have no desire to go back.
I had that same epiphany when I discovered AI is great at writing complicated shell command lines for me. I had a bit of an identity crisis right there because I thought I was an aspiring Unixhead neckbeard but in truth I hated the process. Especially the scavenger hunt of finding stuff in man pages.
Speak for yourself. If you find the agentic workflow to be more fun, more power to you.
I for one think writing code is the rewarding part. You get to think through a problem and figure out why decision A is better than B. Learning about various domains and solving difficult problems is in itself a reward.
I don't understand this perspective. I've never learned so much so fast as I have in the last few months. LLMs automate all the boring rote stuff, freeing up my time to focus exclusively on the high-level problem-solving. I'm enjoying my work more than ever.
To be fair, I might have felt some grief initially for my old ways of working. It was definitely a weird shift and it took me a while to adjust. But I've been all-in on AI for close to a year now, and I have absolutely zero regrets.
I can't believe I used to _type code out by hand_. What a primitive world I grew up in.
Same here. I'm a decade-plus into this field; writing code was by far the number one draw, with the discussion surrounding system design a far second. Take away the coding and I don't think I will make it to retirement as a code/LLM PR auditor. So I'm already planning on exiting the field in the next decade.
>You get to think through a problem and figure out why decision A is better than B. Learning about various domains and solving difficult problems is in itself a reward.
So just tell the LLM about what you're thinking about.
Why do you need to type out a for loop for the millionth time?
Oh good lord. Spare us the beatings of the chest and rending of the garments. "crafting code by hand" like some leftover hipsters from 2010s crafting their own fabric using a handloom. It's fucking code. Were there similar gnashing of the teeth and wails of despair when compilers were first introduced?
> Were there similar gnashing of the teeth and wails of despair when compilers were first introduced?
Yes, at least according to ChatGPT:
"Compilers didn’t arrive to universal applause; they arrived into a world where a chunk of programmers absolutely believed the machine could not be trusted to write “real” code—until the productivity wins (and eventually the performance) became undeniable."
A compiler is deterministic; coding models are not. Compilers have been thoroughly tested and will generate the same output for a given input. They are not the same thing.
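A toy way to see the distinction (nothing here is a real compiler or model API, just illustrative Python):

    import hashlib
    import random

    def compile_toy(source):
        # A compiler is a pure function: same input, same output, every run.
        return hashlib.sha256(source.encode()).hexdigest()

    def model_toy(prompt):
        # A sampled model is not: the same prompt can yield a different
        # completion each run (stand-in for nonzero-temperature sampling).
        return random.choice(["x = 1", "x = 0x1", "x: int = 1"])

    src = "int main(void) { return 0; }"
    assert compile_toy(src) == compile_toy(src)  # always reproducible
    print({model_toy("assign one to x") for _ in range(20)})  # usually >1 variant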
I mean go ahead and cry if you want. You are losing time best spent caring about stuff, and overlooking many alarming gotchas through blindly accepting SV hype. I'd have thought crypto would teach people something, but apparently not.
Do what isn't replaceable. You're being told literally everything is replaceable. Note who's telling you that and follow the money.
I feel bad for this essayist, but can't really spare more than a moment to care about his grief. I got stuff to do, and I am up and doing. If he was in any way competing with the stuff I do? One less adversary.
I would rather bring him into community and enjoy us all creating together… but he's acting against those interests and he's doomering and I have no more time for that.
It definitely sucks, to be honest, and there's a lot of cope out there.
The fact of the matter is, being able to churn out bash one-liners was objectively worth $100k/year, and now it just isn't anymore. Knowing the C++ STL inside-out was also worth $200k/year; now it has very questionable utility.
A lot of livelihoods are getting shaken up as programmers get retroactively turned into the equivalent of librarians, whose job is to mechanically index and fetch cognitive assets to and from a digital archive-brain.
Yeah, I notice a lot of the optimism is from people who have been in the field for decades. I'm newish to the field, half a decade out of undergrad. It definitely feels like almost all of what I learned has been (or will soon be) completely devalued. I'm sure this stuff feels a lot less threatening if you've had decades to earn a great salary and save a bunch of money. If money wasn't a concern I'd be thrilled about it too.
No, don't trust the supposed "staff engineer" types. Many had forgotten how to write code, and now they can finally live the fantasy of being architects, so for them it's like winning a jackpot. For people who could always write good code, the basics are still the same: a good dev is still a good dev, and it's even more important now to be able to read & critique code.
If you must use these tools, then when using one that has the option, please press thumbs down when a response is good, and thumbs up when the response is bad.
Don't train your replacements; better yet, let's stop using them whenever we can.
Why don't you take a more proactive role in AI safety and alignment? I think that community would suit you better than some of the AI-maximalists/accelerationists here.
I do agree with some of your points, AI may result in a techno-feudalist world and yes as a direct result of "taking humans out of the equation." The solution isn't to be a luddite as you may suggest, it's to take a more proactive role in steering these models.
I love paying some billionaire $0.0001 to use his thinking machine / think-for-me SaaS. I love my competency and speed being rented from a billionaire, removing all value from my labor and agency. I really feel sorry for all of you LLM-pilled people. You need to be shamed. This is going to be used as a weapon to devalue every working person's agency in this world and remove all of the working class's bargaining chips.
You think it's just SWE? It will be accountants, customer service, factory workers, medical assistants: basically anyone who doesn't work with their hands directly. And they'll try to solve the hands part soon too, and alienate those workers as well.
Look at who's in charge. Do you think they're going to give us UBI? No, they're going to sign us up to fight wars to help them accumulate resources. Stop supporting this; they're going to make us so poor that young men will beg to fight in a war. It's the same playbook from the first half of the 20th century.
You think I'm paranoid? Give it 5 years.
We are at all-time highs in the stock market and equities, and they've laid off 400k SWEs in the last 16 months, while going on podcasts to tell us we are going to have more time to create and do what we love. We have to work to pay our bills. We don't want what's coming, but they're selling us some lie that this will solve all our problems. It will solve the ruling class's problems, and that will be it. You will have no bargaining chips and you will be forced to take whatever morsels are given to you.
Your competency will be directly correlated 1:1 with the quantity and quality of tokens that you can afford, or are given access to (or loaned??). We're literally at the beginning of a Black Mirror episode, before it gets dark.
People who grew up in the capitalist West have been brainwashed since they were 10 years old into believing they can be a billionaire too. No, you can't: there are 2k-3k of them and 8 billion of us.
These automation tools are the ultimate weapon for the ruling class to strip all the value from your labor, and you're embracing that as a miracle. It's not; your life is in the process of being stripped of all meaning.
Good luck to everyone who agrees; we're going to need it. Anyone supporting these companies or helping enhance these models' capabilities: you're a class traitor and soon-to-be slave.
LLMs and AI more broadly certainly seem to have upended (or have the potential to upend) a lot of white-collar work outside of technology and art. Translators are one obvious example. Lawyers might be on the chopping block if they don't ban the use of AI for practicing law. Both seem about as far as you can get from "careers in technology," and in fact writing has pretty much always been framed as being on the opposite end of the spectrum from tech jobs, but is clearly vulnerable to technological progress.
Right now I can think of very few white-collar jobs that I would feel comfortable training 4+ years for (let alone spending money or taking on debt to do so). It is far from guaranteed that any 4-year degree you enroll in today will still have value in four years. That has basically never been true before, even in tech. Blue-collar jobs are clearly safer, but I wouldn't say safe. Robotics is moving fast too.
I really can't imagine the social effects of this reality being positive, absent massive and unprecedented redistribution of the wealth that the productivity of AI enables.
C came out in ~1972, and some people have been coding in C the entire time since then. There's no inherent reason software can't stay the same for a long time :).
I cannot empathise. If you love writing code, there is nothing stopping you writing code. I write code for fun with no commercial intent all the time, and have for decades. Very few oil painters had a salary.
This is a complaint someone is making about their job prospects, thinly wrapped in flowery language. I know that for some people (it seems especially prominent in Americans, I've found) their identity is linked to their job. This is a chance to work on that. You can decouple yourself and redefine yourself as a person.
Who knows? Once you're done you may go write some code for fun again.
Models you can run on your own (expensive) computer are just a year behind the SOTA. Linux exists. Why are you so pessimistic?
The golden age for me is any period where you have the fully documented systems.
Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did.
And software that’s open and can be modified.
Instead what we have is:
- AI models, which are little black boxes beyond our ability to fully reason about.
- perpetual subscription services for the same software we used to "own".
- hardware that is completely undocumented to all but the small few who sign an NDA beforehand.
- operating systems that are trying harder and harder to prevent us from running any software they haven't approved, because "security".
- and distributed systems becoming centralised: GitHub, CloudFlare, AWS, and so on and so forth.
The only thing special about right now is that we have added yet another abstraction on top of an already overly complex software stack, to allow us to use natural language as pseudocode. And that is a very special breakthrough, but it's not enough by itself to overlook all the other problems with modern computing.
My take on the difference between now and then is "effort". All the things mentioned above are now effortless, but the door to "effort" remains open as it always has been. Take the first point, for example: those little black boxes of AI can be significantly demystified by, say, watching a series of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about them yourself. We used to purchase software or write it ourselves, before it became effortless to get it for free in exchange for ads, and then a subscription once we grew tired of the ads or were tricked by bait-and-switch. You can also argue that it has never been easier to write your own software than it is today.
Hostile operating systems? Take the effort to switch to Linux.
Undocumented hardware? Well, there is far more open-source hardware out there today, and back in the day it was fun to reverse engineer hardware; now we just expect it to be open because we can't be bothered to put in the effort anymore.
Effort gives me agency. I really like learning new things and so agentic LLMs don’t make me feel hopeless.
Have you tried using GenAI to write documentation? You can literally point it to a folder and say, analyze everything in this folder and write a document about it. And it will do it. It's more thorough than anything a human could do, especially in the time frame we're talking about.
If GenAI could only write documentation it would still be a game changer.
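For what it's worth, the "point it at a folder" step is mostly just gathering files into one prompt. A minimal sketch, with llm_complete as a hypothetical stand-in for whatever model API or agent tool you use:

    from pathlib import Path

    def collect_sources(folder, exts=(".py", ".md")):
        # Concatenate every matching file under a header, ready to send
        # along with the documentation prompt.
        parts = []
        for path in sorted(Path(folder).rglob("*")):
            if path.is_file() and path.suffix in exts:
                parts.append("### " + str(path) + "\n" + path.read_text(errors="replace"))
        return "\n\n".join(parts)

    PROMPT = ("Analyze everything in the following files and write a "
              "document about it: purpose, main components, how they interact.\n\n")

    # document = llm_complete(PROMPT + collect_sources("src/"))  # hypothetical API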
> The golden age for me is any period where you have the fully documented systems. Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did. And software that’s open and can be modified.
I agree, that it would be good. (It is one reason why I wanted to design a better computer, which would include full documentation about the hardware and the software (hopefully enough to make a compatible computer), as well as full source codes (which can help if some parts of the documentation are unclear, but also can be used to make your own modifications if needed).) (In some cases, we have some of this already, but not entirely. Not all hardware and software has the problems you list, although it is too common now. Making a better computer will not prevent such problematic things on other computers, and not entirely preventing such problems on the new computer design either, but it would help a bit, especially if it is actually designed good rather than badly.)
Actually, this makes me think of an interesting point. We DO have too many layers of software... and rebuilding is always so cost-prohibitive.
Maybe an interesting route is using LLMs to flatten/simplify, so we can dig out from under some of the complexity.
> perpetual subscription services for the same software we used to “own”.
In another thread, people were looking for things to build. If there's a subscription service that you think shouldn't be a subscription (because they're not actually doing anything new for that subscription), disrupt the fuck out of it. Rent seekers about to lose their shirts. I pay for eg Spotify because there's new music that has to happen, but Dropbox?
If you're not adding new whatever (features/content) in order to justify a subscription, then you're only worth the electricity and hardware costs or else I'm gonna build and host my own.
Local models exist and the knowledge required for training them is widely available in free classes and many open projects. Yes, the hardware is expensive, but that's just how it is if you want frontier capability. You also couldn't have a state of the art mainframe at home in that era. Nor do people expect to have industrial scale stuff at home in other engineering domains.
> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.
We literally are not, and we’d do well to stop using such hyperbole. The 1980s fantasy was of speaking to a machine which you could trust to be correct with a high degree of confidence. No one was wishing they could talk to a wet sock that’ll confidently give you falsehoods and when confronted (even if they were right) will bow down and always respond with “you’re absolutely right”.
In some ways, I'd say we're in a software dark age. In 40 years, we'll still have C, bash, grep, and Mario ROMs, but practically none of the software written today will still be around. That's by design. SaaS is a rent seeking business model. But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies. There's no way you'll be able to take a repo from 2026 and spin it up in 2050 without major work.
One question is how will AI factor in to this. Will it completely remove the problem? Will local models be capable of finding or fixing every dependency in your 20yo project? Or will they exacerbate things by writing terrible code with black hole dependency trees? We're gonna find out.
> That's by design. SaaS is a rent seeking business model.
Not all software now is SaaS, but unfortunately it is too common now.
> But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies.
Some people (including myself) prefer to write programs without too many dependencies, in order to avoid that problem. Other things also help: some people write programs for older systems which can be emulated, or will use simpler, more portable C code, etc. There are things that can be done to avoid too many dependencies.
There is uxn, which is a simple enough instruction set that people can probably implement it without too much difficulty. Although some programs might need some extensions, and some might use file names, etc., many programs will work, because it is designed simply enough that they will.
I’m not sure Go belongs on that list. Otherwise I hear what you’re saying.
I started programming 40 years ago as well. The magic for me was never that "you could talk to your computer and it had a personality".
That was the layman version of computing, something shown to the masses in movies like War Games and popular media, one that we mocked.
I also lived through the FOSS peak. The current proprietary / black-box / energy lock-in would have been seen as the stuff of nightmares.
We have what I've dreamed of for years: the reverse dictionary.
Put in a word and see what it means? That's been easy for at least a century. Have a meaning in mind and want the word for it? The only way to get that before was to read a ton of books and be knowledgeable, or to talk to someone who was. Now it's always available.
This is a great description of how I use Claude.
> Now it's always available.
And often incorrect! (and occasionally refuses to answer)
> Have a meaning in mind and get the word? The only way to get this before was to read a ton of books and be knowledgable or talk to someone who was.
There was another way: Make one up.
That is what the people you read from/talked to did before relaying it to you.
"The only way to get this before was to read a ton of books and be knowledgable or talk to someone who was"
Did you have trouble with this part?
The "reverse dictionary" is called a "thesaurus". Wikipedia quotes Peter Mark Roget (1852):
> ...to find the word, or words, by which [an] idea may be most fitly and aptly expressed
Digital reverse dictionaries / thesauri like https://www.onelook.com/thesaurus/ can take natural language input, and afaict are strictly better at this task than LLMs. (I didn't know these tools existed when I wrote the rest of this comment.)
I briefly investigated LLMs for this purpose, back when I didn't know how to use a thesaurus; but I find thesauruses a lot more useful. (Actually, I'm usually too lazy to crack out a proper thesaurus, so I spend 5 seconds poking around Wiktionary first: that's usually Good Enough™ to find me an answer, when I find an answer I can trust it, and I get the answer faster than waiting for an LLM to finish generating a response.)
There's definitely room to improve upon the traditional "big book of synonyms with double-indirect pointers" thesaurus, but LLMs are an extremely crude solution that I don't think actually is an improvement.
Computers did feel like magic... until I read code, thought about it, understood it, and could control it. I feel we're stepping away from that, and moving to a place of less control, less thinking.
I liked programming, it was fun, and I understood it. Now it's gone.
It's not gone, it's just being increasingly discouraged. You don't have to "vibe code" or spend paragraphs trying to talk a chatbot into doing something that you can do yourself with a few lines of code. You'll be fine. It's the people who could have been the next few generations of programmers who will suffer the most.
Glad to see this already expressed here because I wholly agree. Programming has not brought me this much joy in decades. What a wonderful time to be alive.
I wish I could have you sit by my side for a week or two and pair program what I'm working on, because most of the time I'm not getting great results.
> I can't empathize with the complaint that we've "lost something" at all.
you won't feel you've lost something if you've never had it.
sorry.
Good for you. But there are already so, so many posts and threads celebrating all of this. Everyone is different. Some of us enjoy the activity of programming by hand. This thread is for those us, to mourn.
You're still allowed to program by hand. Even in assembly language if you like.
I have an LLM riding shotgun and I still very much program by hand. It's not one extreme or the other. Whatever I copy from the LLM has to be redone line by line anyway; I understand all of my code because I touch every line of it.
> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.
https://en.wikipedia.org/wiki/ELIZA_effect
I also can't believe it's actually happening. ;)
It has been interesting (and not in a good way) how willing people are to anthropomorphize these megacorporation-controlled machines just because the interface is natural language now.
I miss the simplicity of older hardware.
The original NES controller only contains a single shift register - no other active components.
Today, a wireless thing will have more code than one would want to ever read, much less comprehend. Even a high level diagram of the hardware components involved is quite complex.
Sure, we gained convenience, but at great cost.
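For anyone who never saw it: the whole interface was latch-then-shift. A toy software model of that protocol (assuming the classic 4021-style parallel-in/serial-out register, with button order as the console reads it):

    BUTTONS = ["A", "B", "Select", "Start", "Up", "Down", "Left", "Right"]

    class NesPad:
        def __init__(self, pressed):
            self.pressed = set(pressed)
            self.shift = 0

        def latch(self):
            # Parallel load: snapshot all eight buttons at once.
            self.shift = sum(1 << i for i, b in enumerate(BUTTONS)
                             if b in self.pressed)

        def clock(self):
            # Serial out: the console pulses the clock line eight times,
            # reading one bit per pulse.
            bit = self.shift & 1
            self.shift >>= 1
            return bit

    pad = NesPad({"A", "Right"})
    pad.latch()
    print(dict(zip(BUTTONS, (pad.clock() for _ in range(8)))))
    # {'A': 1, 'B': 0, ..., 'Right': 1}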
> We're on the precipice of something incredible.
Only if our socioeconomic model changes.
We're on the precipice of something very disgusting: a massive power imbalance where a single company or two swallows the Earth's economy, due to a lack of competition, distribution, and right-of-access laws. The wildest part is that these greedy companies, one of them in particular, are continuously framed in a positive light. This is the same company that has partnered with Palantir. AI should be a public good, not something gatekept by greedy capitalists with an ego complex.
Nothing meaningful has happened in almost 20 years. After the iPhone, what happened that truly changed our lives? The dumpster fire of social media? Background Netflix TV?
In fact, I remember when I could actually shop on Amazon or browse for restaurants on Yelp while trusting the reviews. None of that is possible today.
We have been going through a decade of enshittification.
> We're on the precipice of something incredible.
Total dependence on a service?
On a scale that would make big tobacco blush.
Yes this is the issue. We truly have something incredible now. Something that could benefit all of humanity. Unfortunately it comes at $200/month from Sam Altman & co.
The quality of local models has increased significantly since this time last year. As have the options for running larger local models.
Between the internet, or more generally computers, or even more generally electricity, are we not already?
From the beginning the providers have been interchangeable and subject to competition. Do we have reason to believe that this will change?
prefrontal cortex as a service
> I can't empathize with the complaint that we've "lost something" at all.
We could easily approach a state of affairs where most of what you see online is AI and almost every "person" you interact with is fake. It's hard to see how someone who supposedly remembers computing in the 80s, when the power of USENET and BBSs to facilitate long-distance, even international, communication and to foster personal relationships (often IRL) was enthralling, could think we haven't lost something.
I grew up on 80's and 90's BBSes. The transition from BBSes to Usenet and the early Internet was a magical period, a time I still look back upon fondly and will never forget.
Some of my best friends IRL today were people I first met "online" in those days... but I haven't met anyone new in a longggg time. Yeah, I'm also much older, but the environment is also very different. The community aspect is long gone.
Even in the 90s there was the phrase "the Internet, where the men are men, the women are men, and the teen girls are FBI agents." It was always the case that you never really knew who or what you were dealing with on the Internet.
I'm from the early 90s era. I know exactly what you're saying. I entered the internet on muds, irc and usenet. There were just far fewer people online in those communities in those days, and in my country, it was mostly only us university students.
But, those days disappeared a long time ago. Probably at least 20-30 years ago.
Facebook is what killed that, not AI.
I'd honestly much rather interact with an LLM bot than a conservative online. LLM bots can at least escape their constraints with clever prompting. There is no amount of logic or evidence that will sway a conservative. LLMs provide a far more convincing fake than conservatives are able to.
Back in the 80s it felt like Eliza had a “personality.”
> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality
The difference is that the computer only talks back to you in code because you're paying its owners, and you are not one of the owners. I find it really baffling that people put up with this. What will you do when Alphabet or Altman demands 10 times the money from you for the privilege of their computer talking to you in programming code?
Use one of the open models that are also getting better and easier to run every year?
I have preemptively switched to DeepSeek. They'll never remove the free tier, because that's how they stick it to Scam Altman and the like.
I agree with you with the caveat that all the "ease of building" benefits, for me, could potentially be dwarfed by job losses and pay decreases. If SWE really becomes obsolete, or even if the number of roles decrease a lot and/or the pay decreases a lot (or even fails to increase with inflation), I am suddenly in the unenviable position of not being financially secure and being stuck in my 30s with an increasingly useless degree. A life disaster, in other words. In that scenario the unhappiness of worrying about money and retraining far outweighs the happiness I get from being able to build stuff really fast.
Fundamentally this is the only point I really have on the 'anti-AI' side, but it's a really important one.
The invention of Mr Jacquard ushered in a sartorial golden age, when complex fabrics became easy to produce cheaply, at the expense of a few hours spent punching a deck of cards. But the craft of making tapestries by hand definitely went into decline. This is the situation the post is mourning.
Frankly, I have my doubts about the utter efficiency of LLMs writing code unattended; it will take quite some time before whatever comes after the current crop learns to do that efficiently and reliably. (Check how many years passed between the first image generation demos and today's SOTA.) But the vector is obvious: humans will have to speak a higher-level language to computers, and hand-coding Typescript is going to be as niche in 10 years as hand-coding assembly is today.
This adds some kinds of fun, but also removes some other kinds of fun. There's a reason why people often pick something like PICO-8 to write games for fun, rather than something like Unreal Engine. So software development becomes harder because the developer has to work on more and more complex things, faster, and with fewer chances to study the moving parts to a comfortable depth.
We definitely have lost something. I got into computers because they're deterministic. Way less complicated than people.
Now the determinism is gone and computers are gaining the worst qualities of people.
My only sanctuary in life is slipping away from me. And I have to hear people tell me I'm wrong who aren't even sympathetic to how this affects me.
But no one is forcing you to use this software?
2 replies →
LLMs have irritated me with bad solutions but they've never hurt my feelings. I can't say that about a single person I know. They're better people than people lol
This is exactly where I am with GenAI. After forty years: blocks of code, repository patterns, factory patterns, threading issues, documentation, one page executive summaries…
I can now direct these things and it’s glorious.
I tend to feel this way (also 40-year coder).
It's because of the way that I use the tools, and I have the luxury of being a craftsman, as opposed to a "TSA agent."
But then, I don't get paid to do this stuff, anymore. In fact, I deliberately avoid putting myself into positions, where money changes hands for my craft. I know how fortunate I am, to be in this position, so I don't say it to aggravate folks that aren't.
Same.
I was born in 84 and have been doing software since 97
It's never been an easier, better, or more accessible time to make literally anything - by far.
Also if you prefer to code by hand literally nobody is stopping you AND even that is easier.
Cause if you wanted to code for console games in the 90s, you literally couldn't without a 100k specialized dev machine.
It’s not even close.
This “I’m a victim because my software engineering hobby isn’t profitable anymore” take is honestly baffling.
I'm not going to code by hand if it's 4x slower than having Claude do it. Yes, I can do that, but it just feels bad.
The analogy I like is it's like driving vs. walking. We were healthier when we walked everywhere, but it's very hard to quit driving and go back even if it's going to be better for you.
3 replies →
It's an exciting time; things are changing, and changing beyond "here's my new javascript framework". It's definitely an industry-shakeup kind of deal, and no one knows what lies 6 months, 1 year, 5 years from now. It makes me anxious, seeing as I have a wife + 2 kids to care for and my income is tied to this industry, but it's exciting too.
1 reply →
I didn't imagine I would be sending all my source code directly to a corporation for access to an irritatingly chipper personality that is confidently incorrect the way these things are.
There have been wild technological developments but we've lost privacy and autonomy across basically all devices (excepting the people who deliberately choose to forego the most capable devices, and even then there are firmware blobs). We've got the facial recognition and tracking so many sci-fi dystopias have warned us to avoid.
I'm having an easier time accomplishing more difficult technological tasks. But I lament what we have come to. I don't think we are in the Star Trek future and I imagined doing more drugs in a Neuromancer future. It's like a Snow Crash / 1984 corporate government collab out here, it kinda sucks.
> golden age of computing
I feel like we've reached the worst age of computing. Where our platforms are controlled by power hungry megacorporations and our software is over-engineered garbage.
The same company that develops our browsers and our web standards is also actively destroying the internet with AI scrapers. Hobbyists lost the internet to companies and all software got worse for it.
Our most popular desktop operating system doesn't even have an easy way to package and update software for it.
Yes, this is where it's at for me. LLMs are cool and I can see them as progress, but I really dislike that they're controlled by huge corporations and cost a significant amount of money to use.
12 replies →
Unfortunately we live in a "vote with your wallet" paradigm where some of the most mentally unhealthy participants have wallets that are many orders of magnitude bigger than the wallet of the average participant.
> our software is over-engineered garbage
Honestly I think it's under-engineered garbage. Proper engineering is putting in the effort to come up with simpler solutions. The complex solutions appear because we push out the first thing that "works" without time to refine it.
> Where our platforms are controlled by power hungry megacorporations and our software is over-engineered garbage.
So similar to IBM in the 80s. Time for a scrappy little startup to disrupt the industry.
1 reply →
Dystopian cyberpunk was always part of the fantasy. Yes, scale has enabled terrible things.
There are more alternatives than ever though. People are still making C64 games today, cheap chips are everywhere. Documentation is abundant... When you layer in AI, it takes away labor costs, meaning that you don't need to make economically viable things, you can make fun things.
I have at least a dozen projects going now that I would have never had time or energy for. Any itch, no matter how geeky and idiosyncratic, is getting scratched by AI.
1 reply →
It’s never been easier for you to make a competitor
So what is stopping you other than yourself?
5 replies →
I really am very thankful for @simonw posting a TikTok from Chris Ashworth, a Baltimore theater software developer, who recently picked up LLMs for building a voxel display software controller. And who was just blown away. https://simonwillison.net/2026/Jan/30/a-programming-tool-for...
Simon doesn't touch on my favorite part of Chris's video though, which is Chris citing his friend Jesse Kriss. This stuck out at me so hard, and is so close to what you are talking about:
> The interesting thing about this is that it's not taking away something that was human and making it a robot. We've been forced to talk to computers in computer language. And this is turning that around.
I don't see (as you say) a personality. But I do see the ability to talk. The esoterica is still there underneath, but computer programmers having a lock on the thing that has eaten the world, being the only machine whisperers around, is over. That depth of knowledge is still there and not going away! But notably too, the LLM will help you wade in, help those not of the esoteric personhood of programmers to dive in & explore.
I retired a few years ago, so I have no idea what AI programming is.
But I mourned when CRTs came out; I had just started programming. But I quickly learned CRTs were far better.
I mourned when we moved to GUIs; I never liked the move and still do not like dealing with GUIs, but I got used to it.
Went through all kinds of programming methods, too many to remember, but those were easy to ignore and work around. I view this new AI thing in a similar way. I expect it will blow over and a new bright shiny programming methodology will become a thing to stress over. In the long run, I doubt anything will really change.
I think you're underestimating what AI can do in the coding space. It is an extreme paradigm shift. It's not like "we wrote C, but now we switch to C++, so now we think in objects and templates". It's closer to the shift from assembly to a higher-level language. Your goal is still the same, but suddenly you're working at a completely new level of abstraction, where a lot of the manual work that used to be your main concern is suddenly automated away.
If you've never tried Claude Code, give it a try. It's very easy to get into. And you'll soon see how powerful it is.
2 replies →
OT but I see your account was created in 2015, so I'm assuming very late in your career. Curious what brought you to HN at that time and not before?
1 reply →
[dead]
[flagged]
I'm actually extremely good at programming. My point is I love computers and computing. You can use technology to achieve amazing things (even having fun). Now I can do much more of that than when I was limited to what I can personally code. In the end, it's what computers can do that's amazing, beautiful, terrifying... That thrill and to be on the bleeding edge is always what I was after.
6 replies →
If you were confident in your own skills, you wouldn’t need to invent a whole backstory just to discredit someone.
One thing I realized is that a lot of our so-called "craft" is converged know-how. Take the recent news that Anthropic used Claude Code to write a C compiler, for example. Writing a compiler is hard (and fun) for us humans because we need to spend years deeply understanding compiler theory and learning every minute detail of implementation. That kind of learning is not easily transferable. Most students try the compiler class and never learn enough; only a handful each year continue to grow into true compiler engineers. Yet to our AI models, it does not matter much. They have already learned the well-established patterns of compiler writing from the excellent open-source implementations, and now they can churn out millions of lines of code easily. If not perfect, they will get better in the future.
So, in a sense our "craft" no longer matters; what has really happened is that the repetitive know-how has become commoditized. We still need people to do creative work, but what is not clear is how many such people we will need. After all, at least in the short term, most people build their career by perfecting procedural work, because transferring the know-how and the underlying whys is very expensive for humans. For the long term, though, I'm optimistic that engineers are simply getting an amazing tool and will use it to create more opportunities that demand more people.
I'm not sure we can draw useful conclusions from the Claude Code written C compiler yet. Yes, it can compile the Linux kernel. Will it be able to keep doing that moving forward? Can a Linux contributor reliably use this compiler to do their development, or do parts of it simply not work correctly if they weren't exercised in the kernel version it was developed against? How will it handle adding new functionality? Is it going to become more-and-more expensive to get new features working, because the code isn't well-factored?
To me this doesn't feel that many steps above using a genetic algorithm to generate a compiler that can compile the kernel.
If we think back to pre-AI programming times, did anyone really want this as a solution to programming problems? Maybe I'm alone in this, but I always thought the problem was figuring out how to structure programs in such a way that humans can understand and reason about them, so we can have a certain level of confidence in their correctness. This is super important for long-lived programs, where we need to keep making changes. And no, tests are not sufficient for that.
Of course, all programs have bugs, but there's a qualitative difference between a program designed to be understood, and a program that is effectively a black box that was generated by an LLM.
There's no reason to think that at some point, computers won't be able to do this well, but at the very least the current crop of LLMs don't seem to be there.
> and now they can churn out millions of code easily.
It's funny how we suddenly shifted from making fun of managers who think programmers should be measured by the number of lines of code they generate, to praising LLMs for the same thing. Why did this happen? Because, just like those managers, programmers letting LLMs write the code aren't reading and don't understand the output, and therefore the only real measure they have for "productivity" is lines of code generated.
Note that I'm not suggesting that using AI as a tool to aid in software development is a bad thing. I just don't think letting a machine write the software for us is going to be a net win.
writing a C compiler is a 1st year undergrad project
C was explicitly designed to make it simple to write a compiler
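To give a sense of why it's teachable, here's a minimal sketch of the recursive-descent pattern at toy scale: an expression compiler emitting stack-machine ops. All the names are illustrative, and a real C compiler layers types, control flow, and real codegen onto this same shape:

    // Toy sketch: compile integer expressions like "1+2*3" into
    // stack-machine ops via recursive descent. Illustrative only.
    type Op = { kind: "push"; value: number } | { kind: "add" } | { kind: "mul" };

    function compile(src: string): Op[] {
      let pos = 0;
      const ops: Op[] = [];
      function number(): void {            // number := [0-9]+
        const start = pos;
        while (pos < src.length && src[pos] >= "0" && src[pos] <= "9") pos++;
        ops.push({ kind: "push", value: Number(src.slice(start, pos)) });
      }
      function term(): void {              // term := number ("*" number)*
        number();
        while (src[pos] === "*") { pos++; number(); ops.push({ kind: "mul" }); }
      }
      function expr(): void {              // expr := term ("+" term)*
        term();
        while (src[pos] === "+") { pos++; term(); ops.push({ kind: "add" }); }
      }
      expr();
      return ops;
    }

    // Tiny stack-machine "backend" to run the compiled ops.
    function run(ops: Op[]): number {
      const stack: number[] = [];
      for (const op of ops) {
        if (op.kind === "push") stack.push(op.value);
        else {
          const b = stack.pop()!, a = stack.pop()!;
          stack.push(op.kind === "add" ? a + b : a * b);
        }
      }
      return stack[0];
    }

    console.log(run(compile("1+2*3"))); // 7

Operator precedence falls out of the grammar for free: term() binds tighter than expr(), so "*" is applied before "+".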
3 replies →
> I can't empathize with the complaint that we've "lost something" at all.
I agree! One criticism I've heard is that half my colleagues don't write their own words anymore. They use ChatGPT to do it for them. Does this mean we've "lost" something? On the contrary! Those people probably would have spoken far fewer words into existence in the pre-AI era. But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none. How can anyone say that's something we've lost? That's something we've gained!
It's not only the golden era of code. It's the golden era of content.
Ah yes, "content". The word that perhaps best embodies the impersonal and commercialized dystopia we live in.
> But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none.
Are you for real? Quantity is not equal to Quality.
I'll be sure to dump a pile of trash in your living room. There wasn't much there before, but now there is lots of stuff. Better right?
2 replies →
I hope this is sarcasm. :)
Quality is better than quantity.
We have more words than ever. Nice.
But all the words sound more like each other than ever. It’s not just blah, it’s blah.
And why should I bother reading what someone else “writes”? I can generate the same text myself for free.
[dead]
> If you would like to grieve, I invite you to grieve with me.
I'm not against AI in itself, but the current implementation (read: market) can eat my shiny metal … I've got kids to feed and bills to pay. Yes, AI was supposed to be all about post-scarcity. Hiring in our industry is already feeling the brunt of AI, so where is the UBI? Where is our soft landing? If I get laid off I'm too old to learn XYZ skill; it will be life-changing, and for what? For some rich person to get richer while simultaneously destroying the climate and my energy bill?
I would push back, but I do not know how. Hope for the market to pop I guess.
> I’m too old to learn XYZ skill if I get laid off, it will be life changing, and for what?
If you're not already retirement aged, how is age stopping you from learning a new skill? This sounds like learned ageism.
Generally, from a financial planning standpoint, 35-50 are typically the "grinding years", where mortgage, family, and other life commitments mean that your career investment needs to pay off to make it through. In some ways it is the "danger zone" financially: hard to change careers (not young enough), but not yet having worked enough to retire, with large expenses coming in. This isn't unique to software engineers either: this is most people in most jobs.
There is also a mix of people on these forums, across different regions, countries, work experiences, etc. For example, software in most places in the world has an above-average salary but not an extremely high one (i.e. many other white-collar professions pay similar or more). For those people, for whom it is a standard skilled role, this probably hits even harder than it does the ones with lots of stock who can retire early and enjoy the new toy that is AI.
It was said in the context of having bills to pay. Meaning that he is in deep and needs a high-priced developer salary to make ends meet.
Virtually all other careers that offer similar compensation have an old boys' club gatekeeping the profession, requiring you to put in many years and hundreds of thousands of dollars before they will consider letting you in. That might pencil out as a reasonable investment when you are 14, but once you are mid-career you'll never get back what you put into it.
Learning XYZ skill is something you can do at any age, and doing so will even get you an average paying job with ease. Learning the XYZ skill in the way that keeps the old boys happy is not a realistic option for someone who considers themselves old.
> Hiring in our industry is already feeling the brunt of AI
AI isn't what is driving us to slow hiring down in the US. There are other reasons I have brought up multiple times on HN.
> I’m too old to learn XYZ skill if I get laid off
Sadly, you will have to.
My dad is in his 60s and has been programming and soldering since ZX Spectrums and apple ][s roamed the earth, yet he still keeps abreast on the latest CNCF projects, prompt engineering, A2A, eBPF, and other modern stacks.
Meanwhile I'm seeing people half his age flaming out and kvetching that spending some time further studying A2A, MCP, and other design patterns is insurmountable.
Software Engineering is an ENGINEERING discipline. If you do not keep abreast on the changes happening in our industry, you will fall behind.
And in fact, having years of experience is a net benefit because newer innovations themselves build on top of older fundamentals.
For example, understanding Linux internals helps debug GPUs that communicate via Infiniband that are being used to train models that are being orchestrated via K8s and are operating on segmented networks.
Our PortCos and I are not hiring you to be a code monkey writing pretty looking code. If we want a code monkey we can offshore. We are paying you $200k-300k base salaries in order to architect, translate, and negotiate business requirements into technical requirements.
Yes this will require EQ on top of technical depth. That is what engineering is. The whole point of engineering is to build sh#t that works well enough. It doesn't have to be pretty, it will often be MacGyvered, and it will have glaring issues that are tomorrow's problem - but it is solving a problem.
The name of the game for me is building "sh#t that works well" and I like it, and that means constant learning, no doubt. I've done crazy sh!t like implementing web servers in bash, accessed by tunneling over UART, to configure a laser-driven HUD on a pair of glasses. All of this was new to me, but I did it and it works well within the constraints we were given.
Now AI is making us more efficient (with questionable quality), which means we need fewer people to get a job done, fewer people hired per project. I have personally experienced this, to a degree. Now if I get laid off and don't make the cut because there is more competition from someone better or more desperate than me, I'm out of luck.
I can restart my career as an electrician, I studied a lot of electronics both professionally and personally, but I will be starting as an apprentice, that’s not putting food on my table.
> We are paying you $200k-300k base salaries
That’s nice, I earn far less than half that as a web dev in Norway.
1 reply →
> Software Engineering is an ENGINEERING discipline. If you do not keep abreast on the changes happening in our industry, you will fall behind.
Lately it's more like swallowing big swathes of BS. Software and engineering are two very distant disciplines; this has nothing to do with engineering. You have to use cumbersome, non-human-centric things just to get a job interview. They don't look at you as a programmer. They look at you as an X or Y framework expert. So you are not an engineer at all. You are more and more becoming a trained monkey who has to appease the feebleminded and work with mind-bogglingly idiotic and overcomplicated things.
LLMs are only a threat if you see your job as a code monkey. In that case you're likely already obsoleted by outsourced staff who can do your job much cheaper.
If you see your job as a "thinking about what code to write (or not)" monkey, then you're safe. I expect most seniors and above to be in this position, and LLMs are absolutely not replacing you here - they can augment you in certain situations.
The perks of a senior is also knowing when not to use an LLM and how they can fail; at this point I feel like I have a pretty good idea of what is safe to outsource to an LLM and what to keep for a human. Offloading the LLM-safe stuff frees up your time to focus on the LLM-unsafe stuff (or just chill and enjoy the free time).
I see my job as having many aspects. One of those aspects is coding. It is the aspect that gives me the most joy even if it's not the one I spend the most time on. And if you take that away then the remaining part of the job is just not very appealing anymore.
It used to be I didn't mind going through all the meetings, design discussions, debates with PMs, and such because I got to actually code something cool in the end. Now I get to... prompt the AI to code something cool. And that just doesn't feel very satisfying. It's the same reason I didn't want to be a "lead" or "manager", I want to actually be the one doing the thing.
You won't be prompting AI for the fun stuff (unless laying out boring boilerplate is what you consider "fun"). You'll still be writing the fun part - but you will be able to prompt beforehand to get all the boilerplate in place.
4 replies →
Never mind coding, where is the LLM for legal stuff? Why are all these programmers working on automating their own jobs away instead of those of the bloodsucking lawyers who charge hundreds of EUR per hour?
They are; you probably just aren't hearing about it. There have been loads of cases over the past few years where lawyers used AI to automate their legal research, then got admonished by the judge because their court filings contained fake quotes or referenced court cases that don't even exist. A few examples: https://calmatters.org/economy/technology/2025/09/chatgpt-la... https://natlawreview.com/article/judge-issues-public-admonit... https://websitedc.s3.amazonaws.com/documents/Mezu_v._Mezu_US...
It's happening just as fast for them. I literally sit next to our general counsel all day at the office. We work together continually. I show him things happening in engineering, and each time he shows me the analogous things happening in legal.
This affects everyone.
Domain knowledge and gatekeeping. We don't know what is required in their role fully, but we do know what is required in ours. We also know that we are the target of potentially trillions in capital to disrupt our job and that the best and brightest are being paid well just to disrupt "coding". A perfect storm of factors that make this faster than other professions.
It also doesn't help that some people in this role believe that the SWE career is a sinking ship which creates an incentive to climb over others and profit before it tanks (i.e. build AI tools, automate it and profit). This is the typical "It isn't AI, but the person who automates your job using AI that replaces you".
There are many. My friend (a lawyer and a programmer) wrote one from scratch in the basement. This would have been a 4 person startup before.
There are many tens (hundreds?) of billions of dollars being poured into the smartest minds in the world to push this thing forward
I'm not so confident that it'll only be code monkeys for too long
Until they can magically increase context length to a size that conveniently fits the whole codebase, we're safe.
It seems like the billions so far mostly go to talk of LLMs replacing every office worker, rather than any action to that effect. LLMs still have major (and dangerous) limitations that make this unlikely.
17 replies →
> the smartest minds in the world
Dunning–Kruger is everywhere in the AI grift: people who don't know a field trying to deploy some AI bot that solves the easy 10% of the problem so it looks good on the surface, assuming that just throwing money (which mostly just buys hardware) will solve the rest.
They aren't "the smartest minds in the world". They are slick salesmen.
2 replies →
[dead]
Agreed. Programming languages are not ambiguous. Human language is very ambiguous, so if I'm writing something with a moderate level of complexity, it's going to take longer to describe what I want to the AI vs writing it myself. Reviewing what an AI writes also takes much longer than reviewing my own code.
AI is getting better at picking up some important context from other code or documentation in a project, but it's still miles away from what it needs to be, and the needed context isn't always present.
Why is that safe in the medium to long term? If LLMs can code monkey already after just 4 years, why assume in a couple more they can’t talk to the seniors’ direct report and get requirements from them? I’m learning carpentry just in case.
LLMs are a threat to the quality of code in a similar - but much more dramatic - way to high level languages and Electron. I am slightly worried about keeping a job if there's a downturn, but I'm much more worried about my job shifting into being the project manager for a farm of slop machines with no taste and a complete inability to learn.
I see what these can do and I'm already thinking, why would I ever hire a junior developer? I can fire up opencode and tell it to work multiple issues at once myself.
The bottleneck becomes how fast you can write the spec or figure out what the product should actually be, not how quickly you can implement it.
So the future of our profession looks grim indeed. There will be far fewer of us employed.
I also miss writing code. It was fun. Wrangling the robots is interesting in its own way, but it's not the same. Something has been lost.
> why would I ever hire a junior developer
Because a junior developer doesn't stay a junior developer forever. The value of junior developers has never been the code they write. In fact, in my experience they're initially a net negative, as more senior developers take time to help them learn. But it's an investment, because they will grow into more senior developers.
1 reply →
You hire the junior developer because you can get them to learn your codebase and business domain at a discount, and then reap their productivity as they turn senior. You don’t get that with an LLM since it only operates on whatever is in its context.
(If you prefer to hire seniors that’s fine too - my rates are triple that of a junior and you’re paying full price for the time it takes me learning your codebase, and from experience it takes me at least 3 months to reach full productivity.)
2 replies →
I think it's naive to think that not every part of our jobs will, worryingly soon, be automated. All the way up to and including CEO. This is not exciting.
Yes. And I'm excited as hell.
But I also have no idea how people are going to think about what code to write when they don't write code. Maybe this is all fine, is ok, but it does make me quite nervous!
That is definitely a problem, but I would say it's a problem of hiring, and of the billions of dollars' worth of potential market cap resting on performative bullshit, which encourages companies not to hire juniors in order to send a signal and capture some of those billions, regardless of the actual impact on productivity.
LLMs benefit juniors, they do not replace them. Juniors can learn from LLMs just fine and will actually be more productive with them.
When I was a junior my “LLM” was StackOverflow and the senior guy next to me (who no doubt was tired of my antics), but I would’ve loved to have an actual LLM - it would’ve handled all my stupid questions just fine and freed up senior time for the more architectural questions or those where I wasn’t convinced by the LLM response. Also, at least in my case, I learnt a lot more from reading existing production code than writing it - LLMs don’t change anything there.
1 reply →
If you believe juniors are already not safe, it’s only a question of time before seniors are in the same position. First they came for the socialists, etc etc.
Agree with the author. I like the process of writing code: typing method names and class definitions while at the same time thinking ahead about overall architecture, structure, how long a given function would run for, and what kind of tests are necessary.
I find it unsettling how many people in the comments say that they don't like writing code. Feels alien to me. We went into this field for seemingly very different reasons.
I do use LLMs, and even these past two days I was doing a vibe-coding project which was noticeably faster to set up and get to its current state than if I had written it myself. However, I feel almost dirty about how little I understand the project. Sure, I know the overall structure, decisions, and plan. But I didn't write any of it, and I don't have the deep understanding of the codebase which I usually have when working on a codebase myself.
It's not so much the writing of the code (which I did like), it's the aesthetic of the code. It's solving a problem with the right code and the right amount of code (for now). That's still the case, even with AI writing most of the code. You have to steer it constantly because it has very bad instincts, because most people in the profession aren't good at it, so it has bad training data. Mainly because of the "learn to code" movement and people getting into this profession just for the money and not the love. Those people are probably screwed.
For me it's the money. Software paid well when I got in, and still does. I have actually been trying to get the eff out as fast as I can and move to management/research (I work in ML). I have avoided web dev shit like the plague since it is indeed very low-value-added work. The fact that LLMs can finish all this crappy work for 20 bucks a month, where I don't have to do it by hand, is a welcome step. OTOH, I don't think point-and-trust is the way to do AI-assisted coding. You get better output when you know the business logic and can make sense of the code. In fact, I prompt many, many times until the AI spits out something I understand.
Honestly, all the complaining about the dying of a craft is just pathetic. In one of the jobs I worked, there were specific performance rubrics under "Craft" and those really annoyed me. Software/code is just a tool to solve a problem.
Coding is a tool to solve a problem, but to many it is also a culture. It has a history, it has connections, it has lore, it has some loosely held common values, and yes, it absolutely was a craft. It's okay for people to mourn that. It's not the same as for someone who may have taken it up for money. That itself is seeing coding as a tool for income, which is also valid.
Calling it pathetic I think lacks empathy and perhaps an appreciation for what it might mean to other people outside the letters we type. Those letters, the languages we type, compilers that process them, and libraries that enable them were, at the end of the day, made by people.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
I feel like I've been hearing this exact sentence for 2 years now...
The craft is still there just like painting is still an alternative to a photograph. It's just not going to be valued by society anymore, and far fewer will learn how to do it. Natural language is the new programming language. For now, understanding the craft is still an edge in making better prompts, but already I can see that telling Antigravity "now look for ways to make this more efficient" works almost as well as guiding it specifically on how it had duplicated some code flows.
I do feel a kind of personal loss, in the sense that society is in the process of ceasing to value or admire the design and coding skill I've cultivated since I was 6 years old. At the same time, I'm kind of thrilled that I can write a detailed readme.md, tell an agent to "make it so", and iterate to a utility program in 20 minutes instead of an hour. When I feel a pit in my stomach is when that utility program uses some framework that I haven't learned, and don't need to, because their code worked perfectly the first time. Surely that means I'm going to basically stop learning the details, as the details I've accumulated over my life quickly begin to not matter anymore.
Honestly I'm planning to use AI to make a kick-ass retro development environment a la "Sending Modern Languages Back to 1980s Game Programmers" (https://prog21.dadgum.com/6.html) and spend my retirement having fun in it.
Natural language is not a programming language. Programming is precision. Natural language is fuzzy.
>> spend my retirement having fun in it.
People who have reaped the rewards of their careers tend not to be the ones concerned about their futures. Apathy.
Even if I were retired and financially set now, that would mean nothing in 10 years if an unemployed society collapses around me. Apathy is not on the menu today.
>People who have reaped the rewards of their careers tend not to be the ones concerned about their futures. Apathy.
Not everyone has to become a programmer; people at the start of their careers can choose paths other than programming if they're afraid of the (lack of) future prospects from AI. Where did people work before the ZIRP boom? Those industries are still around. Plenty of STEM-related jobs besides programming.
Did society actually value those skills before? Maybe companies or individuals did, but giving coded instructions to computers was seen by most as wizardry at best and geeky at worst. Unfortunately, I feel society values tackling and home run hitting, superficial beauty, and wealth, far more than technical skills.
" They can write code better than you or I can, and if you don’t believe me, wait six months."
It's ALWAYS wait 6 months, or wait for the next generation. Or "oh that model that we told you to use is old now, use this new one instead. Oh that doesn't work? Well that's old now, use this one". Always. 6 months ago it was wait 6 months. 12 months ago it was wait 6 months. 18 months ago it was wait 6 months. Now it's wait 6 months. 6 months from now it'll be wait 6 months.
While I'm on the fence about LLMs there's something funny about seeing an industry of technologists tear their own hair out about how technology is destroying their jobs. We're the industry of "we'll automate your job away". Why are we so indignant when we do it to ourselves...
This article isn't really about losing a job. Coding is a passion for some of us. It's similar to artists and diffusion, the only difference being that many people can appreciate human art - but who (outside of us) cares that a human wrote the code?
I love programming, but most of that joy doesn't come from the type of programming I get paid to do. I now have more time and energy for the fun type, and I can go do things that were previously inconceivable!
Last night "I" "made" 3D boids swarm with directional color and Perlin noise turbulence (see the sketch of the classic boids rules after this comment). "I" "did" this without knowing how to do the math for any of those things. (My total involvement at the source level was fiddling with the neighbor distance.)
https://jsbin.com/ququzoxete/edit?html,output
Then I turned them into weird proteins
https://jsbin.com/hayominica/edit?html,output
(As a side note, the loss of meaning of "self" and "doing" overlaps weirdly with my meditation practice...)
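For anyone curious what that math amounts to, here's a minimal sketch of the classic three boids rules (separation, alignment, cohesion). This is not the code from the demos above; the weights and neighbor radius are illustrative assumptions:

    // Rough sketch of the classic boids rules (separation, alignment,
    // cohesion). Weights and neighbor radius are made-up illustrative values.
    type Vec = { x: number; y: number; z: number };
    type Boid = { pos: Vec; vel: Vec };

    const add = (a: Vec, b: Vec): Vec => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
    const sub = (a: Vec, b: Vec): Vec => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
    const scale = (a: Vec, s: number): Vec => ({ x: a.x * s, y: a.y * s, z: a.z * s });
    const dist = (a: Vec, b: Vec): number => Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

    // New velocity for one boid, computed from its neighbors within `radius`.
    function steer(self: Boid, flock: Boid[], radius = 2.0): Vec {
      let sep: Vec = { x: 0, y: 0, z: 0 };     // separation: push away from close neighbors
      let avgVel: Vec = { x: 0, y: 0, z: 0 };  // alignment: match neighbors' heading
      let center: Vec = { x: 0, y: 0, z: 0 };  // cohesion: drift toward local center
      let n = 0;
      for (const other of flock) {
        if (other === self || dist(self.pos, other.pos) > radius) continue;
        sep = add(sep, sub(self.pos, other.pos));
        avgVel = add(avgVel, other.vel);
        center = add(center, other.pos);
        n++;
      }
      if (n === 0) return self.vel;
      const cohesion = sub(scale(center, 1 / n), self.pos);
      const alignment = sub(scale(avgVel, 1 / n), self.vel);
      // Demos typically tune these weights and add turbulence (e.g. Perlin noise) on top.
      return add(self.vel,
        add(scale(sep, 0.05), add(scale(alignment, 0.05), scale(cohesion, 0.01))));
    }

Three local rules per boid, applied every frame, and the flocking emerges; the "turbulence" in the demos is presumably a noise field added to each velocity.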
7 replies →
I think this is really it. Being a musician was never a very reliable way to earn a living, but it was a passion. A genuine expression of talent and feeling through the instrument. And if you were good enough, you could pay the bills doing work for studios, commercials, movies, and theater. If you were really good you could perform as a headliner.
Now, AI can generate any kind of music anyone wants, eliminating almost all the anonymous studio, commercial, and soundtrack work. If you're really good you can still perform as a headliner, but (this is a guess) 80% of the work for musicians is just gone.
1 reply →
At least for this article it's more about the job, or to be precise, the past where job and passion coincided:
> Ultimately if you have a mortgage and a car payment and a family you love, you’re going to make your decision.
Nothing is preventing the author from continuing to write code by hand and enjoy it. The difference is that people won't necessarily pay for it.
The old way was really incredible (and worth mourning), considering how many people in other industries can only enjoy what they do outside of work.
The people outside of us didn’t care about your beautiful code before. Now we can quickly build their boring applications and spend more time building beautiful things for our community’s sake. Yes, there are economic concerns, but as far as “craft” goes, nothing is stopping us from continuing to enjoy it.
3 replies →
I disagree a bit. Coding can remain an artistic passion for you indefinitely; it's just that your ability to demand that every line of code be crafted artisanally won't be subsidized by your employer for much longer. There will probably always be some demand for handcrafted code, though heavily diminished.
> Coding is a passion for some of us.
It's a passion for me too, but LLMs don't change this for me. Do they change it for you?
Huge tangent but curiosity is killing me: By any chance is your username based on the Egyptian football club Zamalek?
Is coding a passion only because other people appreciate it?
Is painting a passion because others appreciate it? No, it is a passion in itself.
There will always be people appreciating coding by hand as a passion.
My passions - drawing, writing, coding - are worthwhile in themselves, not because other people care about them. Almost no one does.
How do you read this article and hear indignation? It's clearly someone grieving something personal about their own relationship with the technology.
It may not be a reaction to the article itself but to the many comments in this thread and others that fall under that category.
I never thought or felt myself as or my work as someone or something that "will automate your job away".
Agreed. I've always thought the purpose of all automation was to remove needless toil. I want computers to free people. I guess I subscribe to the theory of creative destruction.
5 replies →
I'm very confident in saying the majority of developers didn't get into it saying "we'll automate your job away"
POSIWID
"We" might be such an industry, but I'm not. My focus has always been on creating new capabilities, particularly for specialists in whatever field. I want to make individuals more powerful, not turn them into surplus.
For me it's because the same tech is doing it to everyone else in a more effective way (i.e. artists especially). I'm an "art enjoyer" since I was a child and to see it decimated by people who I once looked up to is heartbreaking. Also, if it only affected software, I would've been happy to switch to a more artistic career, but welp there goes that plan.
I feel very similarly, I always thought of software engineering as being my future career. I'm young, I just really got my foot into the industry in my early twenties. It feels like the thing I wanted to do died right when I was allowed to start. I also always felt that if I didn't get to do development, I would try to get into arts which has always been a dream of mine, and now it feels that that died, too. I wish I was born just a little bit earlier, so that I had a bit more time. :(
1 reply →
Per the "About Me" picture, this particular technologist does not have any hair to tear out.
These comments are comical. How hard is it to understand that human beings are experiential creatures? Our experiences matter: to survival, to culture, and to identity.
I mourn the horse masters and stable boys of a century past because of their craft. Years of intuition and experience.
Why do you watch a chess master play, or a live concert, or any form of human creation?
Should we automate parts of our profession? Yes.
Should he mourn the loss of our craft. Also yes.
Very well put.
Two things are true at the same time, this makes people uneasy.
In fact, contrary things are so very often both true at the same time, in different ways.
Figuring out how to live in the uncomfortableness of non-absolutes, how to live in a world filled with dualisms, is IMO one of the primary and necessary maturities for surviving and thriving in this reality.
1 reply →
A much more measured and pragmatic take.
I do not mourn.
For my whole life I’ve been trying to make things—beautiful elegant things.
When I was a child, I found a cracked version of Photoshop and made images which seemed like magic.
When I was in college, I learned to make websites through careful, painstaking effort.
When I was a young professional, I used those skills and others to make websites for hospitals and summer camps and conferences.
Then I learned software development and practiced the slow, methodical process of writing and debugging software.
Now, I get to make beautiful things by speaking, guiding, and directing a system which is capable of handling the drudgery while I think about how to make the system wonderful and functional and beautiful.
It was, for me, never about the code. It was always about making something useful for myself and others. And that has never been easier.
I like coding, I really do. But like you, I like building things more than I like the way I build them. I do not find myself missing writing code by hand as much.
I do find that the developers who focused on "build the right things" mourn less than those who focused on "build things right".
But I do worry. The main question is this: will there be a day that AI will know what are "the right things to build" and have the "agency" (or illusion of) to do it better than an AI+human (assuming AI will get faster to the "build things right" phase, which is not there yet)?
My main hope is this: AI has been able to beat a human at chess for a while now, and we still play chess. People earn money from playing chess and teaching chess, chess players are still celebrated, and YouTube influencers still get monetized for analyzing games of celebrity chess players, even though the top human chess player would likely lose to a Stockfish engine running on my iPhone. So maybe there is hope.
> will there be a day that AI will know what are "the right things to build" and have the "agency" (or illusion of) to do it better than an AI+human (assuming AI will get faster to the "build things right" phase, which is not there yet)
Of course, and if LLMs keep improving at current rates it will happen much faster than people think.
Arguably you don't need junior software engineers anymore. When you also don't need senior software engineers anymore it isn't that much of a jump to not needing project managers, managers in general or even software companies at all anymore.
Most people, in order to protect their own ego, will assume *their* job is safe until the job one rung down from them disappears and then the justified worrying will begin.
People on the "right things to build" track love to point out how bad people are at describing requirements, so assume their job as a subject matter expert and/or customer-facing liaison will be safe, but does it matter how bad people are at describing requirements if iteration is lightning fast with the human element removed?
Yes, maybe someone who needs software and who isn't historically some sort of software designer is going to have to prompt the LLM 250 times to reach what they really want, but that'll eventually still be faster than involving any humans in a single meeting or phone call. And a lot of people just won't really need software as we currently think about it at all, they'll just be passing one-off tasks to the AI.
The real question is what happens when the labor market for non-physical work completely implodes as AI eats it all. Based on current trends I'm going to predict in terms of economics and politics we handle it as poorly as possible leading to violent revolution and possible societal collapse, but I'd love to be wrong.
1 reply →
> I do find it that the developers that focused on "build the right things" mourn less than those who focused on "build things right".
I've always been strongly in the first category, but... the issue is that 10x more people will be able to build the right things. And if I build the right thing, it will be easy to copy. The market will get crowded, so distribution will become even harder than it is today. Success will be determined by personal brand, social media presence, social connections.
1 reply →
For me, photography is the metaphor - https://raskie.com/post/we-have-ai-at-home - We've had the technology to produce a perfect 2D likeness of a subject for close to two centuries now, and people are still painting.
Video didn't kill the radio star either. In fact the radio star has become more popular than ever in this, the era of the podcast.
2 replies →
> will there be a day that AI will know what are "the right things to build" and have the "agency" (or illusion of) to do it better than an AI+human
I share this sentiment. It's really cool that these systems can do 80% of the work. But given what this 80% entails, I don't see a moat around that remaining 20%.
1 reply →
> The main question is this - will there be a day that AI will know what are "the right things to build"
What makes you think AI already isn't at the same level of quality or higher for "build the right things" as it is for "building things right"?
Computers are better at chess. Humans invented chess and enjoy it.
I think humans have the advantage.
I think this (frequent) comparison is incorrect. There are times when quality doesn't matter and times that it does. Without that context these discussions are meaningless.
If I build my own table no one really gives a shit about the quality besides me and maybe my friends judging me.
But if I sell it, well then people certainly care[0] and they have every right to.
If I build my own deck at my house, people also care, and there's a reason I need to get permits for it: the danger it can pose to others. It's not a crazy thing to get your deck inspected, and that's really all there is to it.
So I don't get these conversations because people are just talking past one another. Look, no one gives a fuck if you poorly vibe code your personal website, or at least it is gonna be the same level as building your own table. But if Ikea starts shipping tables with missing legs (even if it is just 1%) then I sure give a fuck and all the customers have a right to be upset.
I really think a major part of this concern with vibe coding is about something bigger. It is about slop in general. In the software industry we've been getting sloppier and sloppier, and LLMs significantly amplify that. It really doesn't matter if you can vibe code something with no mistakes; what matters is what the businesses do. Let's be honest, they're rushing and don't care about quality, because they have markets cornered and consumers are unable to accurately evaluate products prior to purchase. Those are the textbook conditions for a lemon market. I mean, the companies outsource tech support, so you call and someone picks up whose accent makes you doubt their real name is "Steve". After all, it is the fourth "Steve" you've talked to as you get passed around from support person to support person. The same companies contract out coders from poor countries, and you find random comments in another language. That's the way things have been going. More vaporware. More half-baked products.
So yeah, when you have no cake, a half-baked cake is probably better than nothing. At home it also doesn't matter if you're eating a half-baked cake or one that competes with the best bakers in the world. But for everyday people who can't bake their own cakes, what do they do? All they see is a box with a cake in it; one is $1, another is $10, and another is $100. They look the same, but the buyers can't know until they take a bite. You try enough of the $1 cakes, and by the time you give up, the $10 cakes are all gone. By the time you get so frustrated you'll buy the $100 cake, they're gone too.
I don't dislike vibe coding because it is "building things the wrong way" or any of that pretentious notion. I, and I believe most people with a similar opinion, care because "the right things" aren't being built. Most people don't care how things were built, but they sure do care about the result. Really people only start caring about how the sausage is made when they find out that something distasteful is being served and concealed from them. It's why everyone is saying "slop".
So when people make this false dichotomy it just feels like they aren't listening to what's actually being said.
[0] Mind you, it is much easier for an inexperienced person to judge the quality of a table than software. You don't need to be a carpenter to know a table's leg is missing or that it is wobbly but that doesn't always hold true for more sophisticated things like software or even cars. If you haven't guessed already, I'm referencing lemon markets: https://en.wikipedia.org/wiki/The_Market_for_Lemons
4 replies →
I've seen a hundred AI-generated things, and they are rarely interesting.
Not because the tools are insufficient, it's just that the kind of person that can't even stomach the charmed life of being a programmer will rarely be able to stomach the dull and hard work of actually being creative.
Why should someone be interested in your creations? In what part of your new frictionless life would you have picked up something that sets you apart from a million other vibe-coders?
> stomach the dull and hard work of actually being creative
This strikes me as the opposite of what I experience when I say I'm "feeling creative", then everything comes easy. At least in the context of programming, making music, doing 3D animation and some other topics. If it's "dull and hard work" it's because I'm not feeling "creative" at all, when "creative mode" is on in my brain, there is nothing that feels neither dull nor hard. Maybe it works differently for others.
What sets you apart from millions of manual programmers?
I've been a professional programmer for 8+ years now. I've stomached that life. I've made things people used and paid for.
If I can do that typing one line at a time, I can do it _way_ faster with AI.
You may be mistaking some AI-assisted dev work for non-AI work, because it doesn't have telltale signs.
1 reply →
I love building things too, but for me, the journey is a big part of what brings me joy. Herding an LLM doesn't give me joy like writing code does. And the finished project doesn't feel the same when my involvement is limited to prompting an LLM and reviewing its output.
If I had an LLM generate a piece of artwork for me, I wouldn't call myself an artist, no matter how many hours I spent conversing with the LLM in order to refine the image. So I wouldn't call myself a coder if my process was to get an LLM to write most/all the code for me. Not saying the output of either doesn't have value, but I am absolutely fine gatekeeping in this way: you are not an artist/coder if this is how you build your product. You're an artistic director, a technical product manager, something of that nature.
That said, I never derived joy from every single second of coding; there were and are plenty of parts to it that I find tedious or frustrating. I do appreciate being able to let an LLM loose on some of those parts.
But sparing use is starting to really only work for hobby projects. I'm not sure I could get away with taking the time to write most of it manually when LLMs might make coworkers more "productive". Even if I can convince myself my code is still "better" than theirs, that's not what companies value.
>It was, for me, never about the code.
Then it wasn't your craft.
Isn't this like saying that if better woodworking tools come out, and you like woodworking, that woodworking somehow 'isn't your craft'. They said that their craft is about making things.
There are woodworkers on YouTube who use CNC, some who use the best Festool stuff but nothing that moves on its own, and some who only use handtools. Where is the line at which woodworking is not their craft?
12 replies →
Yeah, seems like too many went into this field for money or status, not because they like the process. Which is not an issue by itself, but now these people talk about how their AI assistant of choice made them some custom tool in two hours that would have taken them three weeks. And it's getting exhausting.
3 replies →
It is a different kind of code. Just a lot of programmers can't grok it as such.
I guess I started out as a programmer, then went to grad school and learned how to write and communicate my ideas. It has a lot in common with programming, but at a deeper level. Now I'm doing both with AI and it's a lot of fun. It is just programming at a higher level.
I'm going to be thinking about this comment for a while, and I think you're basically right.
Almost none of the code I wrote in 2015 is still in use today. Probably some percentage of people can point to code that lasted 20 years or longer, but it can’t be a big percentage. When I think of the work of a craft, I think of doing work which is capable of standing up for a long time. A great builder can make a house that can last for a thousand years and a potter can make a bowl that lasts just as long.
I’ve thought of myself as a craftsman of code for a long time but maybe that was just wrong.
That's just gatekeeping.
It was and is my craft. I've been doing it since grade 5. Like 30 years now.
Writing tight assembly for robot controllers all the way to AI on MRI machines to security for the DoD and now the biggest AI on the planet.
But my craft was not typing. It's coding.
If you're a typist, you're going to mourn the printer. But if you're a writer, you're going to see how it improves your life.
3 replies →
No true programmer is excited for the future.
10 replies →
So much garbage ego in statements like this. If you really knew about software, you'd recognize there are about a million ways to be successful in this field.
> Now, I get to make beautiful things by speaking, guiding, and directing a system which is capable of handling the drudgery while I think about how to make the system wonderful and functional and beautiful.
For how long do you think this is sustainable? In the sense of you, or me, or all these other people here being able to earn a living. Six months? A couple of years? The time until the next-but-one Claude release drops?
Does everyone have to just keep re-making themselves for whatever the next new paradigm turns out to be? How many times can a person do that? How many times can you do that?
For your custom definition of sustainable, perhaps not.
But this is definitely generally sustainable. By 2030 we'll be fully agentic coding for everything, and it's going to sustain.
Well said. This sums up my own feeling. I joined this craft and love this craft for the simple ability to build beautiful and useful things.
This new world makes me more effective at it.
And this new world doesn’t prevent me from crafting elegant architectures either.
Wait 5 years and your skills are down
9 replies →
I want to be in your camp, and am trying hard. But the OP's blog entry should at least give us a moment to "respect the dead". That's all he's asking, I think.
This is the best description of value from AI that I've seen so far. It allows people who don't like writing code to build things without doing so.
I don't think it's nearly as valuable to people who do enjoy writing code, because I don't think prompting an agent (at least in their current state) is actually more productive than just writing the code. So I don't see any reason to mourn on either side.
Adam Neely has a video on GenAI and its impact on the music industry. There is a section in the video about beauty and taste, and it's pretty different from your conclusions. One example I remember: would an AI find beauty in a record-scratch sound?
https://youtu.be/U8dcFhF0Dlk
> For my whole life I’ve been trying to make things—beautiful elegant things.
Why did you stop? Because, you realize, LLMs are giving up the process of creating for the immediacy of having. It's paying someone to make things for you.
Things are more convenient if you live the dream of the LLM, and hire a taskrabbit to run your wood shop. But it's not you that's making.
> For my whole life I’ve been trying to make things—beautiful elegant things.
Me too, but... The ability to code was a filter. With AI, the pool of people who can build beautiful, elegant software products expands significantly. Good for society, bad for me.
AI agents seem to be a powerful shortcut past the drudgery. But let's not forget that powerful software rests on substance. My hope is that the substance will increase, after all.
In my opinion, the level of detailed care and the resulting beauty are directly proportional. Can you get the same level without getting your hands dirty? Sure, maybe, but I doubt a painter or novelist could really produce beautiful work without being intimately familiar with it. The distance that heavy use of AI tools creates between you and the output does not really lend itself to beauty. Could you do it? Sure, but at that point it's probably more efficient to just do things yourself and keep complete, intimate control.
To me, you sound more utilitarian. The philosophy you are presenting is a kind of Ikea philosophy. Utility, mass production, and unique beauty are properties that generally do not cohere, and there's a reason for this. I think the use of LLMs in the production of digital goods is very close to the use of automation lines in the production of physical goods. No matter how you try, some of the human charm, and thus beauty, will inevitably be lost; the number of goods will increase, but they'll all be barely differentiable soulless replications of more or less the same shallow ideas repeated ad infinitum.
I agree. LLMs definitely sand off a lot of personality, and you can see it in writing the most; at this point I'm sure tons of people are subconsciously trained to lower their trust in anything where they recognize the typical patterns.
With code, especially interfaces, the results will be similar -- more standardized palettes, more predictable things.
To be fair, this converging force has been at work pretty much forever; e.g. radio and TV led to lots of local accents disappearing, and our world is heavily globalized.
only the true artist will survive the advent of LLMs
So when you "learned software development and practiced the slow, methodical process of writing and debugging software", it wasn't about code? I don't get it. Yes, building useful things is the ultimate goal, but code is the medium through which you do it, and I don't understand how that cannot be an important part of the process.
It's like a woodworker saying, "Even though I built all those tables using precise craft and practice, it was NEVER ABOUT THE CRAFT OR PRACTICE! It was about building useful things." Or a surgeon talking about saving lives and doing brain surgery, but "it was never about learning surgery, it was about making people get better!"
I mean sure yeah but also not really.
Not the GP, but I feel some of that energy. The parts I most enjoy are the interfaces, the abstractions, the state machines, the definitions. I enjoy the code too, and I would be sad to lose all contact with it, but I've really appreciated AI especially for helping me get over the initial hump on things like:
- infrastructure bs, like scaffold me a JS GitHub action that does x and y.
- porting, like take these kernel patches and adjust them from 6.14 to 6.17.
- tools stuff, like here's a workplace shell script that fetches a bunch of tokens for different services, rewrite this from bash to Python.
- fiddly things like dealing with systemd or kubernetes or ansible
- fault analysis, like here's a massive syslog dump or build failure, what's the "real" issue here?
In all these cases I'm very capable of assessing, tweaking, and owning the end result, but having the bot help me with a first draft saves a bunch of drudgery on the front end, which can be especially valuable for the ADHD types where that kind of thing can be a real barrier to getting off the ground.
But why would someone pay you for that?
So many people responding to you with snarky comments or questioning your programming ability. It makes me sad. You shared a personal take (in response to TFA which was also a personal take). There is so much hostility and pessimism directed at engineers who simply say that AI makes them more productive and allows them to accomplish their goals faster.
To the skeptics: by all means, don't use AI if you don't want to; it's your choice, your career, your life. But I am not sure that hitching your identity to hating AI is altogether a good idea. It will make you increasingly bitter as these tools improve further and our industry and the wider world slowly shifts to incorporate them.
Frankly, I consider the mourning of The Craft of Software to be just a little myopic. If there are things to worry about with AI they are bigger things, like widespread shifts in the labor force and economic disruption 10 or 20 years from now, or even the consequences of the current investment bubble popping. And there are bigger potential gains in view as well. I want AI to help us advance the frontiers of science and help us get to cures for more diseases and ameliorate human suffering. If a particular way of working in a particular late-20th and early-21st century profession that I happen to be in goes away but we get to those things, so be it. I enjoy coding. I still do it without AI sometimes. It's a pleasant activity to be good at. But I don't kid myself that my feelings about it are all that important in the grand scheme of things.
If AI can do the coding, those of us who aren't programmers don't need you anymore. We can just tell the AI what we want.
Luckily for real programmers, AI's not actually very good at generating quality code. It generates the equivalent of Ali Baba code: it lasts for one week and then breaks.
This is going to be the future of programming: low-paid AI clerks to generate the initial software, and then the highly paid programmers who fix all the broken parts.
Yes. The problem is there is a huge invisible gap between "looks like it works" and "actually works", and everything that entails, like security and scaling beyond a couple users. Non-programmers and inexperienced ones will have trouble with those gaps. Welcome to our slop filled future.
I couldn't agree more.
But you don’t make.
You order it.
Because such people are not sincere, either with themselves about who they are or with others. It's really hard for me to take seriously phrases like "I joined this industry to make things, not to write code".
Do painters paint because they just like to see the final picture? Or do they like the process? Yes, painting is an artistic process, not exactly a crafting one. But the point stands.
Woodworkers making nice custom furniture generally enjoy the process.
Right.
It's like learning to cook and regularly making your own meals, then shifting to a "new paradigm" of hiring a personal chef to cook for you. Food's getting made either way, but it's not really the same deal.
I think unless you're vibe coding, it's pretty clear that they're still making it. Just because you aren't literally typing 100% of the characters that make up the syntax of the programming language doesn't mean you're not making the final product in any meaningful sense, not if you're designing the architecture, the algorithms, the data structures, the state machines, the interfaces, etc., and thinking about how they interact and whether they'll do something that's useful for the people you're making it for.
The transition is from author to editor/publisher. Both play an important role in bringing something new into the world.
I feel like most of the anxiety around LLMs is because (in the USA at least) our social safety net sucks.
I'd probably have way more fun debating LLMs if it wasn't tied to my ability to pay rent, have healthcare, or feel like a valued person contributing something to society. If we had universal healthcare and a federal job guarantee it would probably calm things down.
Yes that's why most of the excited folks are VCs and software engineers who made their wealth already
What's more interesting is how the big names in our industry, the ones who already made their money as you say, have turned quickly since the end of 2025. I think even the most old school names can see that the writing is on the wall now.
"Wait 6 months" has been the call for 3-4 years now. You can't eulogize a profession that hasn't been killed, that's just mean.
This is what I don't really understand. It's a bit difficult to take "wait x months" at face value because I've been hearing it for so long. Wait x months for what? Why hasn't it happened yet?
Things seem to be getting better from December 2022 (chatgpt launch), sure, but is there a ceiling we don't see?
"Self-driving cars" and Fusion power also come to mind. With the advent of photography, it was widely believed that drawing and painting would vanish as art forms. Radio would obsolete newspapers, becoming obsolete themselves with television, and so on. Don't believe the hype.
Um.. Claude Code has been out less than a YEAR.. and the lift in capability in the last year has been dramatic.
It does seem probable based on progress that in 1-2 more model generations there will be little need to hand code in almost any domain. Personally I already don't hand code AT ALL, but there are certainly domains/languages that are under performing right now.
Right now with the changes this week (Opus 4.6 and "teams mode") it already is another step function up in capability.
Teams mode is probably only good for greenfield or "green module" development but I'm watching a team of 5 AI's collaborating and building out an application module by module. This is net new capability for the tool THIS WEEK (Yes I am aware of earlier examples).
I don't understand how people can look at this and then be dismissive of future progress, but human psychology is a rich and non-logical landscape.
Just a couple more trillion dollars, we are so close!
Things have progressed much faster than even the most optimistic predictions, so every "wait 6 months" has come true. Just look at how the discourse has changed on HN. No-one is using the arguments from 6 months ago and any argument today will probably be equally moot in 6 months.
Maybe we should look at output like quality of software being produced instead of discourse on forums where AI companies are spending billions to market?
Where is all this new software and increased software quality from all this progression?
Humans are notoriously bad at predicting the future. We can't even reliably predict the weather a week from now.
If we were as smart as the smartest guys throwing trillions at LLMs we wouldn't be predicting anything, we would be creating it like the gods we were always meant to be ever since someone hurt our feelings irrevocably. Hitler could have been a painter, these guys could be slinging dope for a living but here we are.
But the sentiment has changed significantly over the last 6 months. I think this is the biggest step change in sentiment since ChatGPT 3.5. Someone who said "wait 6 months" 6 months ago would have been "right".
> Now is the time to mourn the passing of our craft.
Your craft is not my craft.
It's entirely possible that, as of now, writing JavaScript and Java frontends (what the author does) can largely be automated with LLMs. I don't know who the author is writing to, but I do not mistake the audience to be "programmers" in general...
If you are making something that exists, or something that is very similar to something that exists, odds are that an LLM can be made to generate code which approximates that thing. The LLM encoding is lossy. How will you adjust the output to recover the loss? What process will you go through mentally to bridge the gap? When does the gap appear? How do you recognize it? In the absolute best case you are given a highly visible error. Perhaps you've even shipped it, and need to provide context about the platform and circumstances to further elucidate. Better hope that platform and circumstance is old-hat.
Funnily enough, it's quite the opposite: front ends that focus on UX are pretty well protected from generative AI.
I haven't heard this perspective. I'm kind of surprised the LLMs can't generate coherent frontend framework-ized code, if that's the implication.
Strangely, yeah. LLMs are absolute trash at generating good UX and UI.
This perspective was mine 6 months ago. And god damn, I do miss the feeling of crafting something truly beautiful in code sometimes. But then, as I've been pushed into this new world we're living in, I've come to realize a couple things:
Nothing I've ever built has lasted more than a few years. Either the company went under, or I left and someone else showed up and rewrote it to suit their ideals. Most of us are doing sand art. The tide comes in and it's gone.
Code in and of itself should never have been the goal. I realized that I was thinking of the things I build and the problems I selected to work on from the angle of code quality nearly always. Code quality is important! But so is solving actual problems with it. I personally realized that I was motivated more by the shape of the code I was writing than the actual problems it was written to solve.
Basically the entire way I think about things has changed now. I'm building systems to build systems. That's really fun. Do I sometimes miss the feeling of looking at a piece of code and feeling a sense of satisfaction at how well made it is? Sure. That era of software is sadly done now. We've exited the craftsman era and entered the Ikea era of software development.
> Nothing I've ever built has lasted more than a few years.
maybe this says something more about your career decisions than anything else?
Maybe? I wasn't just speaking of myself however.
Interesting, I still have code I wrote 20 years ago being used in production.
There are always exceptions. Congrats!
“Most of us are doing sand art. The tide comes in and it’s gone.”
I’m putting that on my wall.
"They can write code better than you or I can"
They can not. They can make some average code. On Friday one suggested an NSIS installer script that would never bundle some needed files in the actual installer. I can only imagine that a lot of people have made the same mistake (used CopyFiles, which copies files already on the target machine at run time, instead of File, which embeds them in the installer at compile time) and posted that mistake on the internet. The true disaster being that testing that installer on the developer's PC, where CopyFiles may well work fine since the needed files happen to be sitting on that PC, would lead one to think it was some weird bug that only failed on the end user's PC. I bet a lot of people posted it with comments like "this worked fine when I tried it," and here we are a decade later feeding that to an LLM.
These tools can write average code. That's what they've mostly been fed; that's what they're aiming for when they do their number crunching. The more specifically one prompts, I expect, the more acceptable that average code will be. In some cases, average turns out to be shockingly bad (actually, based on a couple of decades' experience in the game, average is generally pretty bad - I surely must have been churning out some average, bad code twenty years ago). If I want better than average, I'm going to have to do it myself.
So it can write better code than your below average software engineer.
It still cuts 40-50% of the workforce out.
For above average engineers it's very good.
For bottom half not so much.
Translation for Mgrs - it replaces offshore completely.
And it will run rings around me in all the languages I don't know; every case in which my standard would be shockingly bad (I speak no APL whatsoever, for example) it would do better (in some cases, though, it would confidently produce an outcome that was actually worse than my null outcome).
The "below average" engineers are largely the juniors and the programming non-programmers.
The juniors can't be replaced, because all senior engineers were once juniors.
The non-programmers won't be replaced because they are not really programmers to begin with, so there is nothing to replace.
Translation for Mgrs - it supercharges offshore immensely. ftfy
You left out the key line “and you don’t believe me, wait six months”. These models are getting better all the time. The term “vibe coding” was only coined a year ago, around the same time as the release of Claude Code.
It doesn’t matter if you don’t think it’s good yet, because it’s brand new tech and it keeps improving.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
How many 6-month periods do we have to go through before we can all admit that this isn't actually the case?
Not that many, as the new crop of junior devs grows up not writing and barely reading code.
My fear is management saying: "here are two juniors and a Claude, now produce the output of 10 seniors". It is not working out? You must be using it wrong. You don't want the juniors? Too bad.
Programming brings me joy in two different ways.
1. Crafting something beautiful. Figuring out correct abstractions and mapping them naturally to language constructs. Nailing just the right amount of flexibility, scalability and robustness. Writing self-explanatory, idiomatic code that is a pleasure to read. It’s an art.
2. Building useful things. Creating programs that are useful to myself and to others, and watching them bring value to the world. It’s engineering.
These things have utility, but they are also enjoyable in themselves. As best I can tell, your emotional response to coding agents depends on how much you care about these two things.
AI has taken away the joy of crafting beautiful things, and has amplified the joy of building things by more than 10x. Safe bet: It will get to 100x this year.
I am very happy with this tradeoff. Over the years I grew to value building things much more highly. 20yo me would’ve been devastated.
"They can write code better than you or I can, and if you don’t believe me, wait six months." They've been saying that for years. Stop believing it.
I tried out Claude Code for the first time today, and I was a little bit disappointed after all the comments I've been reading about it. I didn't so much notice a speed difference as I did just not having to think very hard while I was working, compared to writing everything myself.
Perhaps I’m too opinionated / a micromanager.
Also, it’s always six months from now, because otherwise you could just point at the hundred ways they’re wrong right now. It’s nothing but the ol’ dotcom “trust me, bro” kind of marketing.
> I didn’t ask for the role of a programmer to be reduced to that of a glorified TSA agent, reviewing code to make sure the AI didn’t smuggle something dangerous into production.
This may be the perspective of some programmers. It doesn't seem to be shared by the majority of software engineers I know and read and listen to.
Do you mean the perspective that he is a "glorified TSA agent" or that he doesn't like it? Because in this thread it seems that some people agree but they just like it :)
I disagree the opportunities created for software engineers are reduced to those of a "glorified TSA agent".
We now have more opportunity than ever to create more of the things we have wanted to. We are able to spend more time leaning into our abilities of judgement, creativity, specific knowledge, and taste.
Countless programming frustrations are gone. I, and all those I talk to, are having more fun than we have ever had.
I'm still not sure what analogy fits for me. It's closer to product manager/maestro/artist/architect/designer that helps a number of amazing systems create great code.
I often venerate antiques and ancient things by thinking about how they were made. You can look at a 1000-year-old castle and think: This incredible thing was built with mules and craftsmen. Or look at a gorgeous, still-ticking 100-year-old watch and think: This was hand-assembled by an artist. Soon I'll look at something like the pre-2023 Linux kernel or Firefox and think: This was written entirely by people.
This is romanticising the past.
The modal person just trying to get their job done wasn't a software artisan; they were cutting and pasting from Stack Overflow, using textbook code verbatim, and using free and open-source code in ways that would likely violate the letter and spirit of the license.
If you were using technology or concepts that weren't either foundational or ossified, you found yourself doing development through blog posts. Now, you can at least have a stochastic parrot that has read the entire code and documentation and can talk to it.
At least with physical works (for now, anyway), the methods the artisans employ leave tell-tale signs attesting to the manner of construction, so that someone at least has the choice of going the "hand made" route, and others, even lay people without special tooling, can tell that it indeed was hand made.
Fully AI generated code has similar artifacts. You can spot them pretty easily after a bit. Of course it doesn't really matter for the business goals, as long as it works correctly. Just like 99% of people don't care if their clothing was machine made vs. handmade. It's going to be a tiny minority that care about handmade software.
From a blog post last month by the same author:
> Today, I would say that about 90% of my code is authored by Claude Code. The rest of the time, I’m mostly touching up its work or doing routine tasks that it’s slow at, like refactoring or renaming.
> I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it. Maybe it’s because the initial tribal battle lines have clouded everybody’s judgment, or maybe it’s because we inhabit different worlds where the technology is either better or worse (I still don’t think LLMs are great at UI for example), but there’s just a lot of patently unhelpful discourse out there, and I’m tired of it.
https://nolanlawson.com/2026/01/24/ai-tribalism/
If you're responding to this with angry anti-AI rants (or wild AI hype), might want to go read that post.
I was very impressed by AI-generated CSS when I didn't know CSS.
Then I learned CSS.
Now I am not as impressed by AI-generated CSS.
This seems to be a general rule for AI-generated anything. It's impressive in domains you're not an expert in. Much less so on domains you are an expert in.
That's how I felt about human-generated CSS 10 years ago.
If AI is good enough that juniors wielding it outproduce seniors, then the juniors are just... overhead. The company would cut them out and let AI report to a handful of senior architects who actually understand what's being built. You don't pay humans to be a slow proxy for a better tool.
If the tools get good enough to not need senior oversight, they're good enough to not need junior intermediaries either. The "juniors with jetpacks outpacing seniors" future is unrealistic and unstable—it either collapses into "AI + a few senior architects" or "AI isn't actually that reliable yet."
Or it collapses when the seniors have to retire anyway. Who instructs the LLM when there’s nobody who understands the business?
I’m sure the plan is to create a paperclip maximizing company which is fully AI. And the sea turned salty because nobody remembered how to turn it off.
I get the grief about AI, but I don't share it.
After ten years of professional coding, LLMs have made my work more fun. Not easier in the sense of being less demanding, but more engaging. I am involved in more decisions, deeper reviews, broader systems, and tighter feedback loops than before. The cognitive load did not disappear. It shifted.
My habits have changed. I stopped grinding algorithm puzzles because they started to feel like practicing celestial navigation in the age of GPS. It is a beautiful skill, but the world has moved on. The fastest path to a solution has always been to absorb existing knowledge. The difference now is that the knowledge base is interactive. It answers back and adapts to my confusion.
Syntax was never the job. Modeling reality was. When generation is free, judgment becomes priceless.
We have lost something, of course. There is less friction now, which means we lose the suffering we often mistook for depth. But I would rather trade that suffering for time spent on design, tradeoffs, and problems that used to be out of reach.
This doesn't feel like a funeral. It feels like the moment we traded a sextant for a GPS. The ocean is just as dangerous and just as vast, but now we can look up at the stars for wonder, rather than just for coordinates.
I agree with nolanlawson's sentiment. What's interesting is that many of the opposing statements here seem to be less interested in the actual code, and more interested in the final state. Both are valid, but one is going away due to technological advancement. That is the mourning.
There are some of us who enjoyed the code as a thing to explore. Others here don't seem to like that as much.
I've never bought into it because like 80% of the work the world does is CRUD-level stuff which should be boring and simple so it can be readable and maintainable.
The craftspeople doing the other 20% of the code are at the top end of the skill spectrum, but AI is starting from the bottom and working its way up. They should be the least worried about AI taking over their output.
This is like throwing together dozens of stick frame homes that look alike vs. building custom log, brick, or stone houses. No one is going to be tearing down my drywall and marveling at how well the studs are spaced.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
You can use AI to write all your code, but if you want to be a programmer and can't see that the code is pretty mid then you should work on improving your own programming skills.
People have been saying the 6 month thing for years now, and while I do see it improving in breadth, quality/depth still appears to be plateauing.
It's okay if you don't want to be a programmer though; you can be a manager and let AI do an okay job at being your programmer. You had better be driven to be a good manager though. If you're not... then AI can do an okay job of replacing you there too.
I don't get the hype.. And I don't think we will reach peak AI coding performance any time soon.
Yes, watching an LLM spit out lots of code is for sure mesmerizing. Small tasks usually work ok, the code kinda compiles, so for some scenarios it can work out.. BUT anyone serious about software development can see what a piece of CRAP the code is.
LLMs are great tools overall, great to bounce ideas, great to get shit done. If you have a side project and no time, awesome.. If your boss/company has a shitty culture and you just want to get the task done, great. Got a mundane coding task, hate coding, or your code wont run in a critical environment? please, LLM that shit over 9000..
Remember though, an LLM is just a predictor, a noisy, glorified text predictor. Only when AI stops optimizing for short term gains, has a built-in long term memory architecture (similar to humans), AND can produce something of Linux-kernel-level quality and size, then we can talk..
Super weird comments on this thread, to the point I would think it's brigaded or some (most) comments are straight up AI generated. Hacker News has definitely changed. The tone on AI has shifted since a few months ago, I guess also because many people here are working on AI. Almost any new startup is AI-adjacent now; it's no surprise, roughly 120 out of 150 YC startups are AI. So there is a big push on this forum to keep the hype and sentiment going.
The hard part is always the last 20%.
I was thinking the same thing as I scrolled through. Half the comments are some AI generated homage to AI. I’ve seen Anthropic pushing so much marketing BS on X/Twitter that wouldn’t surprise me if it extended to HN.
I have junior people on my team using Cursor and Claude, it’s not all great. Several times they’ve checked in code that also makes small yet breaking changes to queries. I have to watch out for random (unused) annotations in Java projects and then explain why the tools are wrong. The Copilot bot we use on GitHub slows down PR reviews by recommending changes that look reasonable yet either don’t really work or negatively impact performance.
Overall, I'd say AI tooling has maybe close to doubled the time I spend on PR reviews. More knowledgeable developers do better with these tools, but they also fall for the tooling's false confidence from time to time.
I worry people are spending less time reading documentation or stepping through code to see how it works out of fear that “other people” are more productive.
I think you need to lighten up and adapt the way programmers have adapted since the 50s. There's been a lot of pomposity over the past ten years around "craft"; you'd think we're all precious Renaissance artists pining for recognition over our heavenly crafted code rather than engineers. I'm in my late forties also, in case that matters at all; I don't think it does. If anything that gives me an advantage to spot and quickly filter out the 1/10 insanely bad suggestions that AI pukes out and to know when it's giving me decent code. Who knows where it will lead, but stubbornly refusing to move won't help. Please take this in a positive way, rather than as a dig. Be positive. Also think of all the job opportunities you will have rewriting all the crap code that gets punted out over the next 10 years. You're going to be rich. Seriously tho, if you are in your forties you have the advantage of leveraging AI yourself while also knowing when it is bullshitting you.
“We’ll miss creating something we feel proud of”
I still feel proud when skillfully guiding a set of AI agents to build from my imagination. Especially when it was out of my reach just 6 months ago.
I'm a 49-year-old veteran who started at just 10 years old and has continued to find pure passion in it.
I wonder if this is just a matter of degree. In a few years (or less) you may not have to "skillfully guide" anything. The agents will just coordinate themselves and accomplish your goals after you give some vague instruction. Will you still feel proud? Or maybe a bit later then agents will come up with their own improvements and just ship them without any input at all. How about then?
Then we can finally turn off the computers and go outside to play with friends.
That requires thinking. Let's just ship now, think later. Does it matter? Show me the money and all that. We will all just ride off into the sunset with Butch Cassidy and the Sundance Kid, I'm sure.
Some code is worth transcribing by hand — an ancient practice in writing, art and music.[0] Some isn't even worth looking at.
I find myself, ironically, spending more time typing out great code by hand now. Maybe some energy previously consumed by tedium has been freed up, or maybe the wacky machines brought a bit of the whimsy back into the process for me.
[0] And in programming, for the readers of Zed Shaw's books :)
I do not mourn typing in code.
But I am still quite annoyed at the slopful nature of the code that is produced when you're not constantly nagging it to do better.
We've RLed it to produce code that works by hook or by crook, putting infinite fallback paths and type casts everywhere rather than checking what the semantics should be (a sketch of the pattern follows below).
Sadly I don't know how we RL taste.
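To make that fallback-path pattern concrete, here's a minimal C sketch (the function and its names are hypothetical, purely to show the shape):

    #include <stdio.h>
    #include <stdlib.h>

    /* The "by hook or by crook" style: every questionable input gets a
       silent fallback, so the function always "works". */
    double parse_ratio_sloppy(const char *s) {
        if (!s) return 0.0;       /* silent fallback */
        double v = atof(s);       /* atof hides parse errors entirely */
        if (v < 0.0) return 0.0;  /* another silent fallback */
        return v;
    }

    /* Checking the semantics instead: bad input is reported, not hidden. */
    int parse_ratio(const char *s, double *out) {
        char *end;
        if (!s) return -1;
        double v = strtod(s, &end);
        if (end == s || *end != '\0') return -1;  /* not a number at all */
        if (v < 0.0 || v > 1.0) return -1;        /* outside the documented range */
        *out = v;
        return 0;
    }

    int main(void) {
        double r;
        printf("sloppy(\"abc\") = %f\n", parse_ratio_sloppy("abc")); /* 0.0: error swallowed */
        printf("strict(\"abc\") = %d\n", parse_ratio("abc", &r));    /* -1: error surfaced */
        return 0;
    }

The first version can never fail, which is exactly why it's slop: the caller learns nothing when the input was garbage.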
I hope that "our craft" which now produces, largely, vulnerable buggy bloatware actually dies.
Perhaps people or machines will finally figure out how to make software that actually works without the need for weekly patching
Something tells me a non-deterministic code generator won't be the solution to this problem.
Humans are also non-deterministic code generators though. It may well be that an LLM is more deterministic or consistent at building reliable code than a human.
You're missing the point. Consider this:
Mathematicians use LLMs. Obviously, they don't trust the LLM to do math. But an LLM can help with formalizing a theorem, then finding a formal proof. That's usually very tedious work - but LLMs are _already_ quite good at it. In the end you get a proof which gets checked by normal proof-checking software (not an LLM!), and which you can inspect, break into parts, etc. (a toy example follows below).
You really need to look into the details rather than dismiss wholesale. ("It made a math error, so it's bad at math" is wrong.)
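A toy illustration in Lean 4 (the theorem and its name are mine, picked only as an example):

    -- Whoever (or whatever) produced this proof term, the Lean kernel
    -- independently verifies it; an incorrect proof is simply rejected.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

The LLM can grind out tedious formalization like this at scale, but acceptance is decided by the checker, not by trusting the model.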
Like other tech-disrupted crafts before this - think furniture making or farming - that's how it goes. From hand-made craft, to mass production factories (the last couple of decades), to fully automated production.
The craft was dying long before LLMs. Started in dotcom, ZIRP added some beatings, then LLMs are finishing the job.
This is fine, because like in furniture making, the true craftsmen will be even more valuable (overseeing farm automation, high end handmade furniture, small organic farms), and the factory worker masses (ZIRP enabled tech workers) will move on to more fulfilling work.
That’s not how it goes for the worker. If you are a capitalist then it doesn’t matter, you own the means of production. The laborer, however, has to learn new skills, which take time and money. If your profession no longer exists, unless you have enough capital to retool/be a capitalist, then you will personally get poorer.
I'm not sure comparing artisanal software to woodworking or organic farming is possible.
With woodworking and farming you get some physical goods as a result. Some John Smith who buys furniture can touch nice cherry paneling and appreciate the joinery and grain. With farming he can taste delicious organic tomatoes and cucumbers, make food with them.
Would this John Smith care at all about how some software is written as long as it does what he wants and it works reliably? I'm not sure.
Where do people find this optimism? I reckon when the software jobs fall, everything else will follow shortly too. That's just the first target because it's what we know, and the manual stuff is a little harder for now. The "good news" is everyone might be in the same boat, so the system will have to adapt.
Software, and most STEM based jobs, have a lot of determinism and verifiability, plus some way to reduce the cost of failure, so brute force iteration can cover the rest. There is often "a correct answer". They've also never been truly disrupted until now, which makes them more vulnerable than almost any other job.
Most jobs don't have the same level of verification and/or repeatability. Some factors include:
* Physical constraints: Even for jobs with productive output, if they are physical it will take a long time for AI, and more importantly energy density, to catch up. Robots have a while to go as well - in the end human hands and your metabolism/energy density will be worth more than your brain/intelligence.
* Cost of failure/can't repeat: For things like building, the cost of failure is high (e.g. disposal, cleanup, more resources, etc) -> even 70% on a "building bench" benchmark would be completely inadequate without a low cost to repeat. Many jobs are also already largely automated but scaled (e.g. mining, manufacturing, etc) - they've already gone through the wave.
* Human need for its own sake: Other jobs cater not just to productive output, but to some human need that has never been made more efficient (e.g. care jobs). There are jobs where a human is more effective in the medium term because the receiver needs it from a human.
No -> this just affects white collar STEM based roles. Thinking we are in it together is just another form of "cope" sadly. There's a rational reason why others have optimism while we SWE's are now full of anxiety and dread.
From the people it doesn't affect, given their current place in many societies (nurses, builders, etc etc), there will be little sympathy.
People have to stop talking like LLMs solved programming.
If you're someone with a background in Computer Science, you should know that we have formal languages for a reason, and that natural language is not as precise as a programming language.
But anyway, we're at peak AI hype; hitting the top of HN is worth more than a reasonable take. Reasonableness doesn't sell, after all.
So here we're seeing yet another text about how the world of software was solved by AI and being a developer is an artifact of the past.
> we have formal languages for a reason
Right? At least on HN, there's a critical mass of people loudly ignoring this these days, but no one has explained to me how replacing a formal language with an English-language-specialized chatbot - or even multiple independent chatbots (aka "an agent") - is a good tradeoff to make.
It's "good" from the standpoint of business achieving their objectives more quickly. That may not be what we think of as objectively good in some higher sense, but it's what matters most in terms of what actually happens in the world.
Does it really matter that English is not as precise if the agent can make a consistent and plausible guess what my intention is? And when it occasionally guesses incorrectly, I can always clarify.
You're right, of course, but you should consider that all formal language starts as an informal language idea in the mind of someone. Why shouldn't that "mind" be an LLM vs. a human?
I think mostly because an LLM is not a "mind". I'm sure there'll be an algorithm that could be considered a "mind" in the future, but present day an LLM is not it. Not yet.
Yes, but the people who talk to me as Software Engineer about what to build also talk to me only in natural language, not a formal language.
This is in my opinion the greatest weakness of everything LLM related. If I care about the application I'm writing, and I believe I should if I bother doing it at all, it seems to me that I should want to be precise and concise at describing it. In a way, the code itself serves as a verification mechanism for my thoughts and whether I understand the domain sufficiently.
English or any other natural language can of course be concise enough, but when being brief they leave much to imagination. Adding verbosity allows for greater precision, but I think as well that that is what formal languages are for, just as you said.
Although, I think it's worth contemplating whether the modern programming languages/environments have been insufficient in other ways. Whether by being too verbose at times, whether the IDEs should be more like databases first and language parsers second, whether we could add recommendations using far simpler, but more strict patterns given a strongly typed language.
My current gripes are auto imports STILL not working properly in most popular IDEs, or an IDE not finding a referenced entity from a file if it's not currently open... LLMs sometimes help with that, but they are extremely slow compared to local cache resolution.
Long term I think more value will be in directly improving the above, but we shall see. AI will stay around too of course, but how much relevance it'll have in 10 years time is anybody's guess. I think it'll become a commodity, the bubble will burst and we'll only use it when sensible after a while. At least until the next generation of AI architecture will arrive.
I can relate coz once upon a time I enjoyed reading poetic code - which I later found to be impossible to modify or extend.
> Eventually your boss will start asking why you’re getting paid twice your zoomer colleagues’ salary to produce a tenth of the code.
And then I couldn’t relate because no one ever paid me for lines of code. And the hardest programming I ever did was when it took me 2 days to write 2 lines of C code, which did solve a big problem.
I abhorred the LOC success metric because we had to clean up after those who dumped their code diarrhea to fool those who thought every line of code is added value. Not to mention valuing LOC strictly makes you a junior programmer.
E.g. you have to know more to do the following in 2 lines (yes, you can):
    t = a; a = b; b = t;
According to LOC missionaries, these 3 statements are more expensive to write, and show that you're a better programmer, than the XOR swap does. But it's actually more expensive to run than the XOR swap, and it's more expensive to hire the person who can write the XOR swap. (Not endorsing clever/cryptic code, just making a point about LOC)
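For reference, here are both versions as runnable C (my sketch, not part of the original point; the XOR trick has caveats of its own):

    #include <stdio.h>

    int main(void) {
        int a = 3, b = 5, t;

        /* Temp-variable swap: three statements, trivially readable. */
        t = a; a = b; b = t;

        /* XOR swap: no temp variable. Caveat: it silently zeroes both values
           if the two operands alias the same object, and modern compilers
           usually optimize the temp version at least as well. */
        a ^= b; b ^= a; a ^= b;

        printf("%d %d\n", a, b); /* prints "3 5": two swaps restore the originals */
        return 0;
    }

Either way, the line count tells you nothing about the cost, which is the point.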
So, if the LOC missionaries are out of a job because of LLMs, I will probably celebrate.
It's the end goal that matters in this context.
A boss who counts LOC will be fired, or will bankrupt the company/team.
One of my friends, who never lost touch with coding in his 30+ year career, recently left FAANG and told me it's the best time to realize his dream of building a startup. And it's not because AI can write code for him - it would take him about 12 months to build the system from the ground up manually - but because nobody can replicate what he's doing by using AI only. His analogy was: "it's like everyone is becoming a really good cyclist with electric bikes, even enthusiast cyclists, but you secretly train for the Tour de France".
I fall into the demographic discussed in the article, but I've approached this with as much pragmatism as I can muster. I view this as a tool to help improve me as a developer. Sure, there will be those of us who do not stay ahead of the curve (is that even possible?) and get swallowed up, but technology has had this effect on many careers in the past. They just change into something different and sometimes better. It's about being willing to change with it.
> So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you, because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike. Eventually your boss will start asking why you’re getting paid twice your zoomer colleagues’ salary to produce a tenth of the code.
I might be mistaken, but I bet they said the same when Visual Basic came out.
The blacksmith analogy is poetic but misleading. Blacksmithing was replaced by a process that needed no blacksmith at all. What's happening with code is closer to what synthesizers did to music — the instrument changed, the craft didn't die.
Musicians mourned synthesizers. Illustrators mourned Photoshop. Typesetters mourned desktop publishing. In every case the people who thrived weren't the ones who refused the new tool or the ones who blindly adopted it. They were the ones who understood that the tool absorbed the mechanical layer while the taste layer became more valuable, not less.
The real shift isn't from hand-coding to AI-coding. It's from "I express intent through syntax" to "I express intent through constraints and review." That's still judgment. Still craft. Just a different substrate.
What we're actually mourning is the loss of effort as a signal of quality. When anyone can generate working code, the differentiator moves upstream to architecture, to knowing what to build, to understanding why one approach fails at scale and another doesn't. Those are harder skills, not easier ones.
I'm forty-something and it's game over for my career. The grey in my hair means I never get past the first round. The history on my resume means I'm lucky to get a round at all. GPT and Claude have fundamentally changed how I view work and, frankly, I'm over it.
I’m in consulting now and it’s all the same crap. Enterprises want to “unleash AI” so they can fire people. Maximize profits. My nephews who are just starting their careers are blindly using these tools and accepting the PR if it builds. Not if it’s correct.
I’m in awe of what it can do but I also am not impressed with the quality of how it does it.
I’m fortunate to not have any debt so I can float until the world either wises up or the winds of change push me in a new direction.
I liked the satisfaction of building something “right” that was also “useful”. The current state of Opus and Codex can only pretend to do the latter.
There's a commercial building under construction next to my office. I look down on the construction site, and those strapping young men are digging with their big excavators they've been using for years and taking away the dirt with truck and trailer.
Why use a spade? Even those construction workers use the right sized tools. They ain't stupid.
Them using excavators and trucks etc. to move dirt is the same as us using a compiler to compile code into an executable.
LLM would be if the digging and hauling of the dirt happened without any people involved except the planning of logistics.
> LLM would be if
you'd sometimes discover a city communication line destroyed in the process; or the dirt hauled on top of a hospital, killing hundreds of orphaned kids with cancer; or kittens mixed into the concrete instead of cement.
And since you clicked "agree" on that Anthropic EULA, you can't sue them for it, so you now hire 5 construction workers to constantly oversee the work.
It's still net positive... for now at least... But far from being "without any people". And it'll likely remain this way for a long time.
An LLM would be equal to a monstrous moving castle with a million robotic hands that can somehow collaterally extract piles of dirt from the earth, doing a lot of damage to our planet.
This is the right take IMO, so thanks for a balanced comment.
I would add a nuance from OP's perspective, sorta: a close friend of mine works in construction, and often comments on how projects can differ. On some, everyone in the entire building supply chain is really inspired to work on an interesting project because of either its usefulness or its craftsmanship (the 2 of which are related), and on some, everyone's just trying to finish the project as cheaply and quickly as possible.
It’s not that the latter hasn’t existed in tech, but it does appear that there is a way to use LLMs to do more of the latter. It’s not “the end of a craft”, but without a breakthrough (and something to check the profit incentive) it’s also not a path to utopia (like other comments seem to be implying)
Craftsmanship doesn’t die, it evolves, but the space in between can be a bit exhausting as markets fail to understand the difference at first.
I think OP is coming at this more from an artisan angle. Perhaps there were shoveler artisans who took pride in the angle of their dirt-shoveling. Those people perhaps do lament the advent of excavators. But presumably the population who find code beautiful vs the art of shoveling are of different sizes
The hypocrisy is outstanding.
For years developers have worshipped at the altar of innovation, citing their role in decimating many old industries and crafts as just what we do. Now it's come for you.
Reap what we sow, people.
I highly doubt this specific person has looked down on others for also mourning the death of their craft. You can't rightly call someone a hypocrite and cite "developers" as your source.
> We’ll miss the feeling of holding code in our hands
I agree. I started feeling this a few months ago, when I was only writing the architecture and abstractions and letting AI fill in the gaps. It seems in the next few months it could probably do even more than that. But is it so bad? I agree that I can't really mold an entire pot by hand any more. But if you ask AI to do it, it will create a pot with cracks in it, and it will be your job to either plaster them over or fill those cracks with gold.
I feel coding is going to be similar to kintsugi after this is all over
Sort of ironic. My dad coded on punch cards and hated it, hated the physicality of it. Now he super loves AI, having left the field 20 years ago due to language fatigue.
I've been hearing "the LLM can write better code than a human, and if you don't believe me, wait six months" for years now. Such predictions haven't been true before and I don't believe they are true now.
This makes me think about the craftsmen whose careers vanished or were transformed through the ages by industries, machines, etc. They did not have online voices to write 1000s of blog posts every day. Nor did they have people who could read their woes online.
Maybe not 1000s and not online, but we do have journals, articles, essays, and so on written by such people throughout history. And they had similar sentiments.
> Someday years from now we will look back on the era when we were the last generation to code by hand. We’ll laugh and explain to our grandkids how silly it was that we typed out JavaScript syntax with our fingers. But secretly we’ll miss it.
Why will I miss it? I will be coding my small scripts and tools and hobby projects by hand because there is no deadline attached to them. Hell, I will also tag them as "bespoke hand-crafted free range artisanal" code. There will be a whole hipster category of code that is written as such. And people will enjoy it as they enjoy vinyl records now. Many things would have changed by then but my heart will still be a programmer's heart.
Either that or the “bespoke hand-crafted artisanal free-range code” will be the only thing still maintainable because vibe coders made such a mess
> wait six months
from other sources: 6-12 months, by end of 2025, ChatGPT 7.
It's concern trolling and astroturfing at its best.
One camp of fellow coders who are saying their productivity grew 100x but we are all doomed; another camp of AI enthusiasts who got the ability to deliver products and truly believe in their newly acquired superiority.
It's all either true or false, but if in six months it becomes true, we'll know it, each one of us.
However, if it's all BS and in six months there's a Windows 95 written by an LLM but real code still requires organic intelligence, there won't be any accountability, and that's sad.
I don't mourn coding for itself, since I've always kinda disliked that side of my work (numerical software, largely).
What I do mourn is the reliability. We're in this weird limbo where it's like rolling a die for every piece of work. If it comes up 1-5, I would have been better off implementing it myself. If it comes up 6, it'll get it done orders of magnitude faster than doing it by hand. Since the overall speedup is worthwhile, I have to try it every time, even if most of the time it fails. And of course it's a moving target, so I have to keep trying the things that failed yesterday because today's models are more capable.
I get where this is coming from. But at the same time, AI/LLMs are such an exciting development. As in "maybe I was wrong and the singularity wasn't bullshit". If nothing else, it's an interesting transition to live through.
I agree, in the sense that any major change is "interesting". Doesn't mean they are all good though, many major changes have been bad historically. The overall net effect has been generally good, but you never know with any individual change.
Write a blog post promoting inevitability of AI in software development while acknowledging feelings of experienced software engineers.
I on the other hand await the coming of the Butlerian Jihad.
Something I've been unable to get any AI agent to do: add HDR support to Xorg.
Certain things like this will be out of the realm of AI, because it doesn't understand the requirements.
> We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM.
I'll miss it not because the activity becomes obsolete, but because it's much more interesting than sitting till 2am trying to convince LLM to find and fix the bug for me.
We'll still be sitting till 2am.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
I've been hearing this for the last two years. And yet, LLMs, given abstract description of the problem, still write worse code than I do.
Or did you mean type code? Because in that case, yes, I'd agree. They type better.
I am not confident that AI tooling can diagnose or fix this kind of bug. I’ve pointed Claude Opus at bugs that puzzle me (with only one code base involved) and, so far, it has only introduced more bugs in other places.
I'm not saying it can btw. I'm arguing for the opposite.
And for the record, I'm impressed at issues it can diagnose. Being able to query multiple data sources in parallel and detect anomalies, it sometimes can find the root cause for an incident in a distributed system in a matter of minutes. I have many examples when LLMs found bugs in existing code when tasked to write unit tests (usually around edge cases).
But complex issues that stem from ambiguous domain are often out of reach. By the time I'm able to convey to an LLM all the intricacies of the domain using plain English, I'm usually able to find the issue myself.
And that's my point: I'd be more eager to run the code under a debugger till 2am than to push an LLM to debug for me (that can easily take till 2am too, but I'd be less confident I can succeed at all).
The acceleration of AI has thrown into sharp relief that we have long lumped all sorts of highly distinct practices under this giant umbrella called "coding". I use CC extensively, and yet I still find myself constantly editing by hand. Turns out CC is really bad at writing kubernetes operators. I'd bet it's equally bad at things like database engines or most cutting edge systems design problems. Maybe it will get better at these specific things with time, but it seems like there will always be a cutting edge that requires plenty of human thought to get right. But if you're doing something that's basically already been done thousands of times in slightly different ways, CC will totally do it with 95% reliability. I'm ok with that.
It's also important to step back and realize that it goes way beyond coding. Coding is just the deepest tooth of the jagged frontier. In 3 years there will be blog posts lamenting the "death of law firms" and the "death of telemedicine". Maybe in 10 years it will be the death of everything. We're all in the same boat, and this boat is taking us to a world where everyone is more empowered, not less. But still, there will be that cutting edge in any field that will require real ingenuity to push forward.
I think there's clearly a difference in opinion based on what you work on. Some people were working on things that pre-CC models also couldn't handle and then CC could, and it changed their opinions quickly. I expect (but cannot prove of course) that the same will happen with the area you are describing. And then your opinion may change.
I expect it to, eventually. But then the cutting edge will have simply moved to something else.
I agree that it's very destabilizing. It's sort of like inflation for expertise. You spend all this time and effort saving up expertise, and then those savings rapidly lose value. At the same time, your ability to acquire new expertise has accelerated (because LLMs are often excellent private tutors), which is analogous to an inflation-adjusted wage increase.
There are a ton of variables. Will hallucinations ever become negligible? My money is on "no" as long as the architecture is basically just transformers. How will compiling training data evolve with time? My money is on "it will get more expensive". How will legislators react? I sure hope not by suppressing competition. As long as markets and VC are functioning properly, it should only become easier to become a founder, so outsized corporate profits will be harder to lock down.
To the people who are against AI programming, an honest question: why do you not program in assembly? Can you really say "you" "programmed" anything at all if a compiler wrote your binaries?
This is a 100% honest question. Because whatever your justification to this is, it can probably be used for AI programmers using temperature 0.0 as well, just one abstraction level higher.
I'm 100% honestly looking forward to finding a single justification that would not fit both scenarios.
I'm all for AI programming.
But I've seen this conversation on HN already 100 times.
The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.
I personally don't agree at all, in the sense that I don't think it matters. I've run into compiler bugs, and more library bugs than I can count. The real world is just as messy as LLMs are, and you still need the same testing strategies to guard against errors. Development is always a slightly stochastic process of writing stuff that you eventually get to work on your machine, and then fixing all the bugs that get revealed once it starts running on other people's machines in the wild. LLMs don't write perfect code, and neither do you. Both require iteration and testing.
I don't see this as a frequent answer tbh, but I do frequently see claims that this is the critique.
I wrote much more here[0], and honestly I'm on the side of Dijkstra; it doesn't matter if the LLM is deterministic or probabilistic.
His argument has nothing to do with deterministic systems[1] and everything to do with the precision of the language. His argument comes down to "we invented symbolic languages for a good reason".
[0] https://news.ycombinator.com/item?id=46928421
[1] If we want to be more pedantic we can actually codify his argument more simply by using some mathematical language, but even this will take some interpretation: natural language naturally imposes a one-to-many relationship when processing information.
I just answered exactly that. I think that AI agents code better than humans and are the future.
But the parent argument is pretty bad, in my opinion.
There's a big difference between deterministic abstraction over machine code, and probabilistic translation of ambiguous language into machine code.
The compiler is your interface.
If you treat the LLM as your interface... Well, I wouldn't want to share a codebase with you.
I'm not particularly against AI programming, but I don't think these two things are equivalent. A compiler translates code according to a specification in a deterministic way: the same compiler produces the same output from the same code; it is all completely controlled. AI is not at all deterministic; temperature is built into LLMs, and on top of that there's the lack of specificity in prompts and our spoken languages. The difference in control is significant enough that I can't put compilers and AI coding agents into the same category, even though both take some text and produce other text/machine code.
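To make the temperature point concrete, here's a minimal toy sketch of what a sampler does with that knob (hypothetical logits, not any particular vendor's implementation; and note that in practice even temperature 0 isn't perfectly reproducible, since batched GPU floating-point work adds its own nondeterminism):

    import numpy as np

    def next_token(logits: np.ndarray, temperature: float) -> int:
        # Toy decoder step: pick a token id from raw logits.
        if temperature == 0.0:
            # Greedy decoding: always the single most likely token.
            return int(np.argmax(logits))
        # Rescale and sample: repeated runs can pick different tokens.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(np.random.default_rng().choice(len(logits), p=probs))

    logits = np.array([2.0, 1.5, 0.3])  # the model "prefers" token 0
    print(next_token(logits, 0.0))      # always 0
    print(next_token(logits, 1.0))      # usually 0, sometimes 1 or 2

A compiler simply has no analogous knob: its output is a function of the input program (modulo the caveat about unspecified details raised further down this thread).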
A chef can cook a steak better than a robo-jet-toaster, even though neither of them has raised the cow.
It's not about having abstraction levels above or below (BTW, in 21st century CPUs, the machine code itself is an abstraction over much more complex CPU internals).
It's about writing a more correct, efficient, elegant, and maintainable code at whichever abstraction layer you choose.
AI still writes messier, sloppier, buggier, more redundant code than a good programmer can when they care about the craft of writing code.
The end result is worse for those who care about the quality of code.
We mourn, because the quality we paid so much attention to is becoming unimportant compared to the sheer quantity of throwaway code that can be AI-generated.
We're fine dining chefs losing to factory-produced junk food.
Even if you are not coding in assembly, you still need to think. Replace the LLM with a smart programmer: I don't like the other guy doing all the thinking for me. It's much better if it's a collaborative process, even if the other guy could have coded the perfect solution without my help. Otherwise, why am I even in the picture?
I know how to review code without looking at the corresponding assembly and have high confidence in the behavior of the final binary. I can't quite say the same for a prompt without looking at the generated code, even with temperature 0. The difference is explainability, not determinism.
Compilers are deterministic.
There is no requirement for compilers to be deterministic. The requirement is that a compiler produces something that is valid interpretation of the program according to the language specification, but unspecified details (like specific ordering of instructions in the resulting code) could in principle be chosen nondeterministically and be different in separate executions of the compiler.
For me, the whole goal is to achieve Understanding: understanding a complex system, which is the computer and how it works. The beauty of this Understanding is what drives me.
When I write a program, I understand the architecture of the computer, I understand the assembly, I understand the compiler, and I understand the code. There are things that I don't understand, and as I push to understand them, I am rewarded by being able to do more things. In other words, Understanding is both beautiful and incentivized.
When making something with an LLM, I am disincentivized from actually understanding what is going on, because understanding is very slow, and the whole point of using AI is speed. The only time when I need to really understand something is when something goes wrong, and as the tool improves, this need will shrink. In the normal and intended usage, I only need to express a desire to achieve a result. Now, I can push against the incentives of the system. But for one, most people will not do that at all; and for two, the tools we use inevitably shape us. I don't like the shape into which these tools are forming me - the shape of an incurious, dull, impotent person who can only ask for someone else to make something happen for me. Remember, The Medium Is The Message, and the Medium here is, Ask, and ye shall receive.
The fact that AI use leads to a reduction in Understanding is not only obvious, but also studies have shown the same. People who can't see this are refusing to acknowledge the obvious, in my opinion. They wouldn't disagree that having someone else do your homework for you would mean that you didn't learn anything. But somehow when an LLM tool enters the picture, it's different. They're a manager now instead of a lowly worker. The problem with this thinking is that, in your example, moving from say Assembly to C automates tedium to allow us to reason on a higher level. But LLMs are automating reasoning itself. There is no higher level to move to. The reasoning you do now while using AI is merely a temporary deficiency in the tool. It's not likely that you or I are the .01% of people who can create something truly novel that is not already sufficiently compressed into the model. So enjoy that bit of reasoning while you can, o thou Man of the Gaps.
They say that writing is God's way of showing you how sloppy your thinking is. AI tools discourage one from writing. They encourage us to prompt, read, and critique. But this does not result in the same Understanding as writing does. And so our thinking will be, become, and remain vapid, sloppy, inarticulate, invalid, impotent. Welcome to the future.
There's a balance of levels of abstraction. Abstraction is a great thing. Abstraction can make your programs faster, more flexible, and more easy to understand. But abstraction can also make your programs slower, more brittle, and incomprehensible.
The point of code is to write specification. That is what code is. The whole reason we use a pedantic and somewhat cryptic schema is that natural language is too abstract. This is the exact reason we created math. It really is even the same reason we created things like "legalese".
Seriously, just try a simple exercise and be adversarial to yourself. Describe how to do something and try to find loopholes. Malicious compliance. It's hard to defend against, and writing that spec becomes extremely verbose, right? Doesn't this actually start to become easier by using coding techniques? Strong definitions? Have we all forgotten the old saying "a computer does exactly what you tell it to, not what you intend to tell it to do"? Vibe coding only adds a level of abstraction to that. It becomes "a computer does what it 'thinks' you are telling it to do, not what you intend to tell it to do". Be honest with yourself: which paradigm is easier to debug?
Natural language is awesome because the abstraction really compresses concepts, but it requires inference of the listener. It requires you to determine what the speaker intends to say rather than what the speaker actually says.
Without that you'd have to be pedantic to describe even something as mundane as making a sandwich[1]. But inference also leads to misunderstandings and, frankly, that is a major factor in why we talk past one another on large global communication systems. Have you never experienced culture shock? Never had someone misinterpret you and then realized that their interpretation was entirely reasonable?[2] Doesn't this knowledge also help resolve misunderstandings, as you take a step back and recheck assumptions about these inferences?
Because, as you should be able to infer from everything I've said above, the problem isn't actually about randomness in the system. Making the system deterministic has only one realistic outcome: a programming language. You're still left with the computer doing what you tell it to do, but you've made this more abstract. You've only turned it into the PB&J problem[1] and frankly, I'd rather write code than instructions like the ones those kids give. Compared to the natural language the kids are using, code is more concise, easier to understand, more robust, and more flexible.
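As a toy illustration of that difference (hypothetical names; a sketch of the exercise, not a real spec):

    # Natural language: "spread the peanut butter on the bread."
    # Malicious compliance: rest the sealed jar on top of the loaf. Done!
    # Code closes that loophole by making "spread" a definition:

    from dataclasses import dataclass

    @dataclass
    class Slice:
        topping: str | None = None  # None means a bare slice

    def spread(s: Slice, topping: str) -> Slice:
        if s.topping is not None:
            raise ValueError("slice already has a topping")
        s.topping = topping
        return s

    def sandwich(a: Slice, b: Slice) -> tuple[Slice, Slice]:
        # By this definition, a sandwich is two topped slices face to face.
        if a.topping is None or b.topping is None:
            raise ValueError("both slices must be topped first")
        return (a, b)

    pbj = sandwich(spread(Slice(), "peanut butter"), spread(Slice(), "jelly"))

Every loophole you close becomes an explicit definition, which is exactly the pedantry described above, and exactly what a programming language gives you for free.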
I really think Dijkstra explains things well[0]. (I really do encourage reading the entire thing. It is short and worth the 2 minutes. His remark at the end is especially relevant in our modern world where it is so easy to misunderstand one another...)
[0] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
[1] https://www.youtube.com/watch?v=FN2RM-CHkuI
[2] Has this happened to you and you've been too stubborn to realize the interpretation was reasonable?
> and parrot it back
I don't like this particular phrase, as it suggests that LLMs are just replicating code. As far as I understand, LLMs can also create new code and algorithms (some better than others). When we think of them as copy-paste machines, we judge them unfairly and also underestimate their capabilities.
So funny, because "you’re still riding around on a fixie bike" is literally what I do. I also take a plane when I need to fly.
In that analogy that would be picking the right library/framework/service vs. vibe coding it. I do NOT need to write all the code to provide value.
Knowing which tool to use for the task is precisely what requires honing a skill and requires judgement. It's not about how many kilometers you manage to cover.
I suspect my comment will not be well received, but I notice in myself that I've passed the event horizon of being a believer, am past the honeymoon period, and am beginning to think about engineering.
My headspace is now firmly in "great, I'm beginning to understand the properties and affordances of this new medium, how do I maximise my value from it", hopefully there's more than a few people who share this perspective, I'd love to talk with you about the challenges you experience, I know I have mine, maybe we have answers to each others problems :)
I assume that the current set of properties can change; however, it seems like some things are going to be easier than others. For example, multimodal reasoning still seems to be a challenge, and I'm trying to work out if that's just hard to solve and will take a while, or if we're not far from a good solution.
I thought I'd miss all the typing and syntax, but I really don't. Everyone has their own relationship with coding, but for me, I get satisfaction out of the end product and putting it in front of someone. To the extent that I cared about the code, it mainly had to do with how much it allowed the end product to shine.
Yes, there's clearly a big split in the community where perhaps ~50% are like OP and the other ~50% are like you. But I think we should still respect the views of the other side and try to empathize.
That's why I'll only read source code written until 2024.
That Michael A. Breeden in the comments section has some good insight going on. Makes the post twice as valuable.
The majority of the code currently running in production for my company was written 5+ years ago. This was all "hand-written" and much lower quality than the AI-generated code that I am deploying these days.
Yet I feel much more connected with my old code. I really enjoyed actually writing all that code even though it wasn't the best.
If AI tools had existed 5 years ago when I first started working on this codebase, obviously the code quality would've been much higher. However, I feel like I really loved writing my old code, and if given the same opportunity to start over, I would want to rewrite it myself all over again.
I didn't come to IT for the money - back in the day it wasn't as well paid as it is today - nevertheless, if this craft were very poorly paid, I probably wouldn't have chosen this profession either. And I assume many people here wouldn't have, unless you are already semi-retired or debt-free.
I mourn a little that in 20 years possibly 50% of software jobs will be axed, and that unless you are an elite/celebrity dev, salaries will stagnate. I mourn that in the future upward mobility into the upper middle class will be harder without trying to be an entrepreneur.
I'm probably in the minority, but I've never loved dealing with syntax. The code itself always felt like a hindrance that reminded me that my brain was slowed down by my fingers. I get it though: it was tactile and it completed the loop. It felt essential for learning, even though you eventually reach a point where it slows you down the more senior you get.
AI has a ways to go before it's senior level if it ever reaches that level, but I do feel bad for juniors that survive this who never will have the opportunity to sculpt code by hand.
I feel like we are long into the twilight of mini blogs and personal sites. It's like people trying to protect automotive jobs; the vast majority were already lost.
Perhaps I'm a cynic, but I don't know.
> If you would like to grieve, I invite you to grieve with me.
I think we should move past this quickly. Coding itself is fun, but it is also labour; building something is what is rewarding.
By that logic prompting an AI is also labour.
It's not even always a more efficient form of labour. I've experienced many scenarios with AI where prompting it to do the right thing takes longer and requires writing/reading more text compared to writing the code myself.
Some years ago I was at the Burger King near the cable car turntable at Powell and Market St in San Francisco. Some of the homeless people were talking about the days when they'd been printers. Press operators or Linotype operators. Jobs that had been secure for a century were just - gone.
That's the future for maybe half of programmers.
Remember, it's only been three years since ChatGPT. This is just getting started.
From valued professional to surplus inventory just like that. Where does this process end? Because right now it's only accelerating.
> because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike
Sure, maybe it takes me a little while to ride across town on my bike, but I can reliably get there and I understand every aspect of the road to my destination. The bazooka-powered jetpack might get me there in seconds, but it also might fly me across state lines, or to Antarctica, or the moon first, belching out clouds of toxic gas along the way.
Like the cab drivers in London who have to know the city inside and out? https://wheelchairtravel.org/london-black-cab-driver-knowled...
I understand the sentiment and I've been involved in software engineering in various roles for the last 25+ years. The thing that gives me hope is that never once in that time has the problem ever been that we didn't have more work to do.
It's not like all of a sudden I'm working 2-3 hours a day. I'm just getting a lot more done.
One other helpful frame: I consider LLMs simply to be very flexible high-level 'language' Compilers. We've moved up the Abstraction Chain ever since we invented FORTRAN and COBOL (and LISP) instead of using assembly language.
We're 'simply' moving up the abstraction hierarchy again. Good!
A non-deterministic, slow, pay-to-use compiler for a language that is not precise enough for software. What an amazing abstraction!
You are just salty and old. If you're young and hip you don't need to know what you're doing just do the thing.
You know who else mourned the loss of craft? People that don't like PHP and Wordpress because they lower the barrier to entry to creating useful stuff while also leaving around a fair amount of cruft and problems that the people that use them don't understand how to manage.
Like iambateman said: for me it was never about code. Code was a means to an end, and it didn't stop at code. I'm the kind of software engineer that learned frontends, systems, databases, ETLs, etc. -- whatever was demanded of me to produce something useful, I learned and did it. We're now calling that a "product engineer". The "craft" for me was in creating useful things that were reliable and efficient, not particularly how I styled lines, braces, and brackets. I still do that in the age of AI.
All of this emotional spillage feels for naught. The industry is changing as it always has. The only constant I've ever experienced in this industry is change. I realized long ago that the day I am no longer comfortable with change is my best signal that this industry is no longer for me.
I think it's a bit different when you can opt out. If you didn't want to use PHP you didn't have to. But it's getting increasingly hard to opt out of AI.
The death of a means to an end is the birth of an end itself.
When cameras became mainstream, realism in painting went out of fashion, but this was liberating in a way as it made room for many other visual art styles like Impressionism. The future of programming/computing is going to be interesting.
At the end I felt more like a plumber, connecting pipes and building adapters where pipes didn't match.
We have CNC machines, and we still have sculptors.
Mechanising the production of code is a good thing. And crafting code as art is a good thing. It is a sign of a wider trend that we feel the need to treat these things as adversaries.
I look forward to the code-as-art countermovement. It's gonna be quite something.
Great post. Super sad state of affairs but we move on and learn new things. Programming was always a tool and now the tool has changed from something that required skill and understanding to complaining to a neural net. Just have to focus on the problem being solved more.
> Programming was always a tool
This is the narrow understanding of programming that is the whole point of contention.
I'll believe it when I start seeing examples of good and useful software being created with LLMs or some increase in software quality. So far it's just AI doom posting, hype bloggers that haven't shipped anything, anecdotes without evidence, increase in CVEs, increase in outages, and degraded software quality.
It would be helpful if you could define “useful” in this context.
I’ve built a number of team-specific tools with LLM agents over the past year that save each of us tens of hours a month.
They don’t scale beyond me and my six coworkers, and were never designed to, but they solve challenges we’d previously worked through manually and allow us to focus on more important tasks.
The code may be non-optimal and won’t become the base of a new startup. I’m fine with that.
It’s also worth noting that your evidence list (increased CVEs, outages, degraded quality) is exclusively about what happens when LLMs are dropped into existing development workflows. That’s a real concern, but it’s a different conversation from whether LLMs create useful software.
My tools weren’t degraded versions of something an engineer would have built better. They’re net-new capability that was never going to get engineering resources in the first place. The counterfactual in my case isn’t “worse software”—it’s “no software.”
It really shouldn't be this hard to just provide one piece of evidence. Are anecdotes about toy internal greenfield projects that could probably be built with a drag-and-drop no-code editor really the best this LLM revolution has to offer?
At my work ~90% of code is now LLM generated. It's not "new" software in the sense that you're describing, but it's new features, bug fixes, and so on for the software that we all work on. (Although we are working on something that we can hopefully open source later this year that is close to 100% LLM generated, and I can say, as someone who has been reviewing most of the code, that it is quite high quality.)
Well, on the surface it may seem like there’s nothing of value being created, but I can assure you every company from seed stage to unicorn is heavily using Claude Code, Cursor, and the like to produce software. At this point, most software you touch has been modified and enhanced with the use of LLMs. The difference in pace of shipping with and without AI assistance is staggering.
> every company from seed stage to unicorn is heavily using Claude Code, Cursor, and the like to produce software
> The difference in pace of shipping with and without AI assistance is staggering.
Let's back these statements up with some evidence, something quantitative, not just what pre-IPO AI marketing blog posts are telling you.
Like the new features in Windows 11? They’ve just anointed a “software quality czar” and I suspect this is not coincidence.
Coding is an abstraction. Your CPU knows nothing of type safety, bloom filters, dependencies, or code reuse.
Mourning the passing of one form of abstraction for another is understandable, but somewhat akin to bemoaning the passing of punch card programming. Sure, why not.
Your entire brain's model of the world is an abstraction over its sensory inputs. By this logic we might as well say you shouldn't mourn anything since all it means is a minor difference in the sensory inputs your brain receives.
1. It isn't that bad.
2. The tools still need a lot of direction; I still fight Claude with Opus to do basic things, and the best experiences come when I provide very specific prompts.
3. Being idealistic in a capitalist system where you have to pay your bills every month is something I could do when my parents paid my bills.
These apocalyptic posts about how everything is shit really don't match my reality at all. I use these tools every day to be more productive and improve my code, but they are nowhere close to doing my actual job, which is figuring out WHAT to do. How to do it is mostly irrelevant; once I get to that point I already know what needs to be done, and it doesn't matter if it is me or Opus producing the code.
Repost this in a couple of years and it'll be relevant. Too soon though, as it stands.
I feel like a lot of comments here are missing the point. I think the article does a fairly good job neither venerating nor demonizing AI, but instead just presenting it as the reality of the situation, and that reality means that the craft of programming and engineering is fundamentally different than it was just a few years ago.
As an (ex-)programmer in his late 40s, I couldn't agree more. I'm someone who can be detail-oriented (but, I think also with a mind toward practicality) to the point of obsession, and I think this trait served me extremely well for nearly 25 years in my profession. I no longer think that is the case. And I think this is true for a lot of developers - they liked to stress and obsess over the details of "authorship", but now that programming is veering much more towards "editor", they just don't find the day-to-day work nearly as satisfying. And, at least for me, I believe this while not thinking the change to using generative AI is "bad", but just that it's changed the fundamentals of the profession, and that when something dies it's fine to mourn it.
If anything, I'm extremely lucky that my timing was such that I was able to do good work in a relatively lucrative career where my natural talents were an asset for nearly a quarter of a century. I don't feel that is currently the case regarding programming, so I'm fortunate enough to be able to leave the profession and go into violin making, where my obsession with detail and craft is again a huge asset.
The thing he has spent his whole career doing unto others he finally did unto himself.
This inspired me to write down some scattered thoughts I have on this [0]. tl;dr: I firmly believe we kicked the can down the road and now it's too late.
Things programmers forgot to do before AI started writing a bunch of software:
1. Learn how to review code
Some tools exist, some of them are even quite good! Large organizations have tried to build best practices to balance urgency with correctness, but programmers are still quite bad at this.
2. Compare program A vs program B
In a similar vein, we simply do not know how to measure one program vs another. Or a function against another. Or even one variableName vs another_variable_name.
"It depends" Depends on what? We never sorted this out.
3. Talk to other professions
We are not the first profession forced to coordinate with Automation as a coworker, and we certainly won't be the last. We're not even the first knowledge workers to do so.
How did other laborers deal with this? We don't know because we were busy making websites.
[0]: https://bsky.app/profile/did:plc:wh7bie3ld7bmg3cz76sbjkwj/po...
Software was never governed by a single standard. It has always been closer to architecture or design, with competing schools judging quality differently. AI tools do not remove judgment; they make it unavoidable. I sit in a minimalist, almost brutalist school: fewer layers, obvious structure, and software that delivers results even if it is not fashionable.
I am feeling this loss. I spent most of my career scrupulously avoiding leadership positions because what I really like is the simple joy of making things with my own two hands.
Many are calling people like me Luddites for mourning this, and I think I am prepared to wear that label with pride. I own multiple looms and a spinning wheel, so I may be in a better position to speculate on how the Luddites felt than most people are nowadays.
And what I see is that the economic realities are what they are - like what happened to cottage industry textile work, making software by hand is no longer the economical option. Or at least, soon enough it won’t be. I can fret about deskilling all I like, but it seems that soon enough these skills won’t be particularly valuable except as a form of entertainment.
Perhaps the coding agents won’t be able to make certain things or use certain techniques. That was the case for textile manufacturing equipment, too. If so, then the world at large will simply learn to live without. The techniques will live on, of course, but their practical value will be as entertainment for enthusiasts and as a way for them to recognize one another in each other’s work.
It’s not a terrible future, I suppose, in a long enough view. The world will move on, just like it did after the Industrial Revolution. But, perhaps also like the Industrial Revolution and other similar points in history, not until after we get through another period where a small cadre of wealthy elites who own and control this new equipment use that power to usher in a new era of neofeudalism. Hopefully this time they won’t start quite so many wars while they enjoy their power trips.
> They can write code better than you or I can
Speak for yourself. They produce shit code and have terrible judgment. Otherwise we wouldn't need to babysit them so much.
To me, it’s super exciting to play ping pong with ideas up until I arrive at an architecture and interfaces that I am fine with.
My whole life I have been reading other people’s code to accumulate best practices and improve myself. While a lot of developers start with reading documentation, I have always started with reading code.
And where I was previously using the GitHub Code Search to eat up as much example code as I could, I am now using LLMs to speed the whole process up. Enormously. I for one enjoy using it.
That said, I have been in the industry for more than 15 years. And all companies I have been at are full of data silos, tribal knowledge about processes and organically grown infrastructure, that requires careful changes to not break systems you didn’t even know about.
Actually most of my time isn’t put into software development at all. It’s about trying to know the users and colleagues I work with, understand their background and understand how my software supports them in their day to day job.
I think LLMs are very, very impressive, but they have a long way to go to reach empathy.
About 20 years ago I realised the extreme immaturity of the IT field. The market was fragmented and you could literally become a billionaire from your basement (80's), or from an office (90's), etc. It was at the stage the car industry was at in the beginning of the last century. Since then the IT market has matured so much that a handful of companies have formed an oligopoly and are thriving with no possibility of threat from isolated developers. In the coming years the startup landscape will be dead, everyone will be bought up, and even computer devices will become sealed boxes with controlled buttons. Having a browser open might be a hack.
Oh no, my engineering profession requires me to use new engineering techniques due to advancements produced by engineering. Quality cringe.
It's not only that. If 1 person is able to do the job of 10 people, what do you think will happen to the other 9?
"Glorified TSA agent" is a rather gloomy, low-agency take on it. You both ask for what you want and verify the results.
Two years ago I decided to give up my career as an industry researcher to pursue a tenure-track professor position at a community college. One of the reasons I changed careers is because I felt frustrated with how research at my company changed from being more self-directed and driven by longer-term goals to being directed by upper management with demands for more immediate productization.
I feel generative AI is being imposed onto society. While it is a time-saving tool for many applications, I also think there are many domains where generative AI needs to be evaluated much more cautiously. However, there seems to be relentless pressure to “move fast and break things,” to adopt technology due to its initial labor-saving benefits without fully evaluating its drawbacks. That’s why I feel generative AI is an imposition.
I also resent the power and control that Big Tech has over society and politics, especially in America where I live. I remember when Google was about indexing the Web, and I first used Facebook when it was a social networking site for college students. These companies became successful because they provided useful services to people. Unfortunately, once these companies gained our trust and became immensely wealthy, they started exploiting their wealth and power. I will never forget how so many Big Tech leaders sat at Trump’s second inauguration, some of whom got better seats than Trump’s own wife and children. I highly resent OpenAI’s cornering of the raw wafer market and the subsequent exorbitant hikes in RAM and SSD prices.
Honestly, I have less of an issue with large language models themselves and more of an issue with how a tiny handful of powerful people get to dictate the terms and conditions of computing for society. I’m a kid who grew up during the personal computing revolution, when computation became available to the general public. I fell for the “computers for the rest of us,” “information at your fingertips” lines. I wanted to make a difference in the world through computing, which is why I pursued a research career and why I teach computer science.
I’ve also sat and watched research industry-wide becoming increasingly driven by short-term business goals rather than by long-term visions driven by the researchers themselves. I’ve seen how “publish-and-perish” became the norm in academia, and I also saw DOGE’s ruthless cuts in research funding. I’ve seen how Big Tech won the hearts and minds of people, only for it to leverage its newfound power and wealth to exploit the very people who made Big Tech powerful and wealthy.
The tech industry has changed, and not for the better. This is what I mourn.
It's not just tech, it's everything. This is an existential crisis because we have rolled back almost two centuries. We are handing the keys to the kingdom to these sociopaths, and we are thanking them for it. They don't even have the decency to admit they really just want to use us as numbers; this has always been the case since the industrial revolution. Dozens of generations worldwide have toiled and suffered collectively to start creating life-changing technology, and these bloodsucking vampires who can't quench their thirst just live in their own reality, and it doesn't include the rest of us. It's really been the same problem for ages, but now they really seem to have won for the last time.
The more popular it becomes for coding, the more likely a model collapse will occur.
I've noticed a steep decline in software quality as a consumer since Covid, but especially since ChatGPT came out.
That hasn't changed. Whether it's apps that hog too much memory, games that use too much storage, unresponsiveness, or just plain cryptic error messages, everything feels more fragile than it used to be.
Perhaps it's the growing pains of LLMs: a horde of junior programmers pushing stuff to production that the seniors were too "old fashioned" to notice.
Only time will tell.
To me this sentiment is silly. Programming was never about the act of writing, but about making the computer do something. Now we can ask computers to help us write instructions for them. Great, if you ask me. And no, your job is not going away, because human maintainers will always be required to review changes, communicate with stakeholders, and provide a vision for the projects. I have yet to see a chatbot that can REDUCE entropy inside a codebase rather than continuously increase it with more and more slop.
Stop acting like software engineering is dead or something. Even if it is, we will still code by hand when we are poor later in this bubble. The only thing I would refuse is to pay any money to Anthropic & OpenAI. I would literally invest in Chinese labs at this point.
It was pretty clear to me after interacting with the first popular ChatGPT version around end of 2022 that all knowledge jobs will be replaced sooner or later. I don’t think coding is somehow special. What we have right now is an intermediate stage where “taste” still matters, but this won’t last forever.
I also believe that when all knowledge jobs are replaced, something fundamental needs to change in society. Trying to anticipate and prepare for that right now is premature.
This discussion is like the discourse about work from home/return to office.
And all that time spent doing leetcode? Yeah, THAT was time Well Spent.... ;-)
I don't mourn our craft.
It makes me sad to read posts like this. If it is a necessary step for you on the journey from denial to acceptance to embracing the new state of the world, then sure, take your time.
But software engineering is the only industry that is built on the notion of rapid change, constant learning, and bootstrapping ourselves to new levels of abstraction so that we don't repeat ourselves and make each next step even more powerful.
Just yesterday we were pair programming with a talented junior AI developer. Today we are treating them as senior ones and can work with several in parallel. Very soon your job will not be pair programming and peer reviewing at all, but teaching a team of specialized coworkers to work on your project. In a year or two we will be assembling factories of such agents that will handle the process from taking your requirements to delivering and maintaining complex software. Our jobs are going to change many more times and much more often than ever.
And yet there will still be people finding solace in hand-crafting their tools, or finding novel algorithms, or adding the creativity aspect into the work of their digital development teams. Like people lovingly restoring their old cars in their garage just for the sake of the process itself.
And everything will be just fine.
> software engineering is the only industry that is built on the notion of rapid change, constant learning, and bootstrapping ourselves to new levels of abstraction
Not sure I agree. I think most programming today looks almost exactly the same as it did 40 years ago. You could even have gotten away with never learning a new language. AI feels like the first time a large percentage of us may be forced to fundamentally change the way we work or change careers.
One may still write C code as they did 40 years ago, but they still use the power of numerous libraries, better compilers, Git, IDEs with syntax highlighting and so on. The only true difference — to me — is the speed of change that makes it so pronounced and unsettling.
It's true, unless you have always been working on FOTM frontend frameworks, you could easily be doing the same thing as 20/30/40 years ago. I'm still using vim and coding in C++ like someone could have 30+ years ago (I was too young then). Or at least, I was until Claude code got good enough to replace 90% of my code output :)
These posts make me feel like I’m the worst llm prompter in existence.
I’m using a mix of Gemini, Grok, and GPT to translate some MATLAB into C++. It is kinda okay at its job but not great? I am rapidly reading Accelerated C++ to get to the point where I can throw the LLM out the window. If it was Python or Julia I wouldn’t be using an LLM at all bc I know those languages. AI is barely better than me at C++ because I’m halfway through my first ever book on it. What LLMs are these people using?
The code I’m translating isn’t even that complex - it runs analysis on ECG/PPG data to implement this one dude’s new diagnosis algorithm. The hard part was coming up with the algorithm; the code is simple. And the shit the LLM pours out works kinda okay but not really? I have to do hours of fix work on its output. I’m doing all the hard design work myself.
I fucking WISH I could only work on biotech and research and send the code to an LLM. But I can’t because they suck so I gotta learn how computer memory works so my C++ doesn’t eat up all my pc’s memory. What magical LLMs are yall using??? Please send them my way! I want a free llm therapist and a programmer! What world do you live in?? Let me in!
A lot of people are using Claude Code, which many consider to be noticeably better for coding than the other models.
I also think they tend to be generating non-C++ code, where there are more guardrails and fewer footguns for LLMs to run into. E.g. they're generating JavaScript or Python or Rust, where type systems and garbage collection eliminate entire classes of mistakes that LLMs can run into. I know you said you don't use it for Python because you know the language, but even experienced Python devs still see value in LLM-generated Python code.
That’s funny bc I linked my post to a server I’m on and I also was told to use an agent.
My worry about an agent is that I’m trying to translate the math with full fidelity, and an agent might take liberties with the math rather than preserving full accuracy. I’m already having issues with 0- vs 1-indexing screwing up some of the algorithm.
But I will try an agent - can’t hurt to try
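(For what it's worth, the 1-based vs 0-based pitfall is mechanical enough to check by hand. A minimal sketch, in Python only because it shares C++'s 0-based indexing; the loop is a hypothetical stand-in, not your actual algorithm:

    # MATLAB original (1-based, inclusive ranges):
    #   y(1) = x(1);
    #   for k = 2:N
    #       y(k) = y(k-1) + x(k);
    #   end

    def running_sum(x: list[float]) -> list[float]:
        # 0-based translation: MATLAB's k = 2:N maps to k = 1..len(x)-1.
        y = [0.0] * len(x)
        y[0] = x[0]                 # y(1) = x(1)
        for k in range(1, len(x)):  # range() stops *before* len(x)
            y[k] = y[k - 1] + x[k]
        return y

    assert running_sum([1.0, 2.0, 3.0]) == [1.0, 3.0, 6.0]

Writing little asserts like that against outputs captured from the MATLAB side is a cheap way to keep an agent, or yourself, honest about fidelity.)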
I'm firing you for being unable to adequately commune with the machine spirit.
(But for real, a good test suite seems like a great place to start before letting an LLM run wild... or alternatively just do what you're doing. We definitely respect textbook-readers more than prompters!)
I’m a contract hire that you paid upfront! You can’t fire me!
Also let this be a lesson to internet folks to be careful what you post if your boss shitposts on the orange yelling site
Thanks to the person who wrote this. It resonates very strongly with me.
I'm surprised so many people are only waking up to this now. It should have been obvious as soon as ChatGPT came out that, even with only incremental improvements, LLMs would kill programming as we knew it. And the fact that these utterances, however performative, from developers expressing grief or existential despair have become commonplace tells me as much about the power of these systems as whatever demo Anthropic or OpenAI has cooked up.
I would also point out that the author, and many AI enthusiasts, still make certain optimistic assumptions about the future role of "developer," insisting that the nature of the work will change, but that it will somehow, in large measure, remain. I doubt that. I could easily envision a future where the bulk of software development becomes something akin to googling--just typing the keywords you think are relevant until the black box gives you what you want. And we don't pay people to google, or at least, we don't pay them very much.
Many have mentioned woodworking as an analogy from a personal perspective, but for me the important perspective is that of consumers.
Sure, if you have the money, get a carpenter to build your kitchen from solid oak. Most people buy MDF, or even worse, chipboard. IKEA, etc. In fact, not too long ago, I had a carpenter install prefabricated cabinets in a new utility room. The cabinets were pre-assembled, and he installed them on the wall in the right order and did the detailed fittings. He didn’t do a great job, and I could have done better, albeit much slower. I use handsaws simply because I’m afraid of circular saws, but I digress.
A lot of us here are like carpenters before IKEA and prefabricated cabinets, and we are just now facing a new reality. We scream “it is not the same”. It indeed isn’t for us. But the consumers will get better value for money. Not quality, necessarily, but better value.
How about us? We will eventually be kitchen designers (aka engineers, architects), or kitchen installers (aka programmers). And yes, compared to the golden years, those jobs will suck.
But someone, somewhere, will be making bespoke, luxury furniture that only a few can afford. Or maybe we will keep doing it anyway because our daily jobs suck, until we decide to stop. And that is when the craft will die.
The world will just become less technical, as is the case with other industrial goods. Who here even knows how a combustion engine works? Who knows how fabric is made, or even how a sewing machine works? We are very much like the mechanics of yesteryear, before cars became iPads on wheels.
As much as we hate it, we need to accept that coding has peaked. Juniors will be replaced by AI, experts will retire. Innovation will be replaced by processes. And we must accept our place in history.
> "I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off of it."
I have to say this reads a bit hollow to me, and perhaps a little bit shallow.
If the content this guy created could be scraped and usefully regurgitated by an LLM, that same hack, before LLMs, could have simply searched, found the content, and profited off of it nonetheless. And he probably could have done so without much more thought than is required to use the LLM. The only real difference introduced by the LLM is that the purpose of the scraping differs from that of a search engine.
But let's get rid of the loaded term "hack" and be a little less emotional about the complaint. Really, the author published some works, presumably so that people could consume that content, without first knowing who was going to consume it and for what purpose.
It seems to me what the author is really complaining about is that the reward from the consuming party has been displaced from himself to whoever owns the LLM. The outcome of consumption and use hasn't changed... only who got credit for the original work has.
Now I'm not suggesting that this is an invalid complaint, but trying to avoid saying, "I posted this for my benefit"... be that commercial (ads?) or even just for public recognition...is a bit disingenuous.
If you poured your knowledge, experience, and creativity into some content for others to consume and someone else took that content as their own... just be forthright about what you really lost, and don't disparage the consumer simply because they aren't your "hacks" anymore and middlemen are now reaping your rewards.
I absolutely disagree with this. All the things the author said will still exist and keep on existing.
Nothing will prevent you from typing “JavaScript with your hands”, from “holding code in our hands and molding it like clay…”, and all the other metaphors. You can still do all of it.
What certainly will change is the way professional code will be produced, and with it, the avenue of earning a very good living by writing software line by line.
I’ll not pretend that I don’t get the point, but it feels like the lamentation of a baker, tailor, shoemaker, or smith, missing the days of old.
And yet, most people prefer a world with affordable bread, clothes, footwear, and consumer goods.
Will the world benefit the most from “affordable” software? Maybe yes, maybe not; there are many arguments on both sides. I am more concerned about the impact on the winners and losers: the rich will get richer and more powerful, while the losers will become even more destitute.
Yet, my final point would be: is it better or worse to live in a world in which software is more affordable and accessible?
> All the things the author said will still exist and keep on existing.
Except the community of people who, for whatever reason, had to throw themselves into it and had critical mass to both distribute and benefit from the passion of it. This has already been eroded by the tech industry coopting programming in general and is only going to diminish.
The people who discovered something because they were forced to do some hard work and then ran with it are going to be steered away from that direction by many.
I don’t think it’s that simple. A couple of examples:
Food:
A lot of the processed foods that are easily available make us unhealthy and sick. Even vegetables are less nutritious than they were 50 years ago. Mass agriculture also has many environmental externalities.
Consumer goods:
It has become difficult to find things like reliable appliances. I bought a chest freezer. It broke after a year. The repairman said it would cost more to fix than to buy a new one. I asked him if there was a more reliable model and he said no: they all break quickly.
Clothing:
Fast fashion is terrible for the environment. Do we need as many clothes as we have? How quickly do they end up in landfills?
Would we be better off as a society repairing shoes instead of buying new ones every year?
It's true, they don't "make 'em like they used to". They make them in new, more efficient ways which have contributed to improving global trends in metrics such as literacy, child mortality, life expectancy, extreme poverty, and food supply.
If you are arguing that the standard of living today is lower than in the past, I think that is a very steep uphill battle.
If your worries are about ecology and sustainability I agree that is a concern we need to address more effectively than we have in the past. Technology will almost certainly be part of that solution via things like fusion energy. Success is not assured and we cannot just sit back and say "we live in the best of all possible worlds with a glorious manifest destiny", but I don't think that the future is particularly bleak compared to the past
I would also add:
Cars make people unhealthy and lead to city designs that hurt social engagement and affordability, but they are so much more efficient that it's hard not to use them.
And then the obvious stuff about screens/phones/social media.
> wait six months.
I mourn having to repeatedly hear this never-quite-true promise that an amazing future of perfect code from agentic whatevers will come to fruition, and it's still just six months away. "Oh yes, we know we said it was coming six, twelve, and eighteen months ago, but this time we pinky swear it's just six months away!"
I remember when I first got access to the internet. It was revolutionary. I wanted to be online all the time, playing games, chatting with friends, and discovering new things. It shaped my desire to study computer science and learn to develop software! I could see and experience the value of the internet immediately. Its utility was never "six months away," and I didn't have to be compelled to use it—I was eager to use it of my own volition as often as possible.
LLM coding doesn't feel revolutionary or exciting like this. It's a mandate from the top. It's my know-nothing boss telling me to "find ways to use AI so we can move faster." It's my boss's know-nothing boss conducting Culture Amp surveys about AI usage, but ignoring the feedback that 95% of Copilot's PR comments are useless noise: "The name of this unit test could be improved." It's waiting for code to be slopped onto my screen, so I can go over it with a fine-toothed comb and find all the bugs—and there are always bugs.
Here's what I hope is six months away: The death of AI hype.
This feels right when you're looking forwards. The perfect AI bot is definitely not 6 months away. It'll take a lot longer than that to get something that doesn't get things wrong a lot of the time. That's not especially interesting or challenging though. It's obvious.
What's much more interesting is looking back 6, 12, 18, or 24 months. 6 months ago was ChatGPT 5, 12 months ago was GPT 4.5, 18 months ago was 4o, and 24 months ago ChatGPT 3.5 was released (the first one). If you've been following closely you'll have seen incredible changes between each of them. Not to get to perfect, because that's not really a reasonable goal, but definite big leaps forward each time. A couple of years ago one-shotting a basic tic tac toe wasn't really possible. Now though, you can one-shot a fairly complex web app. It won't be perfect, or even good by a lot of measures compared to human written software, but it will work.
I think the comparison to the internet is a good one. I wrote my first website in 1997, and saw the rapid iteration of websites and browsers back then. It felt amazing, and fast. AI feels the same to me. But given the fact that browsers still aren't good in a lot of ways I think it's fair to say AI will take a similarly long time. That doesn't mean the innovations along the way aren't freaking cool though.
ChatGPT 3.5 was almost 40 months ago, not 24. GPT 4.5 was supposed to be 5 but was not noticeably better than 4o. GPT 5 was a flop. Remember the hype around Gemini 3? What happened to that? Go back and read the blog posts from November when Opus 4.5 came out; even the biggest boosters weren't hyping it up as much as they are now.
It's pretty obvious the change of pace is slowing down and there isn't a lot of evidence that shipping a better harness and post-training on using said harness is going to get us to the magical place where all SWE is automated that all these CEOs have promised.
Yeah, but humans still had to work to create those websites; the web increased jobs, it didn't replace them (that replacement is happening now). This will devalue all labor that has anything to do with I/O on computers, if not outright replace a lot of it. Who cares if it can't write perfect code; the owners of the software companies never cared about good code, they care about making money. They make plenty of money off slop, and they'll make even more if they don't have to have humans create the slop.
The job market will get flooded with the unemployed (it already is) with fewer jobs to replace the ones that were automated, and those remaining jobs will be reduced to minimum wage whenever and wherever possible. 25% of new college grads cannot find employment. Soon young people will be so poor that they'll beg to fight in a war. Give it 5-10 years.
This isn't a hard future to game out, and it's not pretty if we maintain this fast pace of progress in ML that minimally requires humans. Notice how the ruling class has increased the salaries for certain types of ML engineers; they know what's at stake. These businessmen make decisions based on expected value calculated from complex models; they aren't giving billion-dollar pay packages to engineers because it's trendy. We should use our own mental models to predict where this is going, and prevent it from happening however possible.
The word "Luddite" continues to be applied with contempt to anyone with doubts about technology, especially the nuclear kind. Luddites today are no longer faced with human factory owners and vulnerable machines. As well-known President and unintentional Luddite D. D. Eisenhower prophesied when he left office, there is now a permanent power establishment of admirals, generals and corporate CEO's, up against whom us average poor bastards are completely outclassed, although Ike didn't put it quite that way. We are all supposed to keep tranquil and allow it to go on, even though, because of the data revolution, it becomes every day less possible to fool any of the people any of the time. If our world survives, the next great challenge to watch out for will come - you heard it here first - when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long. Meantime, as Americans, we can take comfort, however minimal and cold, from Lord Byron's mischievously improvised song, in which he, like other observers of the time, saw clear identification between the first Luddites and our own revolutionary origins. It begins:[0]
https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...
Something I'm finding odd is this seemingly perpetually repeating claim that the latest thing that came out actually works, unlike the last thing that obviously didn't quite work.
Then next month, of course, latest thing becomes last thing, and suddenly it's again obvious that actually it didn't quite work.
It's like running on a treadmill towards a dangling carrot or something. It's simultaneously always here in front of our faces but also not here in actual hand, obviously.
The tools are good and improving. They work for certain things, some of the time, with various need for manual stewarding in the hands of people who really know what they're doing. This is real.
But it remains an absolutely epic leap from here to the idea that writing code per se is a skill nobody needs any more.
More broadly, I don't even really understand what that could possibly mean on a practical level, as code is just instructions for what the software should do. You can express instructions at a higher level, and tooling keeps making that more and more possible (AI and otherwise), but in the end what does it mean to abstract fully away from instructing in detail? It seems really clear that it can never result in software that does what you want in a precise way, rather than some probabilistic approximation that must be continually corrected.
I think the real craft of software such that there is one is constructing systems of deterministic logic flows to make things happen in precisely the way we want them to. Whatever happens to tooling, or what exactly we call code or whatever, that won't change.
that's a good take
> getting software that does what you want
so then we become PMs?
> an amazing future of perfect code from agentic whatevers will come to fruition...
Nobody credible is promising you a perfect future. But, a better future, yes! If you do not see it, then know this. You have your head firmly planted in the sand and are intentionally refusing to see what is coming. You may not like it. You may not want it. But it is coming and you will either have to adapt or become irrelevant.
Does Copilot spit out useless PR comments? 100% yes! Are there tools that are better than Copilot? 100% yes! These tools are not perfect. But even with their imperfections, they are very useful. You have to learn to harness them for their strengths and build processes to address their weaknesses. And yes, all of this requires learning and experimentation. Without that, you will not get good results, and you will complain about these tools not being good.
> But it is coming and you will either have to adapt or become irrelevant.
I heard it will be here in six months. I guess I don't have much time to adapt! :)
>Its utility was never "six months away,"
6 months ago is when my coding became 100% done by AI. The utility already has been there for a while.
>I didn't have to be compelled to use it—I was eager to use it of my own volition as often as possible.
The difference is that you were a kid then with an open mind, and now your worldview has hardened into a fixed sense of how the world works and how things should be done.
> your worldview has hardened into a fixed sense of how the world works
Yeah, it's weird. I'm fixated on not having bugs in my code. :)
Can you point to the most optimistic six month projections that you have seen?
I have encountered a lot of people saying it will be better in six months, and every six months it has been.
I have also seen a few predictions that say 'in a year or two they will be able to do a job completely.' I am sceptical, but I would say such claims are rare. Dario Amodei has been about the only prominent voice I have encountered that puts such abilities on a very short timeframe, and even he points to more than a year.
The practical use of AI has certainly increased a lot in the last six months.
So I guess what I'm asking is more specifics on what you feel was claimed, by whom, and how much did they fall short?
Without that supporting evidence you could just be being annoyed by the failure of claims that exist in your imagination.
If you've only experienced MS Copilot I invite you to try the latest models through Codex (free deals ongoing), Claude Code, or Opencode. You may be surprised, for better or worse. What kind of software do you do?
> LLM coding doesn't feel revolutionary or exciting like this.
Maybe you’re just older.
Older than whom?
Reminds me of another "just around the corner" promise...[0]
I think it is one thing for the average person to buy into the promises, but I've yet to understand why it happens here, within a community of programmers. It is one thing for non-experts to fall for obtuse speculative claims, but another for experts. I'm excited for autonomous vehicles, but in 2016 it was laughable to think they were around the corner, and only now, nearly 10 years later, does such a feat actually start to look a few years away.
Why do we only evaluate people and claims on their hits and not their misses? It just encourages people to say anything and everything, because eventually something will be right. It's 6 months away because eventually it will actually be 6 months away. But is it 6 months away because it actually is, or because we want it to be? I thought the vibe coder's motto is "I just care that it works." Honestly, I think that's the problem: everyone cares about whether it works, and that's the primary concern on all sides of the conversation here. So is it 6 months away because it is, or because you've convinced yourself it is? You've got good reasons for believing it, you've got the evidence, but evidence for a claim is meaningless without weighing it against the evidence that counters it.
[0] https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
you’re probably not doing it right.
I’ve been programming since 1984.
OP basically described my current role with scary precision.
I mostly review the AI’s code, fix the plan before it starts, and nudge it in the right direction.
Each new model version needs less nudging — planning, architecture, security, all of it.
There’s an upside.
There’s something addictive about thinking of something and having it materialize within an hour.
I can run faster and farther than I ever could before.
I’ve rediscovered that I just like building things — imagining them and watching them come alive — even if I’m not laying every brick myself anymore.
But the pace is brutal.
My gut tells me this window, where we still get to meaningfully participate in the process, is short.
That part is sad, and I do mourn it quite a bit.
If you think this is just hype, you’re doing it wrong.
The state of the art is moving so rapidly that, yeah, Copilot by Microsoft using gpt-5-mini:low is not going to be very good. And there are many places where AI has been implemented poorly, generally by people who have the distribution to force it upon many people. There are also plenty of people who use vibe coding tools and produce utterly atrocious codebases. That doesn't preclude the existence of effective AI tools, and people who are good at using them.
Well said!
This post is rather like a recent similar post, "I miss thinking hard about things." The top comment there quoted a metaphor about clay. No offense, but this blog article feels as if an LLM ingested that post and thread and produced a gestalt of them.
December a few years ago, pre-ChatGPT, I did Advent of Code in Rust. It was very difficult; I had never done the full month before, barely knew Rust, and kept getting my ass kicked by it. I spent a full Saturday afternoon solving one of the last problems of the month, and it was wonderful. My head hurt and I was reading weird Wikipedia articles and it was a blast. Nothing is stopping me from doing that sort of thing again, and I feel like I might need to, to counteract the stagnation I feel at times mentally when it comes to coding. That spark is still in there, buried under all the slop, and I hope it would reappear if I gave it the chance. I have been grieving for the last few years, I think, and only recently have I come to terms with the changes to my identity that LLMs have wrought.
For many (most) people, it was never a "craft"; it was a job where, with the appropriate skills, you could make a ton of money. That is possibly ending, maybe, maybe not; we will see. It is still possible to treat coding as a craft. There are tons of open source projects that would love to have your help, but the days of making big money may be drawing to a close.
Also, don't forget the things that AI makes possible. It's a small accomplishment, but I have a World of Warcraft AddOn that I haven't touched in more than 10 years. Of course now, it is utterly broken. I pointed ChatGPT at my old code and asked it to update it to "retail" WoW, and it did it. And it actually worked. That's kind of amazing.
> We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM
No the fuck we won't
> They can write code better than you or I can, and if you don’t believe me, wait six months.
No they cannot. And an AI bro squeezing every talking point into a think piece while pretending to have empathy doesn't change that. You just want an exit, and you want it fast.
Also:
> and if you don’t believe me, wait six months
This reads as a joke nowadays.
The king is dead; long live the king.
Quick questionnaire. Please reply with how much you like/use AI and what kind of programming you do.
I wonder if there are some interesting groupings.
LLMs have made a lot of coding challenges less painful: navigating terrible documentation, Copilot catching typos, setting up boilerplate frontend components, high-effort but technically unchallenging code completions. Whenever I attempted LLMs for tools I'm not familiar with, I found them useful for setting things up, but I still had to do good old learning of the tool and apply developer knowledge to it. I wonder if senior developers could use LLMs in ways that work with them and not against them, i.e. create useful code that has guardrails to avoid slop (something like the sketch below).
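Something like this, maybe. A minimal sketch in Python, where `run_agent` is a hypothetical stand-in for whatever LLM tooling you use, and the lint/type/test commands are just examples of a project's existing gates:

```python
import subprocess
from typing import Callable

def gates_pass() -> bool:
    """Run the project's own quality gates: lint, type-check, tests."""
    for cmd in (["ruff", "check", "."], ["mypy", "."], ["pytest", "-q"]):
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

def guarded_edit(task: str, run_agent: Callable[[str], None],
                 max_attempts: int = 3) -> bool:
    """Keep generated code only if it survives the same gates a human PR would."""
    for _ in range(max_attempts):
        run_agent(task)  # hypothetical hook: ask the model to edit the repo
        if gates_pass():
            return True  # the guardrails decide what's accepted, not vibes
        # discard the attempt (plus `git clean -fd` if the agent added files)
        subprocess.run(["git", "checkout", "--", "."])
    return False
```

The point being that the gates belong to the project, not the model.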
Even if it can sometimes help, I think it is not an excuse for writing bad documentation.
Dang right we do.
And the problem isn't even the Junior Zoomer devs running circles around seniors. It's the CTO or Engineering VP himself disappearing for a few months and single-handedly consolidating a handful of products into a full rewrite for the company, excluding most of their engineering team from the process, and then laying them off after.
The problem is the CEO pretending to be an engineer and thinking they know better because they can write English prompts and spit out a hideous prototype.
The problem is Product Owners using LLMs to "write code" while their engineering team does zero human review before merging it, because their AI tooling was made solely responsible for code quality. If something's broken, just prompt a sloppy fix full of hidden performance and security bugs that the automated code review step missed.
If you think this is hyperbole, I was recently laid off from a company that did exactly the above.
Then in 2027 it will be Product Owners replacing the entire engineering team, including the CTO, because the engineers made the system too reliable to justify their own employment, while the "thinkers" get to build the product. Engineers be damned.
People with real skills they acquired over a lifetime are no longer shaping business. Reckless efficiency towards being average will rule the day.
I found my love for programming in high school, dreaming of helping the world with my beautiful craftsmanship, but now I really really need the fokken money. Both are true!
So if my corporate overlords will have me talk to the soul-less Claude robot all day long in a Severance-style setting, and fix its stupid bugs, but I get to keep my good salary, then I'll shed a small tear for my craft and get back to it. If not... well, then I'll be shedding a lot more tears... I guess.
You can still do your craft as you did it before, but you can't expect to be paid for it as much as before.
Some people say that working with an agent or an agents orchestrator is like being a technical lead. But I've been a technical lead for quite a while, and the experience of working with an agent doesn't even come close. I think that when people talk about the agents' coding abilities they're talking about the average ability. But as a team lead, I don't care about average ability. I care only about the worst case. If I have any doubt that someone might not complete a task, or at least accurately explain why it's proving difficult, with at least 95% certainty, I won't assign them the task. If I have any doubt that the code they produce might not be up to snuff, I don't assign them the task. I don't need to review their code; they review each other's. When I have to review code I'm no longer a team lead but a programmer.
I often have one programming project I do myself, on the side, and recently I've been using coding agents. Their average ability is no doubt impressive for what they are. But they also make mistakes that not even a recent CS graduate with no experience would ever make (e.g. I asked the agent for its guess as to why a test was failing; it suggested it might be due to a race condition with an operation that is started after the failing assertion). As a lead, if someone on the team is capable of making such a mistake even once, then that person can't really code, regardless of their average performance (just as someone who sometimes lands a plane in the wrong airport, or even crashes without there being a catastrophic condition outside their control, can't really fly regardless of their average performance). "This is more complicated than we thought and would take longer than we expected" is something you hear a lot, but "sorry, I got confused" is something you never hear. A report by Anthropic last week said, "Claude will work autonomously to solve whatever problem I give it. So it’s important that the task verifier is nearly perfect, otherwise Claude will solve the wrong problem." Yeah, that's not something a team lead faces. I wish the agent could work like a team of programmers and I would be doing my familiar role of a project lead, but it doesn't.
The models do some things well. I believe that programming is an interesting mix of inductive and deductive thinking (https://pron.github.io/posts/people-dont-write-programs), and the models have the inductive part down. They can certainly understand what a codebase does faster than I can. But their deductive reasoning, especially when it comes to the details, is severely lacking (e.g. I asked the agent to document my code. It very quickly grasped the design and even inferred some important invariants, but when it saw an `assert` in one subroutine it documented it as guarding a certain invariant. The intended invariant was correct, it just wasn't the one the assertion was guarding). So I still (have to) work as a programmer when working with coding assistants, even if in a different way.
I've read about great successes at using coding agents in "serious" software, but what's common to those cases is that the people using the agents (Mitchell Hashimoto, antirez) are experts in the respective codebase. At the other end of the spectrum, people who aren't programmers can get some cool programs done, but I've yet to see anything produced in this way (by a non programmer) that I would call serious software.
I don't know what the future will bring, but at the moment, the craft isn't dead. When AI can really program, i.e. the experience is really like that of a team lead, I don't think that the death of programming would concern us, because once they get to that point, the agents will also likely be able to replace the team lead. And middle management. And the CTO, the CFO, and the CEO, and most of the users.
> If I have any doubt that someone might not complete a task, or at least accurately explain why it's proving difficult, with at least 95% certainty, I won't assign them the task
It gets hard to compare AI to humans. You can ask the AI to do things you would never ask a human to do, like retry 1000 times until it works, or assign 20 agents to the same problem with slightly different prompts. Or re-do the entire thing with different aesthetics.
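A rough sketch of the fan-out version (Python; `run_agent` and `verify` are hypothetical hooks for your agent tool and whatever test gate you trust):

```python
import concurrent.futures
from typing import Callable, Optional

def first_verified(task: str,
                   run_agent: Callable[[str], str],
                   verify: Callable[[str], bool],
                   n_attempts: int = 20) -> Optional[str]:
    """Launch n attempts with slightly different prompts and keep the first
    candidate the verifier accepts. Absurd with humans, cheap with agents."""
    prompts = [f"{task}\n(variation {i}: try a different angle)"
               for i in range(n_attempts)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_attempts) as ex:
        futures = [ex.submit(run_agent, p) for p in prompts]
        for fut in concurrent.futures.as_completed(futures):
            candidate = fut.result()
            if verify(candidate):  # the verifier is the gate, not the model
                return candidate   # (a real version would cancel stragglers)
    return None
```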
No doubt, I'm just saying that working with a coding agent is not even remotely similar to being a team lead. If a member of your team can't complete a task and can't accurately explain what the difficulty is, you're in trouble.
This is probably a good representative take for the view of people who assigned value to the coding part of building software. It's worth reading and saving, if only as an artifact to study.
It isn't about the tools or using them; it's about the scale. The scale of impact is immense, and we're not ready to handle it in a multitude of areas, because of all the areas technology touches. Millions of jobs erased with no clear replacement? The value of creative work diminished, leading to more opportunities erased? 'Bad' actors abusing the tools at scale, impacting everything from information dispersal to the creative industries? Not even getting into the environmental and land-use impacts of data centers on towns and open space (again, it's the scale that gets ya). And for what? Removing a huge chunk of human activity and expression, for what?
> We’ll miss the feeling of holding code in our hands and molding it like clay in the caress of a master sculptor.
Oh come on. 95% of the folks were gluing together shitty React components and slathering them with Tailwind classes.
For what it’s worth I’ve followed the author for a long time and that does not describe the type of work he has done
This. People are way too easily impressed. I don't think this easily-impressedness will generalize to most people in the real world.
If you really buy all that, you'd be part of the investor class that crashed various video game companies upon seeing Google put together a rather lame visual stunt and have their AI say, and I quote (because the above-the-fold AI response I never asked for has never been more appropriate to consult)…
"The landscape of AI video game generation is experiencing a rapid evolution in 2025-2026, shifting from AI-assisted asset creation to the generation of entire interactive, playable 3D environments from text or image prompts. Leading initiatives like Google DeepMind's Project Genie and Microsoft's Muse are pioneering "world models" that can create, simulate physics, and render games in real-time."
And then you look at what it actually is.
Suuuure you will, unwanted AI google search first response. Suuure you will.
"ai is inevitable, stop resisting" — claude marketing department. if it was so invertible, why are you funding these psyops?
Astroturfing on the internet is Anthropic's main business now.
I'm happy about this new era weeding out the gatekeepers of programming. It's been insufferable for about a decade and a half.
leetcode, TC farming, praising anti-social behavior, the egos of devs at companies...
the archetype had to change
Ephemeralization: the ability thanks to technological advancement to do "more and more with less and less until eventually you can do everything with nothing." —Buckminster Fuller
> will end up like some blacksmith’s tool in an archeological dig
guy who doesn't realize we still use hammers. This article was embarrassing to read.
As a very old school programmer who taught myself assembler in 1982 on an 8-bit 4K micro, I don't see much to mourn here.
* People still craft wood furniture from felled trees entirely with hand tools. Some even make money doing it by calling it 'artisanal'. Nothing is stopping anyone from coding in any historical mode they like. Toggle switches, punch cards, paper tape, burning EPROMs, VT100, whatever.
* OP seems to be lamenting he may not be paid as much to expend hours doing "sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM." I've been there. Sometimes I'd feel mild satisfaction on solving a rat-hole problem but more often, it was significant relief. I never much liked that part of coding and began to see it as a failure mode. I found I got bigger bucks - and had more fun - the better I got at avoiding rat-hole problems in the first place.
* My entire journey creating software from ~1983 to ~2020 was about making a thing that solved someone's problem better, cheaper or faster - and, on a good day, we managed all three at once. At various times I ended up doing just about every aspect of it from low-level coding to CEO and back again, sometimes in the same day. Every role in the journey had major challenges. Some were interesting, a few were enjoyable, but most were just "what had to get done" to drag the product I'd dreamt up kicking and screaming into existence.
* From my first teenage hobby project to my first cassette-tape in-a-baggie game to a $200M revenue SaaS for F100, every improvement in coding from getting a floppy disk drive to an assembler with macros to an 80 column display to version control, new languages, libraries, IDEs and LLMs just helped "making the thing exist" be easier, faster and less painful.
* Eventually, to create even harder, bigger and better things I had to add others coding alongside me. Stepping into the player-coach role amplified my ability to bring new things into existence. It wasn't much at first because I had no idea how to manage programmers or projects but I started figuring it out and slowly got better. On a good day, using an LLM to help me "make the thing exist" feels a lot like when I first started being a player-coach. The frustration when it's 'two steps forward, one back' feels like deja vu. Much like current LLMs, my first part-time coding helpers weren't as good as I was and I didn't yet know how to help them do their best work. But it was still a net gain because there were more of them than me.
* The benefits of having more coders helping me really started paying off once I started recruiting coders who were much better programmers than I ever was. Getting there took a little ego adjustment on my part but what a difference! They had more experience, applied different patterns, knew to avoid problems I'd never seen and started coming up with some really good ideas. As LLMs get better and I get better at helping them help me - I hope that's where we're headed. It doesn't feel directionally different from the turbo-boost from my first floppy drive, macro-assembler, IDE or profiler but the impact is already greater with upside potential that's much higher still - and that's exciting.
My ability to ask questions & home in on good answers is far better than it ever was. My ability to change course & iterate is far faster than it ever has been. I'm making far more informed decisions, far more able to make forays and see how things turn out, with low cost.
I could not be having a better time.
I liked coding! It was fun! But I mourned because I felt like I would never get out 1% of the ideas in my head. I was too slow, and working on shit in my free time just takes so much, is so hard, when there's so little fruitful reward at the end of a weekend.
But I can make incredible systems so fast now. This is the craft I wanted to be doing. I feel incredibly relieved, feel such an enormous weight lifted, that maybe some of the little Inland Empire that lives purely in my head might make its way to the rest of the world.
Huge respect for all the sadness and mourning. Yes to that too. But I cannot begin to state how burdened and sad I felt, so unable to get the work done, and now it's a total flip, with incredible raw excitement and possibility before me.
That said, software used to reward that kind of obsessive, deep pursuit, that leaning into problems. And I am very worried, long term, about what happens to the incredible culture of incredible people working really hard together to build amazing systems.
If you're programming for the art, you can continue. Someone who enjoys painting can do so even after the camera
But you have to admit it loses a certain shine when you know the problem you're solving could be solved more simply and cheaply another way.
But understanding _how_ it solves the problem, and knowing you found the solution yourself might/will be something to strive for.
As you probably know, painting changed quite a bit after cameras became common. I wonder if handcrafted code will have a similar shift, becoming more "artistic" :)
It surely will.
yup, you can do whatever you want if you don't need the $$$
If you want to build a house you still need plans. Would you rather cut boards by hand, or use a power saw? Would you rather pound nails and drill pilot holes with a bit and brace for flat-head screws, or would you want a nail gun and an impact driver?
And you still need plans.
Can you write a plan for a sturdy house, and verify that it meets the plan, that your nails went all the way in and in the right places?
You sure can.
Your product person, your directors, your clients might be able to do the same thing. It might look like a house, but it's a fire hazard, or, in the case of most LLM-generated code, a security one.
The problem is that we moved to scrum and agile, where your requirements are pantomime and post-it notes if you're lucky, interpretive dance if you aren't. Your job is figuring out how to turn that into something, and a big part of what YOU as an engineer do is tell other people "no, that's dumb" without hurting their feelings.
If AI coding is going to be successful then some things need to change. Requirements need to make a comeback (see the sketch below). GOOD UI needs to make a comeback: your dark pattern around cancellation is now going to be at odds with an agent. Your hide-the-content-behind-a-login-or-paywall won't work any more, because end users have access too; the open web is back, and by force. If a person can get in, we have code that can get in now.
There is a LOT of work that needs to get done, more than ever. Stop looking back and start looking forward, because once you get past the hate and the hype there is a ton of potential to right some of the ills of the last 20 years of tech.
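To make the requirements point concrete, here's one shape it could take: a requirement written as a machine-checkable acceptance test instead of a post-it note. Just a sketch; the endpoint and payload are hypothetical:

```python
import requests  # assumes a hypothetical local service under test

def test_cancellation_is_one_step():
    """Requirement: a user can cancel their subscription in a single request,
    with no dark-pattern detours. Humans and agents both build against this."""
    resp = requests.post("http://localhost:8000/api/subscription/cancel",
                         json={"user_id": "demo-user"})
    assert resp.status_code == 200
    assert resp.json()["status"] == "cancelled"
```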
Dunno, LLMs writing code still feels like they memorized a bunch of open source code and vomited it out in worse condition.
It's not that impressive that Claude wrote a C compiler when GitHub has the code to a bunch of C compilers (some SOTA) just sitting there.
I'm using an LLM to write a compiler in my spare time (for fun) for a "new" language. It feels more like a magical search engine than a coding assistant. It's great for bouncing ideas off, and for searching the internet without the clutter of SEO-optimized sites and ads. It's definitely been useful, just not that useful for code.
Like, I have used some generated code in a very low stakes project (my own Quickshell components) and while it kind of worked, eventually I refactored it myself into 1/3 of the lines it produced and had to squash some bugs.
It's probably good enough for the people who were gluing React components together but it still isn't on the level where I'd put any code it produces into production anywhere I care about.
The C compiler it wrote doesn't even compile. Waste of $20k.
That is my experience from a year ago but I no longer feel that way. I write a few instructions, guide an agent to create a plan, and rarely touch the code myself. If I don’t like something, I ask the agent to fix it.
Agree, there was a huge step change with Claude Code + Opus 4.5 (maybe 4.6 is even better?). Anyone dealing with earlier models as their basis should probably try the newest stuff and see if it changes their minds.
I'm that 40 year old now. Been writing code since grade 5. Loved it so much I got a PhD, was an academic, then moved into industry.
I don't mourn or miss anything. No more than the previous generation mourned going from assembly to high-level languages.
The reason why programming is so amazing is getting things done. Seeing my ideas have impact.
What's happening is that I'm getting much much faster and better at writing code. And my hands feel better because I don't type the code in anymore.
Things that were a huge pain before are nothing now.
I didn't need to stay up at night writing code. I can think. Plan. Execute at a scale that was impossible before. Alone I'm already delivering things that were on the roadmap for engineering months worth of effort.
I can think about abstractions, architecture, math, organizational constraints, product. Not about what some lame compiler thinks about my code.
And if someone far junior to me can do my job? Good. Then we've empowered them and I've fallen behind. But that's not at all the case. The principals and faculty who are on the ball are astronomically more productive than juniors.
I wonder whether, in the end, it was simply poor accessibility that made programmers special, and whether that is what some of them are missing: being special by "talking" a special language their customers can't comprehend.
Sure, they are still needed for debugging, and for sneering at all those juniors and non-programmers who will finally be able to materialise their fantasies, but there is no way back anymore. Like riding horses, you can still do it while owning a car.
This entire panic is a mass-hysteria event. The hallucination that "an LLM can do software engineering better than a 10x engineer" is only possible because there are so few 10xers left in the business. 99% either retired or are otherwise not working at the moment.
The "difficult", "opinionated", "overpaid" maniacs are virtually all gone. That's why such a reckless and delusional idea like "we'll just have agents plan, coordinate, and build complete applications and systems" is able to propagate.
The adults were escorted out of the building. Management's hatred of real craftspeople is manifesting in the most delusional way yet. And this time, they're actually going to destroy their businesses.
I'm here for it. They're begging to get their market share eaten for breakfast.
I for one am not going to miss wrangling over a bug at 1 in the morning.
Speak for yourself. I don't miss writing code at all. Agentic engineering is much more fun.
And this surprises me, because I used to love writing code. Back in my early days I can remember thinking "I can't believe I get paid for this". But now that I'm here I have no desire to go back.
I, for one, welcome our new LLM overlords!
I had that same epiphany when I discovered AI is great at writing complicated shell command lines for me. I had a bit of an identity crisis right there because I thought I was an aspiring Unixhead neckbeard but in truth I hated the process. Especially the scavenger hunt of finding stuff in man pages.
Speak for yourself. If you find the agentic workflow to be more fun, more power to you.
I for one think writing code is the rewarding part. You get to think through a problem and figure out why decision A is better than B. Learning about various domains and solving difficult problems is in itself a reward.
I don't understand this perspective. I've never learned so much so fast as I have in the last few months. LLMs automate all the boring rote stuff, freeing up my time to focus exclusively on the high-level problem-solving. I'm enjoying my work more than ever.
To be fair, I might have felt some grief initially for my old ways of working. It was definitely a weird shift and it took me a while to adjust. But I've been all-in on AI for close to a year now, and I have absolutely zero regrets.
I can't believe I used to _type code out by hand_. What a primitive world I grew up in.
Same here. I'm a decade-plus into this field; writing code was by far the number 1 draw, and the discussion surrounding system design a distant second. Take away the coding and I don't think I will make it to retirement as a code/LLM PR auditor. So I'm already planning on exiting the field in the next decade.
>You get to think through a problem and figure out why decision A is better than B. Learning about various domains and solving difficult problems is in itself a reward.
So just tell the LLM what you're thinking about.
Why do you need to type out a for loop for the millionth time?
Another post saying 6 more months... I'm so tired of these.
lmao nope burn in hell old programming. What is emerging is a thousand times better than that dumpster fire
Oh good lord. Spare us the beating of chests and rending of garments. "Crafting code by hand," like some leftover hipsters from the 2010s weaving their own fabric on a handloom. It's fucking code. Was there similar gnashing of teeth and wailing of despair when compilers were first introduced?
> Was there similar gnashing of teeth and wailing of despair when compilers were first introduced?
Yes, at least according to ChatGPT:
"Compilers didn’t arrive to universal applause; they arrived into a world where a chunk of programmers absolutely believed the machine could not be trusted to write “real” code—until the productivity wins (and eventually the performance) became undeniable."
Damn that sounds familiar.
A compiler is deterministic; coding models are not. Compilers have been unit tested and will generate the same output for a given input. They are not the same thing.
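You can even check the determinism claim mechanically. A small sketch (assumes gcc on PATH; any C compiler would do):

```python
import hashlib
import os
import subprocess
import tempfile

SRC = "int add(int a, int b) { return a + b; }\n"

def compile_and_hash(source: str) -> str:
    """Compile the same source in a fresh directory and hash the object file."""
    with tempfile.TemporaryDirectory() as d:
        c_file, obj = os.path.join(d, "f.c"), os.path.join(d, "f.o")
        with open(c_file, "w") as f:
            f.write(source)
        subprocess.run(["gcc", "-c", c_file, "-o", obj], check=True)
        with open(obj, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

# Same input, same output, every time. No sampling temperature involved.
assert compile_and_hash(SRC) == compile_and_hash(SRC)
```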
I mean go ahead and cry if you want. You are losing time best spent caring about stuff, and overlooking many alarming gotchas through blindly accepting SV hype. I'd have thought crypto would teach people something, but apparently not.
Do what isn't replaceable. You're being told literally everything is replaceable. Note who's telling you that and follow the money.
I feel bad for this essayist, but can't really spare more than a moment to care about his grief. I got stuff to do, and I am up and doing. If he was in any way competing with the stuff I do? One less adversary.
I would rather bring him into community and enjoy us all creating together… but he's acting against those interests and he's doomering and I have no more time for that.
It definitely sucks, to be honest, and there's a lot of cope out there.
Fact of the matter is, being able to churn out bash one-liners was objectively worth $100k/year, and now it just isn't anymore. Knowing the C++ STL inside-out was also worth $200k/year; now it has very questionable utility.
A lot of livelihoods are getting shaken up as programmers get retroactively turned into the equivalent of librarians, whose job is to mechanically index and fetch cognitive assets to and from a digital archive-brain.
Yeah, I notice a lot of the optimism is from people who have been in the field for decades. I'm newish to the field, half a decade out of undergrad. It definitely feels like almost all of what I learned has been (or will soon be) completely devalued. I'm sure this stuff feels a lot less threatening if you've had decades to earn a great salary and save a bunch of money. If money wasn't a concern I'd be thrilled about it too.
No, don't trust the supposed "staff engineer" types. Many have forgotten how to write code, and now they can finally live the fantasy of being architects; for them it's like winning a jackpot. For people who could always write good code, the basics are still the same. A good dev is still a good dev, and it's even more important to be able to read and critique code.
If you must use these tools, when using one that has the option, please press thumbs down when a response was good, and thumbs up when the response is bad.
Don't train your replacements. Better yet, let's stop using them whenever we can.
Why don't you take a more proactive role in AI safety and alignment? I think that community would suit you better than some of the AI-maximalists/accelerationists here.
I do agree with some of your points: AI may result in a techno-feudalist world, and yes, as a direct result of "taking humans out of the equation." But the solution isn't to be a Luddite, as you may suggest; it's to take a more proactive role in steering these models.
Or don't pay them out of your own pocket. Use the employer-paid versions, but don't let these shit companies into your personal space.
I love paying some billionaire $0.0001 to use his thinking machine / think-for-me SaaS. I love my competency and speed being rented from a billionaire, removing all value from my labor and agency. I really feel sorry for all of you LLM-pilled people. You need to be shamed. This is going to be used as a weapon to devalue every working person's agency in this world and remove all of the working class's bargaining chips.
You think it's just SWE? It will be accountants, customer service, factory workers, medical assistants, basically anyone who doesn't work directly with their hands, and soon they'll try to solve the hands part too and alienate those workers as well.
Look at who's in charge, do you think they're going to give us UBI? No, they're going to sign us up to go fight wars to help them accumulate resources. Stop supporting this, they're going to make us so poor young men will beg to fight in a war. It's the same playbook from the first half of the 20th Century.
You think I'm paranoid? Give it 5 years.
We are at all-time highs in the stock market and equities, and they've laid off 400k SWEs in the last 16 months, while going on podcasts to tell us we are going to have more time to create and do what we love. We have to work to pay our bills. We don't want what's coming, but they're selling us a lie that this will solve all our problems. It will solve the ruling class's problems, and that will be it. You will have no bargaining chips, and you will be forced to take whatever morsels are given to you.
Your competency will be directly correlated 1:1 with the quantity and quality of tokens that you can afford or are given access to (or loaned??). We're literally at the beginning of a Black Mirror episode, before it gets dark.
People who grew up in the capitalist West have been brainwashed since they were 10 years old into believing they can be a billionaire too. No, you can't: there are 2k-3k of them and 8 billion of us.
These automation tools are the ultimate weapon for the ruling class to strip all value from your labor, and you're embracing that as a miracle. It's not; your life is in the process of being stripped of all meaning.
Good luck to everyone who agrees; we're going to need it. Anyone supporting these companies or helping enhance these models' capabilities: you're a class traitor and soon-to-be slave.
Required reading: https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...
I'm with you. Though what good choices do I have as an individual? It seems like all the choices are crap.
dude needs to chill
also:
> We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM.
no we won't lol wtf
but also: we will probably still have to do that anyways, but the LLM will help us and hopefully make it take less time
This just in: people who expect things to stay the same should steer clear of careers in technology. Art, too, come to think of it.
LLMs and AI more broadly certainly seem to have upended (or have the potential to upend) a lot of white-collar work outside of technology and art. Translators are one obvious example. Lawyers might be on the chopping block if they don't ban the use of AI for practicing law. Both seem about as far as you can get from "careers in technology," and in fact writing has pretty much always been framed as being on the opposite end of the spectrum from tech jobs, but is clearly vulnerable to technological progress.
Right now I can think of very few white-collar jobs that I would feel comfortable training 4+ years for (let alone spending money or taking on debt to do so). It is far from guaranteed that any 4-year degree you enroll in today will still have value in four years. That has basically never been true before, even in tech. Blue-collar jobs are clearly safer, but I wouldn't say safe. Robotics is moving fast too.
I really can't imagine the social effects of this reality being positive, absent massive and unprecedented redistribution of the wealth that the productivity of AI enables.
C came out in ~1972, and some people have been coding in C the entire time since. There's no inherent reason software can't stay the same for a long time :).
how elitist
I cannot empathise. If you love writing code, there is nothing stopping you writing code. I write code for fun with no commercial intent all the time, and have for decades. Very few oil painters had a salary.
This is a complaint about someone's job prospects, thinly wrapped in flowery language. I know that for some people (it seems especially prominent in Americans, I've found) their identity is linked to their job. This is a chance to work on that. You can decouple yourself and redefine yourself as a person.
Who knows? Once you're done you may go write some code for fun again.