Comment by bdcravens
3 days ago
I'm almost 50, and have been writing code professionally since the late 90s. I can pretty much see projects in my head, and know exactly what to build. I also get paid pretty well for what I do. You'd think I'd be the prototype for anti-AI.
I'm not.
I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.
I liken AI development to a developer somewhere between junior and mid-level: someone I can give a paragraph or two of thought-out instructions and have them bang out an hour of work. (The potential for stunting the growth of actual juniors into tomorrow's senior developers is a serious concern, but a separate problem to solve.)
> I love AI for speed running through all the boring stuff and getting to the good parts.
In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.
Companies need to realise that AI to go faster is great, but there's still a cognitive impact on the people. A little respite from the hardcore stuff is genuinely useful sometimes. Taking all of that away will be bad for people.
That said, some devs hate the boring easy bits and will thrive. As with everything, individuals need to be managed as individuals.
That makes me think of https://store.steampowered.com/app/2262930/Bombe/ which is a version of Minesweeper where instead of clicking on squares you define (parametric!) rules that propagate information around the board automatically. Your own rules skip all the easy parts for you. As a result, every challenge you get is by definition a problem that you've never considered before. It's fun, but also exhausting.
I remember listening to a talk about Candy Crush and how they designed the game to have a few easy levels in between the hard ones, to balance feeling like you're improving while also challenging players. If all the levels get progressively harder, then a lot of people lose motivation to keep playing.
Oooohhh....
That looks like plenty of hours of fun! Thanks for the link :)
I just tried that game and it looks very interesting but the interface is cryptic.
Interesting point.
There's also the fact that, while you're coding the easy stuff, your mind is thinking about the hard stuff, looking things up, seeing how the pieces fit together. If you're spending 100% of your time on hard stuff, you might be cutting out that preliminary work.
I haven't yet found a "boring, easy" problem that an AI could generate more easily than Vim.
This makes no sense. Yes, having time to think about the hard part is good, but just because you’re not doing the boilerplate anymore doesn’t mean you can’t do the thinking part anymore! See how absurd it sounds when you actually describe it this way?
> Devs often want the inherent safety of the boring, easy stuff for a while.
Part of the problem is that in big orgs, you need to show consistent progress in order to not get put on some PIP and kicked out of the company. There are performance review cycles and you have to show something continuously.
That ONLY works if you have boring, easy work. It's easy to deliver consistent progress on that.
Interesting and difficult work is nice only if you are trusted to try your best and given the freedom to fail. That's the nature of hard problems; progress in those domains is very sudden and Poissonian, not consistent by nature. If you're going to be judged on your ability to be sub-Poissonian and consistent, and get put on a PIP for not succeeding at it in one review cycle (possibly risking the income you use to put a roof over your head or feed your family), it's not worth the career risk to try difficult things.
Not saying this is the way I think, it's just the reality of how things often work in big orgs, and one of the reasons I dislike many big orgs.
> Devs often want the inherent safety of the boring, easy stuff for a while
That matches my experience. In my first job, every time a new webapp project started, it was fun. Not because of challenges or design, but simply because of the trivial stuff done for the n-th time - user accounts, login, password reset, admin panel. It probably should have been automated at that point, but we got away with reinventing the wheel each time.
>AI changes the job to be a constant struggle with hard problems
I find this hilarious. From what I've seen watching people do it, it changes the job from deep thought and figuring out a good design to pulling a lever on a slot machine and hoping something good pops out.
The studies that show diminished critical thinking have matched what I saw anecdotally pairing with people who vibe coded. It replaced deep critical thinking with a kind of faith-based gambler's mentality ("maybe if I tell it to think really hard it'll do it right next time...").
The only times I've seen a notable productivity improvement is when it was something non-novel where it didn't particularly matter if what popped out was shit - e.g. a proof of concept, an ad hoc app, something that would naturally either work or fail obviously, etc. The buzz people get from these gamblers' highs when it works seems to make them happier than if they didn't use it at all, though.
Which was my original point. Not that the outcome is shit. So much of what we write is absolutely low-skill and low-impact, but necessary and labor-intensive. Most of it is so basic and boilerplate you really can't look at it and know if it was machine- or human-generated. Why shouldn't that work get cranked out in seconds instead of hours? Then we can do the actual work we're paid to do.
To pair this with the comment you're responding to, the decline in critical thinking is probably a sign that there are many who aren't as senior as their paycheck suggests. AI will likely let us differentiate between who the architects/artisans are and who the assembly line workers are. Like I said, that's not a new problem; it's just that AI lays that truth bare. That will have an effect generation over generation, but that's been the story of progress in pretty much every industry since time immemorial.
I think there are two kinds of uses for these tools:
1) you try to explain what you want to get done
2) you try to explain what you want to get done and how to get it done
The first one is gambling; the second has a very small failure rate - at worst, the plan it presents shows it's not heading toward the solution you want.
This is an interesting take on it. I've been using Cursor at work and notice that days where I have it generate all the "easy parts", I end up feeling much more mentally exhausted than if I spend the day writing everything by hand.
That tracks. Being in flow on a task you're confident about is a low arousal, satisfying state.
Monitoring AI output on any task is high arousal, low satisfaction, unless you're constantly prompting for quick wins.
> AI changes the job to be a constant struggle with hard problems.
Very true. I think AI (especially Claude Code) forced me to actually think hard about the problem at hand before implementing the solution. And more importantly, to write down my thoughts before they flit away from my feeble mind. A discipline I wish I'd had before.
That's strange, I've never thought it can be done this way. Normally I'd read the docs, maybe sketch up some diagrams, then maybe take a walk while thinking on how to solve the problem, and by the time I got back to the screen I'd already have a good idea on how to do it.
These days the only difference is that I feed my ideas to a few different LLMs to have "different opinions". Usually they're crap but sometimes they present something useful that I can implement.
Yes I can see why some devs might prefer the safety of the boring and the familiar stuff. But the employers aren’t going to care about that. They’re going to hire and retain the devs who are more productive in the new era.
On the flip side, there have been lots of times where I personally didn’t have a lot of time to deeply research a topic (read papers, build prototypes of different ideas, etc) due to lack of time and resources. If all of the boring stuff is gone, and building prototypes is also 3x faster maybe what will end up happening is we can now use all of this free time to try lots of different ideas because the cost of exploration has been lowered.
I think you're describing things we already knew long before this era of AI. Less code is better code, and the vast majority of bugs come from the devs who "hate the boring easy bits".
I disagree that this has anything to do with people needing a break. All code eventually has to be reviewed. Regardless of who or what wrote it, writing too much of it is the problem. It's also worth considering how much more code could be eliminated if the business more critically planned what they think they want.
These tensions have existed even before computers and in all professions.
That's crazy to me. I solve problems. I'm not a janitor or tradesman, you bring me in to envision and orchestrate solutions that bring bottom line value. I live to crack hard nuts, if I never have to bother with rigging again I'll be so happy.
This is exactly why people hate AI. It disrupts the comfort of easy coding.
The main challenge with any creative effort, including and especially programming, is motivation. "Easy coding" gives you small mental wins that build your dopamine circuits and give you something to chase. When people say that "AI takes the fun out of coding" they mean that they're not getting those rewards anymore. It might make coding easier (though I'm not sure it actually does, ultimately), but in the process it takes away the motivation.
The ones who are excited about this are the ones who are motivated by the product. When AI can whip up some half-baked solution, it sure looks like you can focus on the product and "get someone to code it up for you". But unless it's a well-understood and previously executed solution, you're going to run into actual technical problems and have to fix them. And your motivation to deal with the irritating pedantries of the modern computing stack (which are the same as in all technology ever, with orders of magnitude more parts) hasn't been built up. There's no beneficial flywheel, just a fleet of the Sorcerer's Apprentice's mindless brooms that you hope you can get to work well enough to ship.
> In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.
The issue of senior-juniors has always been a problem; AI simply means they're losing their hiding spots.
I'm just slightly younger than you, but have the exact same sentiment. Hell, even moreso maybe, because what I realized is that "writing code to implement interesting ideas" is not really what I enjoy - it's coming up with the interesting ideas and experimenting with them. I couldn't care less about writing the code - and I only did it because I had to...if I wanted to see my idea come to life.
AI has also been a really good brainstorming partner - especially if you prompt it to disable sycophancy. It will tell you straight up when you are over-engineering something.
It's also wonderful at debugging.
So I just talk to my computer, brainstorm architectures and approaches, create a spec, then let it implement it. If it was a bad idea, we iterate. The iteration loop is so fast that it doesn't matter.
Did you ever regret a design choice, but normally you'd live with it because so much code would have to be changed? Not with agentic coding tools - they are great at implementing changes throughout the entire codebase.
And it's so easy to branch out to technologies you're not an expert in, and still be really effective as you gain that expertise.
I honestly couldn't be happier than I am right now. And the tools get better every week, sometimes a couple times a week.
> I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.
I'm in the same boat (granted, 10 years less) but can't really relate with this. By the time any part becomes boring, I start to automate/generalize it, which is very challenging to do well. That leaves me so little boring work that I speed run through it faster by typing it myself than I could prompt it.
The parts in the middle – non-trivial but not big picture – in my experience are the parts where writing the code myself constantly uncovers better ways to improve both the big picture and the automation/generalization. Because of that, there are almost no lines of code that I write that I feel I want to offload. Almost every line of code either improves the future of the software or my skills as a developer.
But perhaps I've been lucky enough to work in the same place for long. If I couldn't bring my code with me and had to constantly start from scratch, I might have a different opinion.
> By the time any part becomes boring, I start to automate/generalize it, which is very challenging to do well. That leaves me so little boring work that I speed run through it faster by typing it myself than I could prompt it.
The two aren't mutually exclusive. You can use AI to build your tooling. (Unless it's of sufficient complexity or value that you need to do the work yourself)
The time spent on the tooling is very low. Using AI for that would be like renting a flamethrower because a couple of times a year I like to go camping and light a fire. I'd rather just use a lighter.
>> developer somewhere between junior and mid-level,
Analogies to humans don't work that well. AI is super-human in some respects while also lacking the ability to continually work toward a goal over long periods of time. AI can do very little on its own - just short / scoped / supervised tasks.
However, sometimes the situation is reversed, AI is the teacher who provides some examples on how to do things or provides hints on how to explore a new area and knows how others have approached similar things. Then, sometimes, AI is an astute code reviewer, typically providing valuable feedback.
Anyway, I've stopped trying to anthropomorphize AI and simply try to reason about it based on working with it. That means combinations of direct ChatGPT usage with copy/paste/amend workflows, async-style / full-PR-style usage, one-shot "hail Mary" throwaway PRs just to establish an initial direction, as well as PR reviews of my own code. I'm using AI all the time, but never anything like how I would work with another human.
I have a couple of niche areas of non-coding interest where I'm using AI to code. It is so amazing to write Rust and just add `todo!(...)` throughout the boilerplate. The AI is miserable at implementing domain knowledge in those niche areas, but now I can focus on describing the domain knowledge (in real Rust code, because I can't describe it precisely enough in English + pseudocode), and then say "fill in the todos, write some tests, make sure it compiles and passes linting", verify the tests check things properly, and I'm done.
I've struggled heavily trying to figure out how to get it to write the exactly correct 10 lines of code that I need for a particularly niche problem, and so I've kind of given up on that, but getting it to write the 100 lines of code around those magic 10 lines saves me so much trouble, and opens me up to so many more projects.
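To make the `todo!()` workflow described above concrete, here's a minimal sketch (all names and the calibration logic are hypothetical, not from the original comment): the niche domain logic is written by hand, and the mechanical scaffolding is left as a `todo!()` for the model to fill in.

```rust
/// Hand-written: the "magic 10 lines" of niche domain knowledge
/// that the model reliably gets wrong. (Hypothetical calibration
/// curve standing in for real domain logic.)
fn calibrate(reading: f64) -> f64 {
    (reading * 0.98).clamp(0.0, 100.0)
}

/// Left for the model: mechanical parsing boilerplate.
/// `todo!()` compiles fine and only panics if actually called,
/// so the skeleton builds before the model fills it in.
#[allow(dead_code)]
fn parse_readings(_input: &str) -> Vec<f64> {
    todo!("parse one float per line, skipping malformed lines")
}

fn main() {
    // After prompting "fill in the todos, write some tests, make
    // sure it compiles and passes linting", parse_readings gets a
    // real body and the pipeline runs end to end.
    let calibrated: Vec<f64> = [10.0, 120.0].iter().map(|r| calibrate(*r)).collect();
    println!("{calibrated:?}");
}
```

The point is that the skeleton type-checks immediately, so the hand-written domain logic constrains what the model can plausibly generate around it.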
I have a similar view of AI.
I find it best as a "personal assistant," that I can use to give me information -sometimes, highly focused- at a moment's notice.
> The potential for then stunting the growth of actual juniors into tomorrow's senior developers is a serious concern
I think it's a very real problem. I am watching young folks being frozen out of the industry, at the very beginning of their careers. It is pretty awful.
I suspect that the executives know that AI isn't yet ready to replace senior-levels, but they are confident that it will, soon, so they aren't concerned that there aren't any more seniors being crafted from youngsters.
Would suck, if they bet wrong, though…
Exactly. I tend to like Hotz, but by his description, every developer is also "a compiler", so it's a useless argument.
My life quality (as a startup cofounder wearing many different hats across the whole stack) would drop significantly if Cursor-like tools [1] were taken away from me, because it takes me a lot of mental effort to push myself to do the boring task, which leads to procrastination, which leads to delays, which leads to frustration. Being able to offload such tasks to AI is incredibly valuable, and since I've been in this space from "day 1", I think I have a very good grasp on what type of task I can trust it to do correctly. Here are some examples:
- Add logging throughout some code
- Turn a set of function calls that have gotten too deep into a nice class with clean interfaces
- Build a Streamlit dashboard that shows some basic stats from some table in the database
- Rewrite this LLM prompt to fix any typos and inconsistencies - yeah, "compiling" English instructions into English code also works great!
- Write all the "create index" lines for this SQL table, so that <insert a bunch of search usecases> perform well.
[1] I'm actually currently back to Copilot Chat, but it doesn't really matter that much.
> Add logging throughout some code
That's one of the things that I wouldn't delegate to an LLM. Logging is like a report of things that happen. And just like a report, I need the most relevant and useful information.
...
A lot of these use cases actually describe the what. But the most important question is always the why. Why is it important to you? Or to the user? That's when things have a purpose and aren't just toys.
Code with logging is "self-reporting"; the act of adding the logging statements is not itself the reporting. Adding `logger.error(f"{job} failed")` is not reporting, and LLMs are perfectly capable of adding such statements in the applicable places.
As to why: it's because I'm building an app with a growing userbase and need to accommodate their requirements and build new features to stay ahead of the competition. Why you decided I'm describing a toy project is beyond me.
Exactly. If you know how the whole thing works end to end, AI makes you incredibly dangerous. Anyone who specializes or never really learned how everything works is at a huge disadvantage.
However. There's also good news. AI is also an amazing tool for learning.
So what I see AI doing is simply separating people who want to put effort forth and those who don't.
> However. There's also good news. AI is also an amazing tool for learning.
Absolutely. For example, I've been learning Autodesk Fusion, and after establishing a small foundation through traditional learning techniques, I've been able to turbocharge my learning by asking precise questions to AI.
I imagine this really sucks for those whose business model relied on gatekeeping knowledge. (like training companies)
I agree. Having AI write your entire application is equivalent to having it write a song for your band, or even generate the audio so you don't have to record it yourself.
If you aren't talented enough to write or record your own music, you aren't really a musician.
If you have a quick question about music theory and you want a quick answer, AI can be a benefit.
Yeah, I don't buy this. Creating art is in no way equivalent to making logic to perform a task.
This resonates with me. I'm also around the same age and have the same amount of experience.
I love AI and use it for both personal and work tasks for two reasons:
1. It's a way to bounce around ideas without (as much) bias as a human. This is indispensable because it gives you a fast feedback mechanism and validates a path.
2. It saves me typing and time. I give it one-shot "basic work" to do, and it's able to accomplish at least 80% of what I'd say is complete. Although it may not be 100%, it's still a net positive given the amount of time it saves me.
It's not lost on me that I'm effectively being trained to always add guardrails, be very specific about the instructions, and always check the work of AI.
> I can pretty much see projects in my head, and know exactly what to build.
This is where AI actually helps - you have a very precise vision of what you want, but perhaps you've forgotten about the specific names of certain API methods, etc. Maybe you don't want to implement all the cases by hand. Often validating the output can take just seconds when you know what it is you're looking for.
The other part of making the output do what you want is the ability to write a prompt that captures the most essential constraints of your vision. I've noticed the ability to write and articulate ideas well in natural language terms is the actual bottleneck for most developers. It takes just as much practice communicating your ideas as it does anything else to get good at it.
Yes, unfortunately the boring parts are what junior devs used to do so the senior devs could work on the good stuff. Now that AI is doing the boring stuff nobody has to hire those pesky jr developers anymore. Yay?
The problem is that junior developers are what we make senior developers with— so in 15 years, this is going to be yet another thing that the US used to be really good at, but is no longer capable of doing, just like many important trades in manufacturing. The manufacturers were all only concerned with their own immediate profit and made the basic sustainability of their workforce, let alone the health of the trades that supported their industries, a problem for everyone else to take care of. Well, everyone else did the same thing.
> The problem is that junior developers are what we make senior developers with— so in 15 years
In 15 years senior developers won't be needed either. Anyway, no company is obliged to worry about a 15-year timescale.
And nobody is obligated to make sure they aren’t walking off of a cliff.
Most people don't share your confidence that we will replace senior engineers, and I'd be gobsmacked if we could. Just like the magical 'automation' can't replace the people that actually make the physical things that the machines use to do their jobs, or fix the machines, no matter how good it gets. But the quantitatively-minded MBAs just kept kicking the can down the road and assumed it was someone else's problem that the end of the road was approaching. It wasn't their problem that there would be a problem in 30 years, and then it wasn't their problem when it was 10 years, and now that we're standing at the edge of a cliff, they're realizing that it's everybody's problem and it's going to be a hell of a lot more painful than if they'd had an extremely modest amount of foresight.
Now, US manufacturers are realizing that all of their skilled laborers are retiring or dying, and there isn’t enough time to transfer the more complex knowledge sets, like Tool and Die making, to a new set of apprentices. Many of these jobs are critical not only to national security, but also our country’s GDP because the things we do actually make are very useful, very specialized, and very expensive. Outsourcing jobs like making parts for fighter jets is really something we don’t want shipped overseas unless we want to see those parts pop up on aliexpress. If nobody is responsible for it and nobody wants to fund the government to fix it, but it is a real problem, it doesn’t take a genius to see the disconnect there.
It’s yet another place where we know our own capacity as a society is shrinking and hoping that ??? (Ai? Robots? Fusion?) will fix it before it’s too late. I never thought programming would join elder-care in this category though, that came as a surprise.
Would love to see a project you built with the help of AI, can you share any links?
Most of my work is for my employer, but the bigger point is that you wouldn't be able to tell my "AI work" from my other work because I primarily use it for the boring stuff that is labor-intensive, while I work on the actual business cases. (Most of my work doesn't fall under the category of "web application", but rather, backend and background-processing intensive work that just happens to have an HTML front-end)
https://github.com/williamcotton/webpipe
Shhh, WIP blog post (on webpipe powered blog)
https://williamcotton.com/articles/introducing-web-pipe
Yes, I wrote my own DSL, complete with BDD testing framework, to write my blog with. In Rust!
My blog source code written in webpipe:
http://github.com/williamcotton/williamcotton.com
100% agree. I am interested in seeing how this will change how I work. I'm finding that I'm now more concerned with how I can keep the AI busy and how I can keep the quality of outputs high. I believe it has a lot to do with how my projects are structured and documented. There are also some menial issues (e.g. structuring projects to avoid merge conflicts becoming bottlenecks).
I expect that in a year my relationship with AI will be more like a TL working mostly at the requirements and task definition layer managing the work of several agents across parallel workstreams. I expect new development toolchains to start reflecting this too with less emphasis on IDEs and more emphasis on efficient task and project management.
I think the "missed growth" of junior devs is overblown though. Did the widespread adoption of higher-level languages really hurt the careers of developers missing out on the days when we had to do explicit memory management? We're just shifting the skillset and removing the unnecessary overhead. We could argue endlessly about technical depth being important, but in my experience this hasn't ever been truly necessary to succeed in your career. We'll mitigate these issues the same way we do with higher-level languages - by first focusing on the properties and invariants of the solutions outside-in.
An important skill for software developers is the ability to reason about what the effects of their code will be over all possible conditions and inputs, as opposed to trial and error limited to specific inputs, or (as is the case with non-deterministic LLMs) limited to single executions. This skill is independent of whether you are coding in assembly or using higher-level languages and tooling. Using LLMs exactly doesn't train that skill, because the effective unpredictability of their results largely prevents any but the vaguest logical reasoning about the connection between the prompt and the specific output.
> Using LLMs exactly doesn’t train that skill
I actually think this is one skill LLMs _do_ train, albeit for an entirely different reason. Claude is fairly bad at considering edge cases in my experience, so I generally have to prompt for them specifically.
Even for entirely “vibe-coded” apps I could theoretically have created without knowing any programming syntax, I was successful only because I knew about possible edge cases.
> I love AI for speed running through all the boring stuff and getting to the good parts.
But the issue is that some of that speedrunning sometimes takes so much time it becomes inefficient. It's slowly improving (GPT-5 is incredible), but sometimes it gets stuck on a really mundane issue and regresses endlessly unless I intervene. And I'm talking about straightforward functional code.
What’s the tooling you’re using, and the workflow you find yourself drawn to that boosts productivity?
I've used many different ones, and find the result pretty similar. I've used Copilot in VS Code, Chat GPT stand-alone, Warp.dev's baked in tools, etc. Often it's a matter of what kind of work I'm doing, since it's rarely single-mode.
> I can pretty much see projects in my head, and know exactly what to build.
I think you’re the best case support for AI coding. You know clearly what you want, so you know clearly what you don’t want. So if you had decent verbal dexterity you could prompt the AI model and manage to accomplish what you intended.
A lot of programming problems / programmer contexts don’t match that situation. Which is the problem with universalizing the potency of AI / benefits of AI coding.
same. 50s, coding since the 90s, make more $$ than I can spend in 3 lifetimes. one constant thing in my 3-decade career has been that the absolute best people I worked with all had one common thread - absolute laziness. sounds strange, but every other trait of a great SWE has not been universal except laziness.
the laziness manifests itself as productivity, as crazy as this sounds. how? lazy people find a way to automate repetitive tasks. what I have learned from them over the years is that anything you do twice has to be automated, as the third time is around the corner :)
what does this have to do with AI? AI has taken automation to another level, allowing us to automate so much of our work that was not previously possible. I have found myriad ways to use AI, and several of my best (lazy) co-workers have as well. I cannot imagine doing my work anymore without it - not because of any “magic”, but because my lazy ass can now do all the things I've automated away.
Yes! Once I've figured out that this problem is best solved using parser combinators, and I have a good idea of how to model the transformation, I'm so glad I can delegate the work to LLM code gen and focus on the specification, test cases, corner cases, etc.
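For flavor, here's a minimal hand-rolled sketch of the parser-combinator shape being described - a parser as a function from input to an optional (value, rest-of-input) pair (illustrative only; a real project would reach for a crate like nom or chumsky):

```rust
/// Primitive parser: match one expected character, returning it
/// together with the unconsumed remainder of the input.
fn char_p<'a>(expected: char) -> impl Fn(&'a str) -> Option<(char, &'a str)> {
    move |input| {
        let mut chars = input.chars();
        match chars.next() {
            Some(c) if c == expected => Some((c, chars.as_str())),
            _ => None,
        }
    }
}

/// Combinator: run two parsers in sequence, keeping both results.
/// Failure of either parser fails the whole sequence.
fn pair<'a, A, B>(
    p1: impl Fn(&'a str) -> Option<(A, &'a str)>,
    p2: impl Fn(&'a str) -> Option<(B, &'a str)>,
) -> impl Fn(&'a str) -> Option<((A, B), &'a str)> {
    move |input| {
        let (a, rest) = p1(input)?;
        let (b, rest) = p2(rest)?;
        Some(((a, b), rest))
    }
}

fn main() {
    // Build "ab" from primitives; larger grammars compose the same way.
    let ab = pair(char_p('a'), char_p('b'));
    assert_eq!(ab("abc"), Some((('a', 'b'), "c")));
    assert_eq!(ab("xbc"), None);
    println!("combinators ok");
}
```

Once the shape of the primitives and combinators is fixed like this, the specification and test cases are where the real thinking lives - which is exactly the part worth keeping for yourself.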
I don't think that's contrary to the article's claim: the current tools are so bad and tedious to use for repetitive work that AI is helpful with a huge amount of it.
+1. I agree with George when he says that AI is many times being used as a replacement for poor tooling, but you know what? So be it.
That's exactly why I like AI too. I even let them play roles like "junior dev", "product owner" or "devops engineer" and orchestrate them to play together as a team - with guidance from me (usually the "solution architect" or "investor")! This "team" achieves in weeks what we usually needed months for - at €2.40/h per role!
This sounds a bit like creating a Humanoid robot to do dishes instead of a machine specifically designed to do dishes.
I can't tell if you are being sarcastic but this sounds absurd. Why let the AI be junior, why not an expert?
This persona driven workflow is so weird to me. Feels like stuck in old ways.
Avoiding context bloat and scoping the chain of thought
Per @vgr, LLMs are an old person’s technology. (I, too, am an old person.)
I have a similar relation to AI with programming -- and my sense is very many HN readers do as well, evidenced not least by the terrific experience report from antirez [1]. Yet it is rare to see such honest and open statements even here. Instead, HN is full of endless anti-AI submissions on the front page where the discussion underneath is just an echo chamber of ill-substantiated attacks on 'AI hype' and where anything else is down-voted.
It's what is, to me, so bizarre about the present moment: certainly investment is exceptionally high in AI (and of course use), but the dominant position in the media is precisely such a strange 'anti-AI hype' that positions itself as a brave minority position. Obviously, OpenAI/Altman have made some unfortunate statements in self-promotion, but otherwise I genuinely can't think of something I've read that expresses the position attacked by the anti-AI-ers -- even talk of 'AGI' etc comes from the AI-critical camp.
In a sense, the world seems divided into three: obvious self-promotion from AI companies that nobody takes seriously, ever-increasingly fervent 'AI critique', and the people who, mostly silent, have found modern AI with all its warts to be an incomparably useful tool across various dimensions of their life and work. I hope the third camp becomes more vocal so that open conversations about the ways people have found AI to be useful or not can be the norm not the exception.
[1] https://antirez.com/news/154
It’s hard to see how the current rate of progress is compatible with, 30 years from now, it being good business sense to pay human professionals six figure salaries. Opinions then split: the easiest option is pure denial, to assume that the current rate of progress doesn’t exist. Next easiest is to assume that progress will halt soon, then that we will be granted the lifestyle of well paid professionals when unsupervised AI can do our job for cheaper, then that Altman will at least deign to feed us.
antirez's success doesn't mean anything since antirez is far, far above average as a developer and is able to wield AI very effectively because of his talents and knowledge. Most developers are not remotely close to antirez's skill level (half are below average, by definition) and are likely going to suffer the problems that skeptics are already seeing.
> developer somewhere between junior and mid-level
Why the insistence on anthropomorphizing what is just a tool? It has no agency, does not 'think' in any meaningful manner, it is just pattern matching on a vast corpus of training data. That's not to say it can't be very useful - as you seem to have found - but it is still just a tool.
This isn’t necessarily anthropomorphizing, as from a company’s point of view, or uncharitably even from a tech leads point of view, developers are also just tools. The point being made is that LLMs (supposedly) fulfill those developer roles.
It's less about what the tool is, and more about the kind of work we often assign to less experienced developers. Pattern matching in meatspace is still pattern matching.
It's not anthropomorphising though, is it? It's just a comparison of the tool's ability. Like talking about the horsepower of an engine.
But we didn't do this with spreadsheets or word processors - and those tools really did replace small armies of clerical and secretarial workers.