AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'

2 days ago (theregister.com)

> “I think the skills that should be emphasized are how do you think for yourself? How do you develop critical reasoning for solving problems? How do you develop creativity? How do you develop a learning mindset that you're going to go learn to do the next thing?”

In the Swedish school system, the idea for the past 20 years has been exactly this, that is, to try to teach critical thinking, reasoning, problem solving etc. rather than hard facts. The results have been...not great. We discovered that reasoning and critical thinking are impossible without foundational knowledge of what to be critical about. I think the same can be said about software development.

  • I'm glad my east Asian mother put me through Saturday school for natives during my school years in Sweden.

    The most damning example I have about the Swedish school system is anecdotal: by attending Saturday school, I never had to study math in Swedish school (same for my Asian classmates). When I finished the 9th-grade Japanese school curriculum, taught ONLY one day per week (2h), I had already learned all of the advanced math covered in high school and never had to study math again until college.

    The focus on "no one left behind == no one allowed ahead" also meant that young me complaining that math was boring and easy didn't persuade teachers to let me move ahead; instead, they allowed me to sleep during the lectures.

    • > no one left behind == no one allowed ahead

      It's like this in the US too (or rather, it was 20 years ago, and I suspect it's worse now anyway).

      Teachers in my county were heavily discouraged from failing anyone, because pass rate became a target instead of a metric. They couldn't even give a 0 for an assignment that was never turned in without multiple meetings with the student and approval from an administrator.

      The net result was classes always proceeded at the rate of the slowest kid in class. Good for the slow kids (that cared), universally bad for everyone else who didn't want to be bored out of their minds. The divide was super apparent between the normal level and honors level classes.

      I don't know what the right answer is, but there was an insane amount of effort spent on kids who didn't care, whose parents didn't care, who hadn't cared since elementary school, and who always ended up dropping out as soon as they hit 18. There was no differentiation between them and the ones who really did give a shit and were just a little slow (usually because of a bad home life).

      It's hard to avoid leaving someone behind when they've already left themselves behind.

      18 replies →

    • >I'm glad my east Asian mother put me through Saturday school for natives during my school years in Sweden.

      I'm curious, could you share how your Saturday school's system worked? I'm very interested in knowing what a day of class was like, the general approach, etc.

      8 replies →

    • I have as much of a fundamental issue with “Saturday school” for children as I do with professionals thinking they should be coding on their days off. When do you get a chance to enjoy your childhood?

      7 replies →

    • It's better to leave no one behind than to focus solely on those ahead. Society needs a stable foundation and not more ungrateful privileged people.

      23 replies →

  • Most of what I remember of my high school education in France was: here are the facts, and here is the reasoning that got us there.

    The exams were typically essay-ish (even in science classes) where you either had to basically reiterate the reasoning for a fact you already knew, or use similar reasoning to establish/discover a new fact (presumably unknown to you because not taught in class).

    Unfortunately, it didn't work for me and I still have about the same critical thinking skills as a bottle of Beaujolais Nouveau.

    • I don't know if I have critical thinking or not. But I often question - WHY is this better? IS there any better way? WHY must it be done that way, or WHY does such a rule exist?

      For example, in electrical wiring you need at least a certain cross-section if you're running X amps over Y length. I want to dig down and understand why. Ohh, the smaller the cross-section, the more it heats! Armed with this info I get many more "Ohhs": Ohh, that's why you must ensure the connections are not loose. Ohh, that's why an old extension cord where the plug no longer clicks solidly into place is a fire hazard. Ohh, that's why I must ensure the connection is solid when joining cables and doesn't reduce the cross-section. Ohh, that's why it's a very bad idea to join bigger cables with a smaller one. Ohh, that's why it's a bad idea to solve "my fuse is blowing out" by inserting a bigger fuse; instead I must check whether the cabling can support the higher amperage (or whether the device really has to draw that much).

      And yeah, this "intuition" is kind of a discovery phase and I can check whether my intuition/discovery is correct.

      Basically getting down to primitives lets me understand things more intuitively without trying to remember various rules or formulas. But I've noticed my brain is heavily wired toward not remembering lots of things, but thinking logically instead. A rough numeric sketch of the wire-heating example is below.
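
      To make that concrete, here's a back-of-the-envelope sketch (made-up numbers, assuming copper resistivity and the simple I²R heating model, ignoring insulation and ambient effects):

          # Rough sketch of the heating intuition: R = rho * L / A, heat P = I^2 * R.
          RHO_COPPER = 1.68e-8  # ohm*metre, approximate resistivity of copper

          def wire_heat_watts(amps: float, length_m: float, cross_section_mm2: float) -> float:
              area_m2 = cross_section_mm2 * 1e-6           # mm^2 -> m^2
              resistance = RHO_COPPER * length_m / area_m2  # thinner or longer wire -> more ohms
              return amps ** 2 * resistance                 # heat released along the wire, in watts

          # Same 16 A load over 20 m of wire: halving the cross-section doubles the heat.
          print(wire_heat_watts(16, 20, 2.5))   # ~34 W spread along a 2.5 mm^2 wire
          print(wire_heat_watts(16, 20, 1.25))  # ~69 W in a wire half as thick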

      6 replies →

    • On the contrary, the French "dissertation" exercise requires you to articulate reasoning and facts, and to come up with a plan for the explanation. It is the same kind of thinking that you are required to produce when writing a scientific paper.

      It is, however, not taught very well by some teachers, who skimp on explaining how to properly do it, which might have been your case.

      2 replies →

    • > Unfortunately, it didn't work for me and I still have about the same critical thinking skills as a bottle of Beaujolais Nouveau.

      Why do you say so? Even just stating this probably means you are one or a few steps further...

      3 replies →

    • I’ve heard many bad things said of the Beaujolais Nouveau, and of my sense of taste for liking it, but this is the first time I’ve seen its critical-thinking skills questioned.

      In its/your/our defense, I think it’s a perfectly smart wine, and young at heart!

      1 reply →

    • > the same critical thinking skills as a bottle of Beaujolais Nouveau

      I'm loving this expression. May I please adopt it?

      1 reply →

  • > In the Swedish school system, the idea for the past 20 years has been exactly this, that is, to try to teach critical thinking, reasoning, problem solving etc. rather than hard facts. The results have been...not great.

    I'm not sure I'd agree that it's been outright "not great". I myself am a product of that precise school system, born in 1992 in Sweden (but now living outside the country). But I have vivid memories of some of the classes where we talked about how to learn, how to solve problems, critical thinking, reasoning, being critical of anything you read in newspapers, the difference between opinions and facts, how propaganda works and so on. This was probably through years/classes 7-9 if I remember correctly, and both I and others picked up on it relatively quickly, and I'm not sure I'd have the same mindset today if it wasn't for those classes.

    Maybe I was just lucky with good teachers, but surely there are others out there who also had a very different experience than what you outline? To be fair, I don't know how things work today, but at least at that time it actually felt like I got use out of what I was taught in those classes, compared to most other stuff.

  • This is, in my opinion, quite accurate.

    In the world of software development I meet a breed of Swedish devs younger than 30 who can't write code very well, but who can wax lyrical about Jira tickets and software methodologies and do all sorts of things to get into a management position without having to write code. The end result is toxic teams where the seniors and the devs brought in from India are writing all the code while all the juniors are playing software architect, scrum master and product owner.

    Not everybody is like that; seniors tend to be reliable and practical, and some juniors with programming-related hobbies are extremely competent and reasonable. But the chunk of "waxers" is big enough to be worrying.

  • I have heard that in the Netherlands there used to be (not sure if it is still there) a system where you have, for example, 4 rooms of children. Room A contains all the children that are ahead of rooms B, C and D. If a child in room B learns pretty quickly, the child is moved to room A. However, if the child falls behind the other children in room B, that child is moved to room C. Same for room C: those who cannot catch up are moved to room D. In this way everyone is learning at max capacity. Those who can learn faster and better are not slowed down by others who can not (or do not want to) keep the pace. Everyone is happy - children, teachers, parents, community.

  • > The results have been...not great.

    Sweden ranks 19th in the PISA scores, and it is in the upper section of all education indexes. There has been a worldwide decline in scores, but that has nothing to do with the Swedish education system. (That does not mean that Sweden should not continue monitoring it and making improvements.)

    From Swedish news: https://www.sverigesradio.se/artikel/swedish-students-get-hi...

    - Swedish students' skills in maths and reading comprehension have taken a drastic downward turn, according to the latest PISA study.

    - Several other countries also saw a decline in their PISA results, which are believed to be a consequence of the Covid-19 pandemic.

    • Considering our past and Finland's progress (they considered following us in the 80s/90s, as they had done before, but stopped), 19th is a disappointment.

      Having teenagers who've been through most of primary and secondary school, I kind of agree with GP, especially when it comes to math etc.

      Teaching concepts and ideas is _great_, and it's what we need in order to manage advanced topics as adults. HOWEVER, if the foundations are shaky due to too little repetition of the basics (which is seemingly frowned upon in the system), then being taught to think about abstract concepts doesn't help much, because the tools to understand them aren't good enough.

    • One should note that from the nineties onwards we put a large portion of our kids' education on the stock exchange and in the hands of upper class freaks instead of experts.

  • I think there’s a balance to be had. My country (Spain) is the very opposite, with everything from university access to civil service exams being memory focused.

    The result is usually bottom of the barrel in the subjects that don’t fit that model well, mostly languages and math - the latter being the main issue as it becomes a bottleneck for teaching many other subjects.

    It also creates a tendency for people to take what they learn as truth, which becomes an issue when they use less reputable sources later in life - think for example a person taking a homeopathy course.

    Lots of parroting and cargo culting paired with limited cultural exposure due to monolingualism is a bad combination.

  • > what to be critical about

    Media can fill that gap. People should be critical about global warming, antivax, anti-Israel, anti-communism, racism, hate, white men, anti-democracy, Russia, China, Trump...

    This thing is bad, I hate it, problem solved! Modern critical thinking is pretty simple!

    In the future the government can provide a daily RSS feed of things to be critical about. You can reduce the national schooling system to a single VPS server!

  • The problem is, in a capitalist society, who is going to be the company that will donate their time and money to teaching a junior developer who will simply go to another company for double the pay after 2 years?

  • I think that’s a disingenuous take. Earlier in the piece the AWS CEO specifically says we should teach everyone the correct ways to build software despite the ubiquity of AI. The quote about creative problem solving was with respect to how to hire/get hired in a world where AI can let literally anyone code.

  • > The results have been...not great.

    Well, I kind of disagree. The results are bad mainly because we have mass immigration from low-education countries with extremely bad cultures.

    If you look at the numbers, it's easy to say Swedes are stupid, when in fact ethnic Swedes do very well in school.

  • Here is the thing though.

    You can’t teach critical thinking like that.

    You need to teach hard facts and then people can learn critical thinking inductively from the hard facts with some help.

I completely agree.

On a side note.. y'all must be prompt wizards if you can actually use the LLM code.

I use it for debugging sometimes to get an idea, or for a quick sketch of a UI.

As for actual code.. the code it writes is a huge mess of spaghetti, overly verbose, with serious performance and security risks, and a complete misunderstanding of pretty much every design pattern I give it..

  • I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?

    Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.

    • > B2B SaaS

      Perhaps that's part of it.

      People here work on all kinds of industries. Some of us are implementing JIT compilers, mission-critical embedded systems or distributed databases. In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.

      21 replies →

    • Perhaps the issue is that you were used to writing 200k lines of code. Most engineers would be aghast at that. Lines of code are a debit, not a credit.

      17 replies →

    • There is definitely a divide in users - those for whom it works and those for whom it doesn't. I suspect it comes down to what language and what tooling you use. People doing web-related or Python work seem to be doing much better than people doing embedded C or C++. Similarly, doing C++ in a popular framework like Qt also yields better results. When the system design is not pre-defined or rigid like in Qt, then you get completely unmaintainable code as a result.

      If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.

      38 replies →

    • It's interesting how LLM enthusiasts will point to problems like IDE, context, model etc. but not the one thing that really matters:

      Which problem are you trying to solve?

      At this point my assumption is they learned that talking about this question will very quickly reveal that "the great things I use LLMs for" are actually personal throwaway pieces, not to be extended above triviality or maintained over longer than a year. Which, I guess, doesn't make for a great sales pitch.

      3 replies →

    • And also ask: "How much money do you spend on LLMs?"

      In the long run, that is going to be what drives their quality. At some point the conversation is going to evolve from whether or not AI-assisted coding works to what the price point is to get the quality you need, and whether or not that price matches its value.

    • I deal with a few code bases at work and the quality differs a lot between projects and frameworks.

      We have 1-2 small Python services based on Flask and Pydantic, very structured and with a well-written development and extension guide. The newer Copilot models perform very well with this, and improving the dev guidelines keeps making it better. Very nice.

      We also have a central configuration of applications in the infrastructure and what systems they need. A lot of similarly shaped JSON files, now with a well-documented JSON schema (which is nice to have anyway). Again, very high quality. Someone recently joked we should throw these service requests at a model and let it create PRs to review.

      But currently I'm working in Vector and its Vector Remap Language... it's enough of a mess that I'm faster working without any Copilot "assistance". I think the main issue is that there is very little VRL code out in the open, and the remaps depend on a lot of unseen context, which one would have to work on giving to the LLM. I've had similar experiences with OPA and a few more of these DSLs.

    • > It's like we live in different worlds.

      There is the huge variance in prompt specificity as well as the subtle differences inherent to the models. People often don't give examples when they talk about their experiences with AI so it's hard to get a read on what a good prompt looks like for a given model or even what a good workflow is for getting useful code out of it.

      9 replies →

    • My AI experience has varied wildly depending on the problem I'm working on. For web apps in Python, they're fantastic. For hacking on old engineering calculation code written in C/C++, it's an unmitigated disaster and an active hindrance.

      5 replies →

    • It’s not just you, I think some engineers benefit a lot from AI and some don’t. It’s probably a combination of factors including: AI skepticism, mental rigidity, how popular the tech stack is, and type of engineering. Some problems are going to be very straightforward.

      I also think it’s that people don’t know how to use the tool very well. In my experience I don’t guide it to do any kind of software pattern or ideology. I think that just confuses the tool. I give it very little detail and have it do tasks that are evident from the code base.

      Sometimes I ask it to do rather large tasks and occasionally the output is like 80% of the way there and I can fix it up until it’s useful.

      6 replies →

    • I think it's down to language and domain more than tools.

      No model I've tried can write, usefully debug, or even explain CMake. (It invents new syntax if it gets stuck; I often have to prompt multiple AIs to know whether even the first response in the context was made up.)

      My luck with embedded C has been atrocious for existing codebases (burning millions of tokens), but passable for small scripts (Arduino projects).

      My experience with Python is much better: suggesting relevant libraries and functions, debugging odd errors, or even writing a small script on its own. Even the original GitHub Copilot, which I got access to early, was excellent with Python.

      A lot of people that seem to have fully embraced agentic vibe-coding seem to be in the web or node.js domain, which I've not done myself since pre-AI.

      I've tried most (free or trial) major models and schemes in the hope of finding any of them useful, but haven't found much use yet.

    • > It's like we live in different worlds.

      We probably do, yes. The web domain compared to a cybersecurity firm compared to embedded will have very different experiences, because there's clearly a lot more code to train on for one domain than another (for obvious reasons). You can have colleagues at the same company or even on the same team with drastically different experiences because they might be in the weeds on a different part of the tech.

      > I then carefully review and test.

      If most people did this, I would have 90% fewer issues with AI. But as we expect, people see shortcuts and use them to cut corners, not to give themselves more time to polish the edges.

    • What tech stack do you use?

      Betting in advance that it's JavaScript or Python, probably with very mainstream libraries or frameworks.

      11 replies →

    • As a practical example, I've recently tried out v0's new updated systems to scaffold a very simple UI where I can upload screenshots from videogames I took and tag them.

      The resulting code included an API call to run arbitrary SQL queries against the DB. Even after pointing this out, this API call was not removed or at least secured with authentication rules, but instead /just/hidden/behind/obscure/paths...

    • It could be the language. Almost 100% of my code is written by AI; I supervise as it creates and steer it in the right direction. I configure the code agents with examples of all the frameworks I'm using. My choice of Rust might be disproportionately providing better results, because cargo, the expected code structure, examples, docs, and error messages are so well thought out in Rust that the coding agents can really get very far. I work on 2-3 projects at once, cycling through them and supervising their work. Most of my work is simulation, physics and complex robotics frameworks. It works for me.

    • I agree, it's like they looked at GPT 3.5 one time and said "this isn't for me"

      The big 3 - Opus 4.1, GPT-5 High, Gemini 2.5 Pro -

      Are astonishing in their capabilities, it's just a matter of providing the right context and instructions.

      Basically, "you're holding it wrong"

    • Do you not think part of it is just whether employers permit it or not? My conglomerate employer took a long time to get started and has only just rolled out agent mode in GH Copilot, but even that is in some reduced/restricted mode vs the public one. At the same time we have access to lots of models via an internal portal.

      1 reply →

    • I am also constantly astonished.

      That said, observing attempts by skeptics to “unsuccessfully” prompt an LLM has been illuminating.

      My reaction is usually either:

      - I would never have asked that kind of question in the first place.

      - The output you claim is useless looks very useful to me.

    • I think people react to AI with strong emotions, which can come from many places, anxiety/uncertainty about the future being a common one, strong dislike of change being another (especially amongst autists, who, I would guess based on myself and my friend circle, are quite common around here). Maybe that explains a lot of the spicy hot takes you see here and on lobsters? People are unwilling to think clearly or argue in good faith when they are emotionally charged (see any political discussion). You basically need to ignore any extremist takes entirely, both positive and negative, to get a pulse on what's going on.

      If you look, there are people out there approaching this stuff with more objectivity than most (mitsuhiko and simonw come to mind, have a look through their blogs, it's a goldmine of information about LLM-based systems).

    • B2B SaaS is in most cases a sophisticated mask over some structured data, perhaps with great UX, automation and convenience, so I can see LLMs being more successful there, all the more so because there is more training data and many processes are streamlined. Not all domains are equal; go try to develop a serious game (not yet another simple, broken arcade clone) with LLMs and you'll have a different take.

    • It really depends, and can be variable, and this can be frustrating.

      Yes, I’ve produced thousands of lines of good code with an LLM.

      And also yes, yesterday I wasted over an hour trying to define a single Docker service block for my docker-compose setup. Constant hallucination; eventually I had to cross-check everything and discovered it had no idea what it was doing.

      I’ve been doing this long enough to be a decent prompt engineer. Continuous vigilance is required, which can sometimes be tiring.

    • GitHub Copilot, Microsoft Copilot, Gemini, Lovable, GPT, Cursor with Claude models, you name it.

    • Lines of code is not a useful metric for anything. Especially not productivity.

      The less code I write to solve a problem the happier I am.

    • It could be because your job is boilerplate derivatives of well solved problems. Enjoy the next 1 to 2 years because yours is the job Claude is coming to replace.

      Stuff Wordpress templates should have solved 5 years ago.

    • Honestly the best way to get good code, at least with TypeScript and JavaScript, is to have like 50 ESLint plugins.

      That way it constantly yells at Sonnet 4 and gets the code into at least a better state.

      If anyone is curious, I have a massive ESLint config for TypeScript that really gets good code out of Sonnet.

      But before I started doing this, the code it wrote was so buggy, and it was constantly trying to duplicate functions into separate files etc.

  • I agree. AI is a wonderful tool for making fuzzy queries on vast amounts of information. More and more I'm finding that Kagi's Assistant is my first stop before an actual search. It may help inform me about vocabulary I'm lacking which I can then go successfully comb more pages with until I find what I need.

    But I have not yet been able to consistently get value out of vibe coding. It's great for one-off tasks. I use it to create matplotlib charts just by telling it what I want and showing it the schema of the data I have. It nails that about 90% of the time. I have it spit out close-ended shell scripts; recently I had it write me a small CLI tool to organize my RAW photos into a directory structure I want by reading the EXIF data and sorting the images accordingly. It's great for this stuff (a rough sketch of that kind of script is at the end of this comment).

    But anything bigger it seems to do useless crap. Creates data models that already exist in the project. Makes unrelated changes. Hallucinates API functions that don't exist. It's just not worth it to me to have to check its work. By the time I've done that, I could have written it myself, and writing the code is usually the most pleasurable part of the job to me.

    I think the way I'm finding LLMs to be useful is that they are a brilliant interface to query with, but I have not yet seen any use cases I like where the output is saved, directly incorporated into work, or presented to another human that did not do the prompting.
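
    To give a flavor, this is roughly the shape of that kind of close-ended script (a made-up sketch, not the actual code it generated; it assumes Pillow and plain JPEGs rather than RAW files):

        # Made-up sketch: sort photos into YYYY-MM folders by EXIF capture date.
        import shutil
        from pathlib import Path
        from PIL import Image

        EXIF_DATETIME = 306  # standard EXIF tag id for the DateTime field

        def capture_month(path: Path) -> str:
            try:
                stamp = Image.open(path).getexif().get(EXIF_DATETIME)  # "YYYY:MM:DD hh:mm:ss"
            except OSError:
                stamp = None
            return stamp[:7].replace(":", "-") if stamp else "undated"

        def organize(src: Path, dest: Path) -> None:
            for photo in src.glob("*.jpg"):
                folder = dest / capture_month(photo)  # e.g. dest/2024-06
                folder.mkdir(parents=True, exist_ok=True)
                shutil.move(str(photo), str(folder / photo.name))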

    • Have you tried Opus? It's what got me past using LLMs only marginally. Standard disclaimers apply in that you need to know what it's good for and guide it well, but there's no doubt at this point it's a huge productivity boost, even if you have high standards - you just have to tell it what those standards are sometimes.

      1 reply →

  • I just had Claude Sonnet 4 build this for me: https://github.com/kstenerud/orb-serde

    Using the following prompt:

        Write a rust serde implementation for the ORB binary data format.
    
        Here is the background information you need:
    
        * The ORB reference material is here: https://github.com/kstenerud/orb/blob/main/orb.md
        * The formal grammar dscribing ORB is here: https://github.com/kstenerud/orb/blob/main/orb.dogma
        * The formal grammar used to describe ORB is called Dogma.
        * Dogma reference material is here: https://github.com/kstenerud/dogma/blob/master/v1/dogma_v1.0.md
        * The end of the Dogma description document has a section called "Dogma described as Dogma", which contains the formal grammar describing Dogma.
    
        Other important things to remember:
    
        * ORB is an extension of BONJSON, so it must also implement all of BONJSON.
        * The BONJSON reference material is here: https://github.com/kstenerud/bonjson/blob/main/bonjson.md
        * The formal grammar desribing BONJSON is here: https://github.com/kstenerud/bonjson/blob/main/bonjson.dogma
    

    Is it perfect? Nope, but it's 90% of the way there. It would have taken me all day to build all of these ceremonious bits, and Claude did it in 10 minutes. Now I can concentrate on the important parts.

    • First and foremost, it's a 404. Probably a mistake, but I chuckled a bit when someone says "AI built this thing and it's 90% there" and then posts a dead link.

      1 reply →

  • What tooling are you using?

    I use aider and your description doesn't match my experience, even with a relatively bad-at-coding model (gpt-5). It does actually work and it does generate "good" code - it even matches the style of the existing code.

    Prompting is very important, and in an existing code base the success rate is immensely higher if you can hint at a specific implementation - i.e. something a senior who is familiar with the codebase somewhat can do, but a junior may struggle with.

    It's important to be clear eyed about where we are here. I think overall I am still faster doing things manually than iterating with aider on an existing code base, but the margin is not very much, and it's only going to get better.

    Even though it can do some work a junior could do, it can't ever replace a junior human... because a junior human also goes to meetings, drives discussions, and eventually becomes a senior! But management may not care about that fact.

  • The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions. I use the Duck Duck Go AI a lot to answer questions. I trust it about as far as I can throw the datacenter it resides in, but it's useful for quickly verifiable things. Especially stuff like syntax and command line options for various programs.

    • > The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions.

      One thing I can guarantee you is that this won't last. No sane MBA will ignore that revenue stream.

      Image hosting services, all over again.

      10 replies →

  • It's one of those you get what you put in kind of deals.

    If you spend a lot of time thinking about what you want, describing the inner workings, edge cases, architecture and library choices, and put that into a thoughtful markdown, then maybe after a couple of iterations you will get half decent code. It certainly makes a difference between that and a short "implement X" prompt.

    But it makes one think - at that point (writing a good prompt that is basically a spec), you've basically solved the problem already. So LLM in this case is little more than a glorified electric typewriter. It types faster than you, but you did most of the thinking.

    • Right, and then after you do all the thinking and the specs, you have to read and understand and own every single line it generated. And speaking for myself, I am nowhere near as good at thinking through code I am reviewing as thinking through code I am writing.

      Other people will put up PRs full of code they don't understand. I'm not saying everyone who is reporting success with LLMs is doing that, but I hear it a lot. I call those people clowns, and I'd fire anyone who did that.

      8 replies →

  • I've built 2 SaaS applications with LLM coding, one of which was expanded and released to enterprise customers and is in good use today. Note that I've got years of dev experience, I follow context and documentation prompts, and I'm using common LLM languages and stacks like TypeScript, Python, React and AWS infra.

    Now it requires me to fully review all code and understand what the LLM is doing at the function, class and API level (in fact it works better at the method or component level for me), and I had a lot of cleanup work (and lots of frustration with the models) on the codebase, but overall there's no way that I could equal the velocity I have now without it.

    • I think the other important step, for a large enterprise SaaS with millions of lines of code, is to reject code your engineers submit that they can't explain. I myself reject, I'd say, 30% of the code the LLMs generate, but the power is in being able to stay focused on larger problems while rapidly implementing smaller accessory functions that enable that continued work without stopping to add another engineer to the task.

      I've definitely 2-4X'd depending on the task. For small tasks I've definitely 20X'd myself for some features or bugfixes.

  • I do frontend work (React/TypeScript). I barely write my own code anymore, aside from CSS (the LLMs have no aesthetic sensibilities). Just prompting with Gemini 2.5 Pro. Sometimes Sonnet 4.

    I don't know what to tell you. I just talk to the thing in plain but very specific English and it generally does what I want. Sometimes it will do stupid things, but then I either steer it back in the direction I want or just do it myself if I have to.

  • I agree with the article but also believe LLM coding can boost my productivity and my ability to write code over long stretches. Sure, getting it to write a whole feature carries a high risk. But getting it to build out a simple API with examples above and below it is a piece of cake; it takes a few seconds and would have taken me a few minutes.

  • The bigger the task, the more messy it'll get. GPT5 can write a single UI component for me no problem. A new endpoint? If it's simple, no problem. The risk increases as the complexity of the task does.

    • I break complex task down into simple tasks when using ChatGPT just like I did before ChatGPT with modular design.

  • AI is really good at writing tests.

    AI is also pretty good if you get it to do small chunks of code for you. This means you come with the architecture, the implementation details, and how each piece is structured. When I walk AI through each unit of code I find the results are better, and it's easier for me to address issues as I progress.

    This may seem somewhat redundant, though. Sometimes it's faster to just do it yourself. But with a toddler who hates sleep, I've found I've been able to maintain my velocity... even on days I get 3 hrs of sleep.

  • The AI agents tend to fail for me with open ended or complex tasks requiring multiple steps. But I’ve found it massively helpful if you have these two things: 1) a typed language… better if strongly typed 2) your program is logically structured and follows best practices and has hierarchical composition.

    The agents are able to iterate and work with the compiler until they get it right, and the combination of 1 and 2 means there are fewer possible "right answers" to whatever problem I have. If I structure my prompts to basically fill in the blanks of my code in specific areas, it saves a lot of time. Most of what I prompt for is something that's already been done, and usually one Google search away. This saves me the time of searching it up, figuring out whatever syntax I need, etc.

  • I don't code every day and am not an expert. Supposedly the sort of casual coder that LLMs are supposed to elevate into senior engineers.

    Even I can see they have big blind spots. As the parent said, I get overly verbose code that does run, but is nowhere near the best solution. Well, for really common problems and patterns I usually get a good answer. Need a more niche problem solved? You'd better brush up on your Googling skills and do some research if you care about code quality.

  • If you actually believe this, you're either using bad models or just terrible at prompting and giving proper context. Let me know if you need help, I use generated code in every corner of my computer every day

  • My favourite code smell that LLMs love to introduce is redundant code comments.

        // assign "bar" to foo
        const foo = "bar";

    They love to do that shit. I know you can prompt it not to. But the amount of PRs I'm reviewing these days that have those types of comments is insane.

  • I see LLM coding as hinting on steroids. I don't trust it to actually write all of my code, but sometimes it can get me started, like a template.

  • The code LLMs write is much better than mine. Way less shortcuts and spaghetti. Maybe that means that I am a lousy coder but the end result is still better.

  • I haven't had that experience, but I tend to keep my prompts very focused with a tightly limited scope. Put a different way, if I had a junior or mid level developer, and I wanted them to create a single-purpose class of 100-200 lines at most, that's how I write my prompts.

  • Likewise with PowerShell. It's good for giving you an approach or some ideas, but copy/paste fails about 80% of the time.

    Granted, I may be an inexpert prompter, but at the same time, I'm asking for basic things, as a test, and it just fails miserably most of the time.

  • I've been pondering this for a while. I think there's an element of dopamine that LLMs bring to the table. They probably don't make a competent senior engineer much more productive if at all, but there's that element of chance that we don't get a lot of in this line of work.

    I think a lot of us eventually arrive at a point where our jobs get a bit boring and all the work starts to look like some permutation of past work. If instead of going to work and spending two hours adding some database fields and writing some tests, you had the opportunity to either:

    A) Do the thing as usual in the predictable two hours

    B) Spend an hour writing a detailed prompt as if you were instructing a junior engineer on a PIP to do it, and doing all the typical cognitive work you'd have done normally and then some, but then instead of typing out the code in the next hour, you have a random chance to either press enter, and tada the code has been typed and even kinda sorta works, after this computer program was "flibbertigibbeting" for just 10 minutes. Wow!

    Then you get that sweet dopamine hit that tells you you're a really smart prompt engineer who did a two hour task in... cough 10 minutes. You enjoy your high for a bit, maybe go chat with some subordinate about how great your CLAUDE.md was and if they're not sure about this AI thing it's just because they're bad at prompt engineering.

    Then all you have to do is cross your t's and dot your i's and it's smooth sailing from there. Except, it's not. Because you (or another engineer) will probably find architectural/style issues when reviewing the code (conventions you explicitly told it to follow, but it ignored), and you'll have to fix those. You'll also probably be sobering up from your dopamine rush by now, and realize that you have to review all the other lines of AI-generated code, which you could have just typed correctly once.

    But now you have to review with an added degree of scrutiny, because you know it's really good at writing text that looks beautiful, but is ever so slightly wrong in ways that might even slip through code review and cause the company to end up in the news.

    Alternatively, you could yolo and put up an MR after a quick smell test, making some other poor engineer do your job for you (you're a 10x now, you've got better things to do anyway). Or better yet, just have Claude write the MR, and don't even bother to read it. Surely nobody's going to notice that your "acceptance criteria" section says to make sure the changes have been tested on both Android and Apple, even though you're building a microservice for an AI-powered smart fridge (mostly just a fridge, except every now and then it starts shooting ice cubes across the room at mach 3). Then three months later someone who never realized there are three different identical "authenticate" functions spends an hour scratching their head about why the code they're writing is not doing anything (because it's actually running another redundant function that nobody ever catches in MR review, because it's not reflected in a diff).

    But yeah, that 10 minute AI magic trick sure felt good. There are times when work is dull enough that option B sounds pretty good, and I'll dabble. But I'm not sure where this AI stuff leads, and I'm pretty confident it won't be taking over our jobs any time soon (an ever-increasing quota of H1Bs and STEM OPT student visas working for 30% less pay, on the other hand, might).

  • It's just that being the dumbest thing we ever heard still doesn't stop some people from doing it anyway. And that goes for many kinds of LLM application.

  • I think it has a lot to do with skill level. Lower-skilled developers seem to feel it gives them a lot of benefit. Higher-skilled developers just get frustrated looking at all the errors it produces.

  • I hate to admit it, but it is the prompt (call it context if you like; it includes tools). The model is important, the window/tokens are important, but direction wins. The codebase is also important: greenfield gets much better results, so much so that we may throw away 40 years of wisdom designed to help humans code amongst each other and use design patterns that will disgust us.

  • Could the quality of your prompt be related to our differing outcomes? I have decades of pre-AI experience and I use AI heavily. If I let it go off on its own, it's not as good as when I constrain and hand-hold it.

  • > y'all must be prompt wizards

    Thank you, but I don’t feel that way.

    I’d ask you a lot of details…what tool, what model, what kind of code. But it’d probably take a lot to get to the bottom of the issue.

  • Not only a prompt wizard, you need to know what prompts are bad or good and also use bad/lazy prompts to your advantage

  • Sounds like you are using it entirely wrong then...

    Just yesterday I uploaded a few files of my code (each about 3000+ lines) into a GPT-5 project and asked for assistance in changing a lot of database calls into a caching system, and it proceeded to create a full 500-line file with all the caching objects and functions I needed. Then we went section by section through the main 3000+ line file to change parts of the database queries into the cached version. [I didn't even really need to do this, it basically detected everything I would need to change at once and gave me most of it, but I wanted to do it in smaller chunks so I was sure what was going on.]

    Could I have done this without AI? Sure.. but this was basically like having a second pair of eyes validating what I'm doing. And it saved me a bunch of time since I'm not writing everything from scratch. I have the base template of what I need and I can improve it from there.

    All the code it wrote was perfectly clean.. and this is not a one-off, I've been using it daily for the last year for everything. It almost completely replaces my need to have a junior developer helping me. (A rough sketch of the caching pattern is below.)
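
    For illustration, roughly the shape of the pattern (a made-up sketch, not the actual generated file; the names and the TTL are invented, and a real multi-instance deployment would still need shared invalidation, which the reply below asks about):

        # Made-up sketch of a per-process read-through cache wrapping DB lookups.
        # get_user_from_db and the 60-second TTL are illustrative stand-ins only.
        import time
        from functools import wraps

        def cached(ttl_seconds: float):
            store = {}  # args -> (expires_at, value)
            def decorator(fn):
                @wraps(fn)
                def wrapper(*args):
                    now = time.monotonic()
                    hit = store.get(args)
                    if hit is not None and hit[0] > now:
                        return hit[1]            # still fresh: serve from the cache
                    value = fn(*args)            # miss or stale: hit the database
                    store[args] = (now + ttl_seconds, value)
                    return value
                wrapper.invalidate = lambda *args: store.pop(args, None)
                return wrapper
            return decorator

        def get_user_from_db(user_id: int) -> dict:
            return {"id": user_id}  # stand-in for the real query

        @cached(ttl_seconds=60)
        def get_user(user_id: int) -> dict:
            return get_user_from_db(user_id)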

    • You mean it turned on Hibernate, or it wrote some custom-rolled in-app cache layer?

      I usually find these kinds of caching solutions to be extremely complicated (well, the cache-invalidation part) and I'm a bit curious what approach it took.

      You mention it only updated a single file, so I guess it's not making any updates to the session handling; either sticky sessions are not assumed or something else is going on. So then how do you invalidate the app-level cache for a user across all machine instances? I have a lot of trauma from the old web days of people figuring this out, so I'm really curious to hear how this AI one-shotted it in a single file.

      1 reply →

    • I know, I don't understand what problems people are having with getting usable code. Maybe the models don't work well with certain languages? It works great with C++. I've gotten thousands of lines of clean, compiles-on-the-first-try, obviously correct code from ChatGPT, Gemini, and Claude.

      I've been assuming the people who are having issues are junior devs, who don't know the vocabulary well enough yet to steer these things in the right direction. I wouldn't say I'm a prompt wizard, but I do understand context and the surface area of the things I'm asking the llm to do.

      6 replies →

    • How large is that code-base overall? Would you be able to let the LLM look at the entirety of it without it crapping out?

      It definitely sounds nice to go and change a few queries, but did it also consider the potential impacts in other parts of the source or in adjacent running systems? The query itself here might not be the best example, but you get what I mean.

At least one CEO seems to get it. Anyone touting this idea of skipping junior talent in favor of AI is dooming their company in the long run. When your senior talent leaves to start their own companies, where will that leave you?

I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning. Any time I use it I can hear my algebra teacher telling me not to use a calculator or I won’t learn anything.

But overall I’m starting to feel like AI is simply the natural culmination of US economic policy for the last 45 years: short term gains for the top 1% at the expense of a healthy business and the economy in the long term for the rest of us. Jack Welch would be so proud.

  • > When your senior talent leaves to start their own companies, where will that leave you?

    The CEO didn't express any concerns about "talent leaving". He is saying "keep the juniors" but he's implying "fire the seniors". This is in line with long-standing industry trends and it's confirmed by the following quote from the OP:

    >> [the junior replacement] notion led to the “dumbest thing I've ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.

    He is pushing for more of the same, viewing competence and skill as threats and liability to be "fixed". He's warning the industry to stay the course and keep the dumbing-down game moving as fast as possible.

    • Well that's even stupider. What do you do when your juniors get better at using your tools?

      The 2010s tech boom happened because big tech knew a good engineer is worth their weight in gold, and not paying them well meant they'd be headhunted after as little as a year of work. What's gonna happen when this repeats (if we're assuming AI makes things much more efficient)?

      ----

      And that's my kindest interpretation. One that assumes that a junior and a senior using a prompt will have a very small gap to begin with. Even seniors seem to struggle right now with current models working at scale on legacy code.

      1 reply →

    • 100%, and this is him selling the new batch of AWS agent tools. If your product requirements + “Well Architected” NFRs are expressed as input, AWS wants to run it and extract your cost of senior engineers as value for him.

    • Also fits in very well with Amazon's famously low average tenure and hiring practices.

  • I think AI has overall helped me learn.

    There are lots of personal projects that I have wanted to build for years but have pushed off because the “getting started cost” is too high, I get frustrated and annoyed and don’t get far before giving up. Being able to get the tedious crap out of the way lowers the barrier to entry and I can actually do the real project, and get it past some finish line.

    Am I learning as much as I would had I powered through it without AI assistance? Probably not, but I am definitely learning more than I would if I had simply not finished (or even started) the project at all.

    • What was your previous approach? From what I've seen, a lot of people are very reluctant to pick up a book or read through documentation before they try stuff. And then they get exposed to a "cryptic" error message and throw in the towel.

      2 replies →

  • > At least one CEO seems to get it.

    > (…)

    > I’m not even sure AI is good for any engineer

    In that case I’m not sure you really agree with this CEO, who is all-in on the idea of LLMs for coding, going so far as to proudly say 80% of engineers at AWS use it and that that number will only rise. Listen to the interview, you don’t even need ten minutes.

  • > I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning.

    Yes, but when there are certain mundane things in that discovery that are hindering my ability to get work done, AI can be extremely useful. It can be incredibly helpful in giving high level overviews of code bases or directing me to parts of codebases where certain architecture lives. Additionally, it exposes me to patterns and ideas I hadn't originally thought of.

    Now, if I just take whatever is spit out by AI as gospel, then I'd be inclined to agree with you in saying AI is bad, but if you use it correctly, like any other tool, it's fantastic.

  • The whole premise of thinking we don't need juniors is just silly. If there are no juniors, eventually there will be no seniors. AI slop ain't gonna un-slop itself.

  • > When your senior talent leaves to start their own companies, where will that leave you?

    In the case of Amazon with a shit ton of money to throw at a team of employees to crush your little startup?

    • Imagine you have a shit ton of money but only agents that generate 10% bad code? You won't be crushing or beating anyone..

  • You also risk senior talent who stay but don't want to change or adapt, at least not with any urgency. AI will accelerate that journey of discovery and learning, so juniors are going to learn super fast.

    • That’s still to be determined. Blindly accepting code suggestions thrown at you without understanding them is not the same thing as learning.

    • >will accelerate that journey of discovery and learning,

      Okay, but what about work output? That seems to be the only thing business cares about.

      Also, maybe it's the HN bias, but I don't see this notion that old engineers are rejecting this en masse. More younger people will embrace it, but most younger people haven't mucked around in legacy code yet (the lifeblood of any business).

In the last few months we have worked with startups who have vibe coded themselves into an abyss. Either because they never made the correct hires in the first place or they let technical talent go. [1]

The thinking was that they could iterate faster, ship better code, and have an always-on 10x engineer in the form of Claude Code.

I've observed perfectly rational founders become addicted to the dopamine hit as they see Claude Code output what looks like weeks or years of software engineering work.

It's overgenerous to allow anyone to believe AI can actually "think" or "reason" through complex problems. Perhaps we should be measuring time saved typing rather than cognition.

[1] vibebusters.com

  • Shush please. I wasn't old enough to cash in on the Y2K contracting boom; I'm hoping the vibe-coded 200k LOC B2B AI slop "please help us scale to 200 users" contracting gigs will be lucrative.

  • Completely agree, software developers need to be using agentic coding as a writing tool not as a thinking tool.

  • As if startups before LLMs were creating great code. Right now on the front page, a YC company is offering a “Founding Full Stack Engineer” $100K-$150K. What quality of code do you think they will end up with?

    https://www.ycombinator.com/companies/text-ai/jobs/OJBr0v2-f...

    • Notably, that is a company that... adds AI to group chats. Startups offering crap salaries with a vague promise of equity in a vague product idea with no moat are a dime a dozen, and have been well before LLMs came around.

      15 replies →

  • Give it a year or 2. It's not like 2 years ago everyone wasn't saying it would be 10+ years before AI could do what it does now.

      >It's not like 2 years ago everyone wasn't saying it would be 10+ years before AI could do what it does now.

      So far I don't see that notion disproved. AI still doesn't truly "reason about" or understand the data it outputs.

> I think the skills that should be emphasized are how do you think for yourself?

Independent thinking is indeed the most important skill to have as a human. However, I sympathize with the younger generations, as they have become the primary target of this new technology that looks to make money by completely replacing some of their thinking.

I have a small child and took her to see a disney film. Google produced a very high quality long form advert during the previews. The ad portrays a lonely young man looking for something to do in the evening that meets his explicit preferences. The AI suggests a concert, he gets there and locks eyes with an attractive young woman.

Sending a message to lonely young men that AI will help reduce loneliness. The idea that you don't have to put any effort into gaining adaptive social skills to cure your own loneliness is scary to me.

The advert is complete survivor bias. For each success in curing your boredom, how many failures are there with lonely young depressed men talking to their phone instead of friends?

Critical thinking starts at home with the parents. Children will develop beliefs from their experience and confirm those beliefs with an authority figure. You can start teaching mindfulness to children at age 7.

Teaching children mindfulness requires a tremendous amount of patience. Now the consequence of lacking patience is outsourcing your child's critical thinking to AI.

  • You should read the story "The Perfect Match" from the book The Paper Menagerie and Other Stories by Ken Liu; it goes into what you mentioned about Google.

> “How's that going to work when ten years in the future you have no one that has learned anything,”

Pretty obvious conclusion that I think anyone who's thought seriously about this situation has already come to. However, I'm not optimistic that most companies will be able to keep themselves from doing this kind of thing, because I think it's become rather clear that it's incredibly difficult for most leadership in 2025 to prioritize long-term sustainability over short-term profitability.

That being said, internships/co-ops have been popular from companies that I'm familiar with for quite a while specifically to ensure that there are streams of potential future employees. I wonder if we'll see even more focus on internships in the future, to further skirt around the difficulties in hiring junior developers?

If AI is truly this effective, we would be selling 10x-10Kx more stuff, building 10x more features (and more quickly), improving quality & reliability 10x. There would be no reason to fire anyone because the owners would be swimming in cash. I'm talking good old-fashioned greed here.

You don't fire people if you anticipate a 100x growth. Who cares about saving 0.1% of your money in 10 years? You want to sell 100x / 1000x/ 10000x more .

So the story is hard to swallow. The real reason is as usual, they anticipate a downturn and want to keep earnings stable.

  • Exactly. If the AI can multiply everyone's power by hundred or thousand, you want to keep all people who make a positive contribution (and only get rid of those who are actively harmful). With sufficiently good AI, perhaps the group of juniors you just fired could have created a new product in a week.

    • Even within the AI paradigm, you could keep the juniors to validate and test the AI-generated code. You still need some level of acceptance testing for the increased production. And the juniors could be producing automation engineering at or above the level of the product code they were producing prior to AI. A win-win (more production & more career growth).

      In other words, none of these stories make any sense, even if you take the AI superpower at face value.

He wants educators to instead teach “how do you think and how do you decompose problems”

Amen! I attend this same church.

My favorite professor in engineering school always gave open book tests.

In the real world of work, everyone has full access to all the available data and information.

Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.

Doing this is called "engineering". And this is what this professor taught.

  • In undergrad I took an abstract algebra class. It was very difficult and one of the things the teacher did was have us memorize proofs. In fact, all of his tests were the same format: reproduce a well-known proof from memory, and then complete a novel proof. At first I was aghast at this rote memorization - I maybe even found it offensive. But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it! Moreover, producing the novel proofs required the same kinds of "components" and now because they were "installed" in my brain I could use them more intuitively. (Looking back I'd say it enabled an efficient search of a tree of sequences of steps).

    Memorization is not a panacea. I never found memorizing l33t code problems to be edifying. I think it's because those kinds of tight, self-referential, clever programs are far removed from the activity of writing applications. Most working programmers do not run into a novel algorithm problem but once or twice a career. Application programming has more the flavor of a human-mediated graph-traversal, where the human has access to a node's local state and they improvise movement and mutation using only that local state plus some rapidly decaying stack. That is, there is no well-defined sequence for any given real-world problem, only heuristics.
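    To make the first point concrete, here is the flavor of proof in question, sketched in LaTeX. It is not from that course; it is just a standard one-liner from group theory, chosen because once you see why each step holds there is essentially nothing left to memorize:

      % Claim: the identity element of a group G is unique.
      Let $e$ and $e'$ both be identities of a group $G$.
      Then $e \cdot e' = e$ \quad (because $e'$ is an identity),
      and $e \cdot e' = e'$ \quad (because $e$ is an identity),
      hence $e = e'$. \qed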

    • Memorizing is a superpower / skill. I work in a ridiculously complex environment and have to learn and know so much. Memorization and spaced repetition are like little islands my brain can start building bridges between. I used to think memorizing was anti-first-principles, but it is just good. Our brains can memorize so much if we make them. And then we can connect and pattern-match using higher-order thinking.

      4 replies →

    • Hmmm... It's the other way around for me. I find it hard to memorise things I don't actually understand.

      I remember being given a proof of why RSA encryption is secure. All the other students just regurgitated it. It made superficial sense I guess.

      However, I could not understand the proof and felt quite stupid. Eventually I went to my professor for help. He admitted the proof he had given was incomplete (and showed me why it still worked). He also said he hadn't expected anyone to notice it wasn't a complete proof.

      9 replies →

    • During my elementary school years, there was a teacher who told me that I didn't need to memorize things as long as I understood them. I thought he was the coolest guy ever.

      Only when I reached my late twenties did I realize how wrong he was. Memorization and understanding go hand in hand, but if one of them has to come first, it's memorization. He probably said that because that was what kids (who were forced to do rote memorization) wanted to hear.

      16 replies →

    • My controversial education hot take: Pointless rote memorization is bad and frustrating, but early education could use more directed memorization.

      As you discovered: A properly structured memorization of carefully selected real world material forces you to come up with tricks and techniques to remember things. With structured information (proofs in your case) you start learning that the most efficient way to memorize is to understand, which then reduces the memorization problem into one of categorizing the proof and understanding the logical steps to get from one step to another. In doing so, you are forced to learn and understand the material.

      Another controversial take (for HN, anyway) is that this is what happens when programmers study LeetCode. There's a meme that the way to prep for interviews is to "memorize LeetCode". You can tell who hasn't done much LeetCode interviewing if they think memorizing a lot of problems is a viable way to pass interviews. People who attempt this discover that there are far too many questions to memorize, and the best jobs have already written their own questions that aren't out of LeetCode. Even if you do get a direct LeetCode problem in an interview, a good interviewer will expect you to explain your logic, describe how you arrived at the solution, and might introduce a change if they suspect you're regurgitating memorized answers.

      Instead, the strategy that actually works is to learn the categories of LeetCode-style questions, understand the much smaller number of algorithms, and learn how to apply them to new problems. It's far easier to memorize the dozen or so patterns used in LeetCode problems (binary search, two pointers, greedy, backtracking, and so on) and then learn how to apply those. By practicing you're not memorizing the specific problems, you're teaching yourself how to apply algorithms. (A small sketch of one such pattern follows after the side note below.)

      Side note: I’m not advocating for or against LeetCode, I’m trying to explain a viable strategy for today’s interview format.
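      As an illustration of what one of those reusable patterns looks like, here is a minimal two-pointer sketch in Python; the specific problem (find a pair summing to a target in a sorted list) is just an arbitrary example, not any particular interview question:

        def pair_with_sum(sorted_nums, target):
            """Two-pointer pattern: walk inward from both ends of a sorted list."""
            lo, hi = 0, len(sorted_nums) - 1
            while lo < hi:
                s = sorted_nums[lo] + sorted_nums[hi]
                if s == target:
                    return sorted_nums[lo], sorted_nums[hi]
                if s < target:
                    lo += 1   # sum too small: advance the left pointer
                else:
                    hi -= 1   # sum too large: retreat the right pointer
            return None

        print(pair_with_sum([1, 3, 4, 7, 11], 14))  # (3, 11)

      Once the pattern itself is understood, the individual problems that use it don't need to be memorized at all.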

      5 replies →

    • Fortunately that was not my experience in abstract algebra. The tests and homework were novel proofs that we hadn't seen in class. It was one of my favorite classes / subjects. Someone did tell me in college that they did the memorization thing in German Universities.

      Code-wise, I spent a lot of time in college reading other people's code. But no memorization. I remember David Betz advsys, Tim Budd's "Little Smalltalk", and Matt Dillon's "DME Editor" and C compiler.

      1 reply →

    • I would wager some folks can memorize without understanding? I do think memorization is underrated, though.

      There is also something to the practice of reproducing something. I always took this as a form of "machine learning" for us. Just as you get better at juggling by actually juggling, you get better at thinking about math by thinking about math.

      1 reply →

    • Interesting, I had the same problem and suffered in grades back in school simply because I couldn't memorize much without understanding. However, I seemed to be the only one, because every single other student, including those with top grades, was happy to memorize and regurgitate. I wonder how they're doing now.

    • My abstract algebra class had it exactly backwards. It started with a lot of needless formalism culminating in Galois theory. This was boring to most students, as they had no clue why the formalism was invented in the first place.

      Instead, I wish it had shown how the sausage was actually made in the original writings of Galois [1]. This would have been far more interesting to students, as it showed the struggles that went into making the product - not to mention the colorful personality of the founder.

      The history of how concepts were invented for the problems faced is far more motivating to students to build a mental model than canned capsules of knowledge.

      [1] https://www.ams.org/notices/201207/rtx120700912p.pdf

      1 reply →

    • Depends on the subject - I can remember multiple subjects where the teacher would give you a formula to memorise without explaining why or where it came from. You had to take it as an axiom. The teachers also didn't say: hey, if you want to know how we arrived at this, have a read here. No, it was just given.

      Of course you could also say that's for the student to find out, but I've had other things on my mind.

    • >Memorization is not a panacea.

      It is what you memorize that is important: you can't have a good discussion about a topic if you don't have the facts and logic of the topic in memory. On the other hand, using memory to paper over bad design instead of simplifying or properly modularizing it leads to that 'the worst code I have seen is code I wrote six months ago' feeling.

    • Your comment about memorizing as part of understanding makes a lot of sense to me, especially as one possible technique to get unstuck in grasping a concept.

      If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?

      4 replies →

    • Is it the memorisation that had the desired effect or the having to come up with the novel proofs? Many schools seem to do the memorising part, but not the creating part.

    • I find it's helpful to have context to frame what I'm memorizing to help me understand the value.

    • Indeed, not just math. Biology requires immense amounts of memorization. Nature is littered with exceptions.

    • > But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it!

      This may be true of mathematical proofs, but it surely must not be true in general. Memorizing long strings of digits of pi probably isn’t much easier if you understand geometry. Memorizing famous speeches probably isn’t much easier if you understand the historical context.

      3 replies →

    • It's funny, because I had the exact opposite experience with abstract algebra.

      The professor explained things, we did proofs in class, we had problem sets, and then he gave us open-book semi-open-professor take-home exams that took us most of a week to do.

      Proof classes were mostly fine. Boring, sometimes ridiculously shit[0], but mostly fine. Being told we have a week for this exam that will kick our ass was significantly better for synthesizing things we'd learned. I used the proofs we had. I used sections of the textbook we hadn't covered. I traded some points on the exam for hints. And it was significantly more engaging than any other class' exams.

      [0] Coming up with novel things to prove that don't require some unrelated leap of intuition that only one student gets is really hard to do. Damn you Dr. B, needing to figure out that you have to define a third equation h(x) as (f(x) - g(x))/(f(x) + g(x)) as the first step of a proof isn't reasonable in a 60 minute exam.

    • memorization + application = comprehension. Rinse and repeat.

      Whether leet code or anything else.

    • Mathematics pedagogy today is in a pretty sorrowful state due to bad actors and willful blindness at all levels that require public trust.

      A dominant majority in public schools starting late 1970s seems to follow the "Lying to Children" approach which is often mistakenly recognized as by-rote teaching but are based in Paulo Freire's works that are in turn based on Mao's torture discoveries from the 1950s.

      This approach, contrary to classical approaches, leverages a torturous process which seems to be purposefully built to fracture and weed out the intelligent individual from useful fields, imposing thresholds of stress sufficient to induce PTSD or psychosis, selecting for and filtering in favor of those who can flexibly/willfully blind/corrupt themselves.

      Such sequences include Algebra->Geometry->Trigonometry where gimmicks in undisclosed changes to grading cause circular trauma loops with the abandonment of Math-dependent careers thereafter, similar structures are also found in Uni, for Economics, Business, and Physics which utilize similar fail-scenarios burning bridges where you can't go back when the failure lagged from the first sequence, and you passed the second unrelated sequence. No help occurs, inducing confusion and frustration to PTSD levels, before the teacher offers the Alice in Wonderland Technique, "If you aren't able to do these things, perhaps you shouldn't go into a field that uses it". (ref Kubark Report, Declassified CIA Manual)

      Have you been able to discern whether these "patterns" as you've called them aren't just the practical reversion to the classical approach (Trivium/Quadrivium)? Also known as the first-principles approach after all the filtering has been done.

      To compare: Classical approaches start with nothing but a useful real system and observations which don't entrench false assumptions as truth, which are then reduced to components and relationships to form a model. The model is then checked for accuracy against current data to separate truth from false in those relationships/assertions in an iterative process with the end goal being to predict future events in similar systems accurately. The approach uses both a priori and a posteriori components to reasoning.

      Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions) which are tested and must be learned to competency (growing those neurons close together). Upon the next iteration one must unlearn the false parts while relearning the true parts (but we can't really unlearn, we can only strengthen or weaken) which in turn creates inconsistent mental states imposing stress (torture). This is repeated in an ongoing basis often circular in nature (structuring), and leveraging psychological blindspots (clustering), with several purposefully structured failings (elements) to gatekeep math through torturous process which is the basis for science and other risky subject matter. As the student progresses towards mastery (gnosis), the systems become increasingly more useful. One must repeatedly struggle in their sessions to learn, with the basis being if you aren't struggling you aren't learning. This mostly uses a faux a priori reasoning without properties of metaphysical objectivity (tied to objective measure, at least not until the very end).

      If you don't recognize this, an example would be the electrical water pipe pressure analogy. Diffusion of charge in-like materials, with Intensity (Current) towards the outermost layer was the first-principled approach pre-1978 (I=V/R). The Water Analogy fails when the naive student tries to relate the behavior to pressure equations that ends up being contradictory at points in the system in a number of places introducing stumbling blocks that must be unlearned.

      Torture being the purposefully directed imposition of psychological stress beyond an individual's capacity to cope, towards physiological stages of heightened suggestibility and mental breakdown (where rational thought is reduced or non-existent in the intelligent).

      It is often recognized by its characteristic subgroups of Elements (cognitive dissonance, a lack of agency to remove oneself and coercion/compulsion with real or perceived loss or the threat thereof), Structuring (circular patterns of strictness followed by leniency in a loop, fractionation), and Clustering (psychological blindspots).

      11 replies →

  • It's the core problem facing the hiring practices in this field. Any truly competent developer is a generalist at heart. There is value to be had in expertise, but unless you're dealing with a decade(s) old hellscape of legacy code or are pushing the very limits of what is possible, you don't need experts. You'd almost certainly be better off with someone who has experience with the tools you don't use, providing a fresh look and cover for weaknesses your current staff has.

    A regular old competent developer can quickly pick up whatever stack is used. After all, they have to; Every company is their own bespoke mess of technologies. The idea that you can just slap "15 years of React experience" on a job ad and that the unicorn you get will be day-1 maximally productive is ludicrous. There is always an onboarding time.

    But employers in this field don't "get" that. For regular companies they're infested by managers imported from non-engineering fields, who treat software like it's the assembly line for baking tins or toilet paper. Startups, who already have fewer resources to train people with, are obsessed with velocity and shitting out an MVP ASAP so they can go collect the next funding round. Big Tech is better about this, but has its own problems going on, and it seems that the days of Big Tech being the big training houses are also over.

    It's not even a purely collective problem. Recruitment is so expensive, but all the money spent chasing unicorns & the opportunity costs of being understaffed just get handwaved. Rather spend $500,000 on the hunt than $50,000 on training someone into the role.

    And speaking of collective problems. This is a good example of how this field suffers from having no professional associations that can stop employers from sinking the field with their tragedies of the commons. (Who knows, maybe unions will get more traction now that people are being laid off & replaced with outsourced workers for no legitimate business reason.)

    • > Rather spend $500,000 on the hunt than $50,000 on training someone into the role.

      Capex vs opex, that's the fundamental problem at heart. It "looks better on the numbers" to have recruiting costs than to have to set aside a senior developer plus pay the junior for a few months. That is why everyone and their dog only wants to hire seniors: they have the skillset and experience such that you can sit their ass in front of any random semi-fossil project and they'll figure it out on their own.

      If the stonk analysts would go and actually dive deep into the numbers to look at hiring side costs (like headhunter expenses, employee retention and the likes), you'd see a course change pretty fast... but this kind of in-depth analysis, that's only being done by a fair few short-sellers who focus on struggling companies and not big tech.

      In the end, it's a "tragedy of the commons" scenario. It's fine if a few companies do that, it's fine if a lot of companies do that... but when no one wants to train juniors any more (because they immediately get poached by the big ones), suddenly society as a whole has a real and massive problem.

      Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.

      6 replies →

    • I can’t think of another career where management continuously does not understand the realities of how something gets built. Software best practices are on their face orthogonal to how all other parts of a business operate.

      How does marketing operate? In a waterfall-like model. How does finance operate? In a waterfall-like model. How does product operate? Well, you can see where this is going.

      Then you get to software, and it's 2-week sprints, test-driven development, etc., and it decidedly works best not on a waterfall model but on shipping in increments.

      Yet the rest of the business does not work this way; it's the same old top-down model as the rest.

      This, I think, is why so few companies or even managers/executives "get it".

      3 replies →

    • >For regular companies they're infested by managers imported from non-engineering fields

      Someone's cousin, let's leave it at that, someone's damn cousin or close friend, or anyone else with merely a pulse. I've had interviews where the company had just been turned over from the people that mattered, and you. could. tell.

      One couldn't even tell me why the project I needed to do for them ::rolleyes::, their own boilerplate code (which they said would run), would have runtime issues, and I needed to self-debug it just to get it to a starting point.

      It's like, Manager: Oh, here's this non-tangential thing that they tell me you need to complete before I can consider you for the position.... Me: Oh, can I ask you anything about it?.... Manager: No

    • Could not agree more. Whenever I hear monikers like "Java developer" or "Python developer" as a job description, I roll my eyes slightly.

  • Isn't that happening already? Half the usual CS curriculum is either math (analysis, linear algebra, numerical methods) or math in anything but name (computability theory, complexity theory). There's a lot of very legitimate criticism of academia, but most of the times someone goes "academia is stupid, we should do X" it turns out X is either:

    - something we've been doing since forever

    - the latest trend that can be picked up just-in-time if you'll ever need it

    • I've worked in education in some form or another for my entire career. When I was in teacher education in college . . . some number of decades ago . . . the number one topic of conversation and topic that most of my classes were based around was how to teach critical thinking, effective reasoning, and problem solving. Methods classes were almost exclusively based on those three things.

      Times have not changed. This is still the focus of teacher prep programs.

    • Parent comment is literally praising an experience they had in higher education, but your only takeaway is that it must be facile ridicule of academia.

      1 reply →

    • In CS, it's because it came out of math departments in many cases and often didn't even really include a lot of programming because there really wasn't much to program.

      2 replies →

  • When I was in college the philosophy program had the marketing slogan: “Thinking of a major? Major in thinking”.

    Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding. Of course I'm biased as a dual CS/philosophy major, but it's very rare that I'm looking for someone who can just write a lot of code. Especially with juniors, as analytical thinking is way harder to teach than how to program.

    • > Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding.

      The humanities, especially the classic texts, cover human interaction and communication in a very compact form. My favorite sources are the Bible, Cicero, and Machiavelli. For example, Machiavelli says that if you do bad things to people you should do them all at once, while good things you should spread out over time. This is common sense. Once you catch the flavor of his thinking, it's pretty easy to work other situations out for yourself, in the same way that good engineering classes teach you how to decompose and solve technical problems.

    • The #1 problem in almost all workplaces is communication-related. In almost all the jobs I've had in 25-30 years, finding out what needs to be done and what is broken is much harder than actually doing it.

      We have these sprint planning meetings and the like where we throw estimates on the time some task will take but the reality is for most tasks it's maybe a couple dozen lines of actual code. The rest is all what I'd call "social engineering" and figuring out what actually needs to be done, and testing.

      Meanwhile upper management is running around freaking out because they can't find enough talent with X years of Y [language/framework] experience, imagining that this is the wizard power they need.

      The hardest problem at most shops is getting business domain knowledge, not technical knowledge. Or at least creating a functioning pipeline between the people with the business knowledge and the people with the technical knowledge.

      Anyways, yes I have 3/4 a PHIL major and it actually has served me well. My only regret is not finishing it. But once I started making tech industry cash it was basically impossible for me to return to school. I've met a few other people over the years like me, who dropped out in the 90s .com boom and then never went back.

      2 replies →

    • This is also why I went into the Philosophy major - knowing how to learn and how to understand is incredibly valuable.

      Unfortunately in my experience, many, many people do not see it that way. It's very common for folks to think of philosophy as "not useful / not practical".

      Many people hear the word "philosophy" and mentally picture "two dudes on a couch recording a silly podcast", and not "investigative knowledge and in-depth context-sensitive learning, applied to a non-trivial problem".

      It came up constantly in my early career, trying to explain to folks, "no, I actually can produce good working software and am reasonably good at it, please don't hyper-focus on the philosophy major, I promise I won't quote Scanlon to you all day."

      3 replies →

    • Many top STEM schools have substantial humanities requirements, so I think they agree with you.

      At Caltech they require a total of at least 99 units in humanities or social sciences. 1 Caltech unit is 1 hour of work a week for each week of the term, and a typical class is 9 units consisting of 3 hours of classwork a week and 6 hours of homework and preparation.

      That basically means that for 11 of the 12 terms that you are there for a bachelor's degree, you need to be taking a humanities or social sciences class. They require at least 4 of those to be in humanities (English, history, history and philosophy of science, humanities, music, philosophy, and visual culture), and at least 3 to be in social sciences (anthropology, business economics and management, economics, law, political science, psychology, and social science).

      At MIT they have similar, but more complicated, requirements. They require humanities, art, and social sciences, and they require that you pick at least one subject in one of those and take more than one course in it.

    • I worked for someone who I believe was undergrad philosophy and then got a masters in CS.

    • On a related note, the most accomplished people I've met didn't have degrees in the fields where they excelled and won awards. They were all philosophy majors.

      Teaching people to think is perhaps the world's most under-rated skill.

      3 replies →

    • I would say you have some bias.

      Yes, sometimes you need people who can grasp the tech and talk to managers. They might be intermediaries.

      But don't ignore the nerdy guys who have been living deeply in a tech ecosystem all their lives. The ones who don't dabble in everything. (The Wozniaks.)

  • A professor in my very first semester called "crazy finger syndrome" the attempts to go straight to the code without decomposing the problem from a business or user perspective. It was a long time ago. It was a CS curriculum

    I miss her jokes against anxious nerds that just wanted to code :(

    Don't forget the rise of boot camps, where some educators are not always aligned with higher ethical standards.

    • > "crazy finger syndrome" - the attempts to go straight to the code without decomposing the problem from a business or user perspective

      Years ago I started on a new team as a senior dev, and did weeks of pair programming with a more junior dev to intro me to the codebase. His approach was maddening; I called it "spray and pray" development. He would type out lines or paragraphs of the first thing that came to mind just after sitting down and opening an editor. I'd try to talk him into actually taking even a few minutes to think about the problem first, but it never took hold. He'd be furiously typing, while I would come up with a working solution without touching a keyboard, usually with a whiteboard or notebook, but we'd have to try his first. This was c++/trading, so the type-compile-debug cycle could be 10's of minutes. I kept relaying this to my supervisor, but after a few months of this he was let go.

    • I make a point to solve my more difficult problems with pen and paper drawings and/or narrative text before I touch the PC. The computer is an incredibly distracting medium to work with if you are not operating under clear direction. Time spent on this forum is a perfect example.

  • Memorization and closed-book tests are important for some areas. When seconds count, the ER doctor cannot go look up how to treat a heart attack. That doctor also needs to know not only how to treat the common heart attack, but how to recognize the 1-in-10,000 case that isn't a heart attack at all yet has exactly the same symptoms, and give it the correct treatment.

    However most of us are not in that situation. It is better for us to just look up those details as we need them because it gives us more room to handle a broader variety of situations.

    • Humans will never outcompete AI in that regard, however. Industry will eventually optimize for humans and AI separately: AI will know a lot and think quickly; humans will provide judgement and legal accountability. We're already on this path.

  • Speaking with a relative who is a doctor recently it’s interesting how much each of our jobs are “troubleshooting”.

    Coding, doctors, plumber… different information, often similar skill sets.

    I worked a job doing tech support for some enterprise level networking equipment. It was the late 1990s and we were desperate for warm bodies. Hired a former truck driver who just so happened to do a lot of woodworking and other things.

    Great hire.

  • Everyone going through STEM needs to see the movie Hidden Figures for a variety of reasons, but one bit stands out as poignant: I believe it was Katherine Johnson who, asked to calculate some rocket trajectory to determine the landing coordinates, thinks on it a bit and finally says, "Aha! Newton's method!" Then she runs down to the library to look up how to apply Newton's method. She had the conceptual tools to find a solution, but didn't have all the equations memorized. Having all the equations in short-term memory only matters in a (somewhat pathological) school setting.
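    As a minimal sketch of what she could then reconstruct from the library, here is Newton's method in Python; the example function and starting guess are arbitrary illustrations, not anything from the film:

      def newton(f, f_prime, x0, tol=1e-10, max_iter=50):
          """Repeatedly follow the tangent line until the correction becomes negligible."""
          x = x0
          for _ in range(max_iter):
              step = f(x) / f_prime(x)   # tangent-line correction
              x -= step
              if abs(step) < tol:        # converged
                  return x
          raise RuntimeError("Newton's method did not converge")

      # Example: find the square root of 2 by solving x**2 - 2 = 0.
      print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.4142135623730951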

  • My favorite professor in my physics program would say, "You will never remember the equations I teach. But if you learn how the relationships are built and how to ask questions of those relationships, then I have done my job." He died a few years ago. I never was able to thank him for his lessons.

  • Being resourceful is an extremely valuable skill in the real world, and one that is basically shut out of the education world.

    Unlike my teachers, none of my bosses ever put me in an empty room with only a pencil and a sheet of paper to solve given problems.

  • > My favorite professor in engineering school always gave open book tests.

    My experience as a professor and a student is that this doesn't make any difference. Unless you can copy verbatim the solution to your problem from the book (which never happens), you better have a good understanding of the subject in order to solve problems in the allocated time. You're not going to acquire that knowledge during your test.

      > My experience as a professor and a student is that this doesn't make any difference.

      Exactly the point of his test methodology.

      What he asked of students on a test was to *apply* knowledge and information to *unique* problems and create a solution that did not exist in any book.

      I only brought 4 things to his tests: textbook, pencil, calculator, and a capable, motivated and determined brain. And his tests revealed the limits of what you could achieve with these items.

    • Isn't this an argument for why you should allow open book tests rather than why you shouldn't? It certainly removes some pressure to remember some obscure detail or formula.

    • Isn't that just an argument for always doing open book tests, then? Seems like there's no downside, and as already mentioned, it's closer to how one works in the real world.

  • During some of the earlier web service development days, one would find people at F500 companies skating by in low-to-mid-level jobs just cutting and pasting between spreadsheets; things that took them hours could be done in seconds, and with lower error rates, through a proper data interface.

    Very anecdotally, but I'd hazard that most of these types of low-hanging-fruit, low-value-add roles are much less common now, since they tended to be blockers for operational improvement. Six Sigma, Lean, and various flavors of Agile would often surface these low performers, and they either improved or got shown the door between 2005 and 2020.

    Not that everyone is 100% all the time, every day, but what we are left with is often people that are highly competent at not just their task list but at their job.

  • I had a like-minded professor in university, ironically in AI. Our big tests were all 3-day take-home assignments. The questions were open ended and required writing code, processing data and analyzing results.

    I think the problem with this is that it requires the professor to mentally fully engage when marking assignments and many educators do not have the capacity and/or desire to do so.

  • It depends what level the education is happening at. Think of students being taught how to write for loops but just copying and pasting AI output. That isn't learning. They aren't building the skills needed to debug when the AI gets something wrong with a more complicated loop, or to understand the trade-offs of loops vs. recursion.

    Finding the correct balance for a given class is hard. Generally, the lower the education level, the more it should be closed-book, because the more it is about being able to manually solve the smaller challenges that are already well solved, so you build up the skills needed to even tackle the larger challenges. The higher the education level, the more it is about being able to apply those skills to then tackle a problem, and one of those skills is being able to pull relevant formulas and such from the larger body of known formulas.

  • Agreed coming from the ops world also.

    I've had a frustrating experience the past few years trying to hire junior sysadmins because of a real lack of problem-solving skills once something goes wrong outside of the various playbooks they've memorized to follow.

    I don't need someone who can follow a pre-written playbook, I have ansible for that. I need someone that understands theory, regardless of specific implementations, and can problem solve effectively so they can handle unpredictable or novel issues.

    To put another way, I can teach a junior the specifics of bind9 named.conf, or the specifics of our own infrastructure, but I shouldn't be expected to teach them what DNS in general is and how it works.

    But the candidates we get are the opposite - they know specific tools, but lack more generalized theory and problem solving skills.

  • Same here! I always like to say that software engineering is 50% knowing the basics (how to write/read code, basic logic) and 50% having great research skills. So much of our time is spent finding documentation and understanding what it actually means, as opposed to just writing code.

  • You cannot teach "how to think". You have to give students thinking problems to actually train thinking. Those kinds of problems can increasingly be farmed off to AI, or at least certain subproblems in them.

    I mean, yes, to an extent you can teach how to think: critical thinking and logic are topics you can teach, and people who take their teaching to heart can become better thinkers. However, those topics cannot impart creativity. Critical thinking is called exactly that because it's about tools and skills for separating bad thinking from good thinking. The skill of generating good thinking probably cannot be taught; it can only be improved with problem-solving practice.

  • > In the real world of work, everyone has full access to all of the available data and information.

    In general, I also attend your church.

    However, as I preached in that church, I had two students over the years.

    * One was from an African country and told me that where he grew up, you could not "just look up data that might be relevant" because internet access was rare.

    * The other was an ex US Navy officer who was stationed on a nuclear sub. She and the rest of the crew had to practice situations where they were in an emergency and cut off from the rest of the world.

    Memorization of considerable amounts of data was important to both of them.

  • Each one of us has a mental toolbox that we use to solve problems. There are many more tools that we don’t have in our heads that we can look up if we know how.

    The bigger your mental toolbox the more effective you will be at solving the problems. Looking up a tool and learning just enough to use it JIT is much slower than using a handy tool that you already masterfully know how to use.

    This is as true for physical tools as for programming concepts like algorithms and data structures. In the worst case you won’t even know to look for a tool and will use whatever is handy, like the proverbial hammer.

  • People have been saying that since the advent of formal education. Turns out standardized education is really hard to pull off and most systems focus on making the average good enough.

    It's also hard to teach people "how to think" while at the same time teaching them practical skills - there are only so many hours in a day, and most education is set up as a way to get as many people as possible into shape for taking on jobs where "thinking" isn't really a positive trait, as it'd lead to constant restructuring and questioning of the status quo.

  • While there’s no reasonable way to disagree with the sentiment, I don’t think I’ve ever met anyone who can “think and decompose problems” who isn’t also widely read, and knows a lot of things.

    Forcing kids to sit and memorize facts isn’t suddenly going to make them a better thinker, but much of my process of being a better thinker is something akin to sitting around and memorizing facts. (With a healthy dose of interacting substantively and curiously with said facts)

  • > Everyone has full access to all of the available data and information

    Ahh, but this is part of the problem. Yes, they have access, but there is -so much- information, it punches through our context window. So we resort to executive summaries, or convince ourselves that something that's relevant is actually not.

    At least an LLM can take full view of the context in aggregate and peel out signal. There is value there, but no jobs are being replaced

    • >but no jobs are being replaced

      I agree that an LLM is a long way from replacing most any single job held by a human in isolation. However, what I feel is missed in this discussion is that it can significantly reduce the total manpower by making humans more efficient. For instance, the job of a team of 20 can now be done by 15 or maybe even 10 depending on the class of work. I for one believe this will have a significant impact on a large number of jobs.

      Not that I'm suggesting anything be "stopped". I find LLM's incredibly useful, and I'm excited about applying them to more and more of the mundane tasks that I'd rather not do in the first place, so I can spend more time solving more interesting problems.

  • Also, some problems don't have enough data for a solution. I had a professor that gave tests where the answer was sometimes "not solvable." Taking these tests was like sweating bullets because you were not sure if you're just too dumb to solve the problem, or there was not enough data to solve the problem. Good times!

  • One of my favorite things about Feynman interviews/lectures is often his responses are about how to think. Sometimes physicists ask questions in his lectures and his answer has little to do with the physics, but how they're thinking about it. I like thinking about thinking, so Feynman is soothing.

  • I agree with the overall message, but I will say that there is still a great deal of value in memorisation. Memorising things gives you more internal tools to think in broader chunks, so you can solve more complicated problems.

    (I do mean memorisation fairly broadly, it doesn't have to mean reciting a meaningless list of items.)

  • Agree, hopefully this insight / attitude will become more and more prevalent.

    For anyone looking for resources, may we recommend:

    * The Art of Doing Science and Engineering by Richard Hamming (lectures are available on YouTube as well)

    * Measurement by Paul Lockhart (for teaching mindset)

  • Talk is cheap. Good educators cost money, and America famously underpays (and under-appreciates) its teachers. Does he also support increasing taxes on the wealthy?

  • Even more broadly, it's "critical thinking," which definitely seems to be on the decline (though I'm sure old people have said this for millennia)

  • Have there been studies on the abilities of different students to memorize information? I feel this is under-studied in the world of memorizing for exams.

  • It is tough though, I'd like to think I learnt how to think analytically and critically. But thinking is hard, and often times I catch myself trying to outsource my thinking almost subconsciously. I'll read an article on HN and think "Let's go to the comment section and see what the opinions to choose from are", or one of the first instincts after encountering a problem is googling and now asking an LLM.

    Most of us are also old enough to have had a chance to develop taste in code and writing. Many of the young generation lack the experience to distinguish good writing from LLM drivel.

  • Wanted to chime in on the educational system. In the West, we have the 'banking' model, which treats a student as a bank account and knowledge as currency, hence the "dump more info into people to make them sm0rt" attitude.

    In developing areas, they commonly implement more modern models, since everything is newer and they are free to adopt newer approaches.

    Those newer models focus more on exactly this: teaching a person how to go through the process of finding solutions, rather than 'knowing a lot to enable the process of thinking'.

    Not saying which is better or worse, but reading this comment and the article reminds me of it.

    A lot of people I see know tons of interesting things, but anything outside of their knowledge is a complete mystery.

    All the while, people from developing areas learn to solve issues. A lot of individuals from there also get out of poverty and do really well for themselves.

    Of course, this is a generalization and doesn't hold up in all cases, but I can't help thinking about it.

    A lot of my colleagues don't know how to solve problems simply because they don't RTFM. They rely on knowledge from their education, which is already outdated before they even sign up. I try to teach them to RTFM; it seems hopeless. They look down on me because I have no papers, but if shit hits the fan, they come to me to solve the problem.

    A wise guy I met once said (likely not his words): there are two types of people, those who think in problems and those who think in solutions.

    I'd relate that to education, not to prebaked human properties.

So to summarize:

My boss said we were gonna fire a bunch of people “because AI” as part of some fluff PR to pretend we were actually leaders in AI. We tried that a bit, it was a total mess and we have no clue what we’re doing, I’ve been sent out to walk back our comments.

  • Boss->VP: "We need to fire people because AI"

    VP->Public: "We'll replace all our engineers with AI in two years"

    Boss->VP: "I mean we need to fire VPs because AI"

    VP->Public: "Replacing people with AI is stupid"

  • They are still not hiring junior engineers

    • Well, they're just trying to reduce headcount overall to get AWS's expenses in better shape and work through some bloat. The "we're doing layoffs because of AI" story wasn't sticking, though, so it looks like now they're backtracking on that storyline.

Most people don't notice, but there has been an inflation in headcounts over the years. This happened around the time the microservices architecture trend took over.

All of a sudden, to ensure better support and separation of concerns, people needed a team with a manager for each service. If this hadn't been the case, the industry as a whole could likely work with 40-50% fewer people eventually. That's because at any given point in time, even with a large monolithic codebase, only 10-20% of the code base is in active evolution; what that means in the microservices world is that an equivalent share of teams is sitting idle.

When I started out, huge C++ and Java code bases were pretty much the norm, and that was also one of the reasons why things were hard and the barrier to entry high. In this microservices world, things are small enough that any small group of even low-productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.

To me it's these kinds of places that are in real trouble. There is not enough work to justify keeping dozens or even hundreds of teams, their management and their hierarchies, all working while quite literally doing nothing.

  • It's almost an everyday song that I hear, that big companies are full of hundreds or thousands of employees doing nothing.

    I think sometimes the definition of work gets narrowed to a point so infinitesimal that everyone but the speaker is just a lazy nobody.

    There was an excellent article on here about working at enterprise scale. My experience has been similar. You get to do work that feels really real, almost like school assignments with instant feedback and obvious rewards when you're at a small company. When I worked at big companies it all felt like bullshit until I screwed it up and a senator was interested in "Learning more" (for example).

    The last few 9s are awful hard to chase down and a lot of the steps of handling edge case failures or features are extremely manual.

  • > In this microservices world, things are small enough that any small group of even low-productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.

    You're committing the classic fallacy around microservices here. The services themselves are simpler. The system as a whole is not.

    When you take a classic monolith and split it up into microservices that are individually simple, the complexity does not go away; it simply moves into the higher abstractions. The complexity now lives in how the microservices interact. (A toy sketch of this is at the end of this comment.)

    In reality, the barrier to entry on monoliths wasn't that high either. You could get "low productivity employees" (I'd recommend you just call them "novices" or "juniors") to do the work, it'd just be best served with tomato sauce rather than deployed to production.

    The same applies to microservices. You can have inexperienced devs build out individual microservices, but to stitch them together well is hard, arguably harder than ye-olde-monolith now that Java and more recent languages have good module systems.
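    A toy sketch of where that complexity ends up, in Python (the service name, URL, and retry policy below are made up for illustration): the lookup that was a one-line in-process call in the monolith now has to deal with the network, serialization, timeouts, and contract drift between services.

      import json
      import urllib.error
      import urllib.request

      # Monolith: "what is this customer's credit limit?" is a plain function call.
      def credit_limit_monolith(customers, customer_id):
          return customers[customer_id]["credit_limit"]

      # Microservices: the same question now crosses a network boundary.
      def credit_limit_service(customer_id, base_url="http://customers.internal"):  # hypothetical service
          url = f"{base_url}/v1/customers/{customer_id}"
          for attempt in range(3):                        # naive retry policy
              try:
                  with urllib.request.urlopen(url, timeout=2) as resp:
                      return json.load(resp)["credit_limit"]   # hoping the contract hasn't drifted
              except (urllib.error.URLError, TimeoutError):
                  if attempt == 2:
                      raise                               # the caller now needs its own fallback story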

  • There are two freight trains currently smashing into each other:

    1.) Elon fired 80% of Twitter, and 3 years later it still hasn't collapsed or fallen into technical calamity. Every tech board/CEO took note of that.

    2.) Every kid and their sister going to college who wants a middle-class life with generous working conditions is targeting tech. Every teenage nerd saw those overemployed guys making $600k from their couch during the pandemic.

    • On the other hand, while yes it's still running, Twitter is mostly not releasing new features and has completely devolved into the worst place on the internet. Not to mention most accounts now actually are bots, like Elon claimed they were 3 years ago.

      3 replies →

    • I hope the tech boards and CEOs don't miss the not-very-subtle point that Twitter has very quickly doubled in size again in the 2 years since the big layoff and is still growing, and that they had to scramble to fix some notable mistakes they made when firing that many people. 80% is already a hugely misleading marketing number.

      Edit: huh, what’s with the downvote, is this wrong? Did I overstate it? Here’s the data: https://www.demandsage.com/twitter-employees/

      2 replies →

  • Well yeah... computers are really powerful. You don't need Docker Swarm or any other newfangled thing. Just Perl, Apache, and MySQL, and you can ship to tens of millions of users before you hit scaling limits.

  • > If this hadn't been the case, the industry as a whole could likely work with 40-50% fewer people eventually. That's because at any given point in time, even with a large monolithic codebase, only 10-20% of the code base is in active evolution; what that means in the microservices world is that an equivalent share of teams is sitting idle.

    I think it depends on the industry. In safety-critical systems, you need to be testing, writing documentation, producing architectural artifacts, meeting with customers, etc.

    There's not that much idle time. Unless you mean idle time from actually writing code, and that's not always a full-time job.

  • I think most people misunderstand the relationship between business logic, architecture and headcount.

    Big businesses don’t inherently require the complexity of architecture they have. There is always a path-dependent evolution and vestigial complexity proportional to how large and fast they grew.

    The real purpose of large scale architecture is to scale teams much moreso than business logic. But why does headcount grow? Is it because domains require it? Sure that’s what ambitious middle managers will say, but the real reason is you have money to invest in growth (whether from revenue or from a VC). For any complex architecture there is usually a dramatically simpler one that could still move the essential bits around, it just might not support the same number of engineers delineated into different teams with narrower responsibilities.

    The general headcount growth and architecture trajectory is therefore governed by business success. When we're growing we hire, and we create complex architecture to chase growth in as many directions as possible. Eventually, when growth slows, we have a system that is so complex it requires a lot of people just to understand and maintain. Even if the headcount is no longer justified, those with power in the human structure will bend over backwards to justify themselves. This is where the playbook changes and a private equity (or Elon) mentality is applied to just ruthlessly cut and force the rest of the people to figure out how to keep the lights on.

    I consider advances in AI and productivity orthogonal to all this. It will affect how people do their jobs, what is possible, and the economics of that activity, but the fundamental dynamics of scale and architectural complexity will remain. They’ll still hire more people to grow and look for ways to apply them.

  • It would be sad if you are correct. Your company might not be able to justify keeping dozens and hundreds of teams employed, but what happens when other companies can't justify paying dozens and hundreds of teams who are the customers buying your product? Those who gleefully downsize might well deserve the market erosion they cause.

  • This is blatantly incorrect. Before microservices became the norm you still had a lot of teams and hiring, but the teams would be working with the same code base and deployment pipeline. Every company that became successful and needed to scale invented their own bespoke way to do this; microservices just made it a pattern that could be repeatedly applied.

  • I think that the starting point is that productivity per developer has been declining for a while, especially at large companies. And this leads to the "bloated" headcount.

    The question is why. You mention microservices. I'm not convinced.

    Many think it is "horizontals". Possibly; these taxes do add up, it is true.

    Perhaps it is cultural? Perhaps it has to do with the workforce in some manner. I don't know and AFAIK it has not been rigorously studied.

Looks like the AWS CEO has changed religion. A year back, he was aboard the AI train, saying AI would do all coding in 2 years. [1]

Finally, the C-suite is getting it.

[1] https://news.ycombinator.com/item?id=41462545

  • He didn't actually say that. He said it's possible that within 2 years developers won't be writing much code, but he goes on to say:

    "It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going to try to go build, because that's going to be more and more of what the work is as opposed to sitting down and actually writing code...."

    https://www.businessinsider.com/aws-ceo-developers-stop-codi...

    If you read the full remarks they're consistent with what he says here. He says "writing code" may be a skill that's less useful, which is why it's important to hire junior devs and teach them how to learn so they learn the skills that are useful.

    • He is talking his book. Management thinks it adds value in the non-coding aspects of the product, such as figuring out what customers need. I suggest management stay in its lane and not make claims about how coding needs to be done; leave that to the craftsmen actually coding.

      2 replies →

  • Theoretically, a large part of Amazon's worth is the skill of its workforce.

    Some subset of the population likes to pretend their workforce is a cost that provides less than zero value or utility, and all the value and utility comes from shareholders.

    But if this isn't true, and collective skill does have value, then saying anyone can have that skill with AI puts at least some headwind on your share price - which is all they care about.

    Does that offset a potential tailwind from slightly higher margins?

    I don't think any established company should be cheerleading that anyone can easily upset their monopoly with a couple of carefully crafted prompts.

    It was always kind of strange to me, and seemed as though they were telling everyone, our moat is gone, and that is good.

    If you really believed anyone could do anything with AI, then the risk of P/Es collapsing would be high, which would be bad for the capital class. Now you would have to constantly and correctly guess what the next best thing is to keep your ROI, instead of just parking it in safe havens - like FAANG.

    • Amazon doesn't really work this way.

      Bedrock/Q is a great example of how Amazon works. If we throw $XXX and YYY SDEs at the problem, we should be able to build GitHub Copilot, GPT-3, OpenRouter and Cursor ourselves instead of trying to competitively acquire and attract talent. The fact that CodeWhisperer, Q and Titan barely get spoken about on HN or Twitter tells you how successful this is.

      But if you have that perspective then the equation is simple. If S3 can make 5 XXL features per year with 20 SDEs then if we adopt “Agentic AI” we should be able to build 10 XXL features with 10 SDEs.

      Little care is given to organizational knowledge, experience, vision, etc.; in their mind, that is the value of leadership, not of ICs.

      1 reply →

    • Amazon would probably say its worth is the machinery around workers that allows it to plug in arbitrary numbers of interchangeable people and have them be productive.

  • It’s a requirement for the C-suite to always be aware of which way the wind is currently blowing.

    • It’s one thing to be aware of which way the wind is blowing, and quite another to let yourself be blown by the wind.

      Not to say that‘s what the AWS CEO is doing—maybe it is, maybe it isn’t, I haven’t checked—I’m just commenting on the general idea.

  • That's not necessarily inconsistent though - if you need people to guide or instruct the autonomy, then you need a pipeline of people including juniors to do that. Big companies worry about the pipeline, small companies can take that subsidy and only hire senior+, no interns, etc., if they want.

    • There is no pipeline though. The average tenure of a junior developer even at AWS is 3 years. Everyone knows that you make less money getting promoted to an L5 (mid) than getting hired in as one. Salary compression is real. The best play is always to jump ship after 3 years. Even if you like Amazon, “boomeranging” is still the right play.

      11 replies →

    • > That's not necessarily inconsistent though

      Pasting the quote for reference:

      > Amazon Web Services CEO Matt Garman claims that in 2 years coding by humans won't really be a thing, and it will all be done by networks of AI's who are far smarter, cheaper, and more reliable than human coders.

      Unless this guy speaks exclusively in riddles, this seems incredibly inconsistent.

  • To be fair, the two statements are not inconsistent. He can continue to hire junior devs, but their job might involve much less actual coding.

  • There's definitely a vibe shift underway. C-Suites are seeing that AI as a drop-in replacement for engineers is a lot farther off than initial hype suggested. They know that they'll need to attract good engineers if they want to stay competitive and that it's probably a bad idea to scare off your staff with saying that they'll be made irrelevant.

  • I'm not sure those are mutually exclusive? Modern coders don't touch Assembly or deal with memory directly anymore. It's entirely possible that AI leads to a world where typing code by hand is dramatically reduced too (it already has in a few domains and company sizes)

  • >Finally, the c-suite is getting it.

    It can only mean one thing: the music is about to stop.

  • He was right, though. AI is doing all the coding. That doesn’t mean you fire junior staff; both can be true at once: you need juniors, and pretty much all code these days is AI-generated.

  • Man how would we live without these folks and their sensational acumen. Millions well spent!

  • He should face consequences for his cargo-cult thinking in the first place. The C-Suite isn't "getting" anything. They are simply bending like reeds in today's winds.

Might want to clarify things with your boss who says otherwise [1]? I do wish journalists would stop quoting these people unedited. No one knows what will actually happen.

[1]: https://www.shrm.org/topics-tools/news/technology/ai-will-sh...

  • I'm not sure those statements are in conflict with each other.

    “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.” - Matt Garman

    "We will need fewer people doing some of the jobs that are being done today” - Amazon CEO Andy Jassy

    Maybe they differ in degree but not in sentiment.

  • > quoting these people unedited

    If you're quoting something, the only ethical thing to do is as verbatim as possible and with a sufficient amount of context. Speeches should not be cleaned up to what you think they should have said.

    Now, the question of who you go to for quotes, on the other hand .. that's how issues are really pushed around the frame.

    • By unedited I mean taking the message literally and quoting it to support a narrative that isn’t clear or consistent (even internally among Amazon leadership).

  • I very much believe that anything AWS says on the corporate level is bullshit.

    That’s from the perspective of a former employee. I knew that going in, though. I was 46 at the time, AWS was my 8th job, and knowing AWS’s reputation from second- and third-hand information, I didn’t even entertain an opportunity that would have forced me to relocate.

    I interviewed for a “field by design” role that was “permanently remote” [sic].

    But even those positions had an RTO mandate after I already left.

    • There's what AWS leadership says and then there's what actually sticks.

      There's an endless series of one pagers with this idea or that idea, but from what I witnessed first hand, the ones that stuck were the ones that made money.

      Jassy was a decent guy when I was there, but that was a decade ago. A CEO is a PR machine more than anything else, and the AI hype train has been so strong that if you do anything other than saying AI is the truth, the light and the way, you lose market share to competitors.

      AI, much like automation in general, does allow fewer people to do more, but in my experience, customer desires expand to fill a vacuum and if fewer people can do more, they'll want more to the point that they'll keep on hiring more and more people.

I would bet that anyone who's worked with these models extensively would agree.

I'll never forget the sama AGI posts before o3 launched and the subsequent doomer posting from techies. Feels so stupid in hindsight.

  • The AGI doomerism was a marketing strategy. Now everyone gets what AI is, and we're just watching a new iteration on search: an AI that has read all the docs.

  • It was always stupid, but no one is immune to hype, just different types of hype.

    Especially with the amount of money that was put into just astroturfing the technology as more than it is.

  • ChatGPT is better than any junior developer I’ve ever worked with. Junior devs have always been a net negative for the first year or so.

    As a person responsible for delivering projects, I’ve never thought “it sure would be nice if I had a few junior devs”. Why would I, when I can poach an underpaid mid-level developer for 20% more?

    • I've never had a junior dev be a "net negative." Maybe you're just not supervising or mentoring them at all? The first thing I tell all new hires under me is that their job is to solve more problems than they create, and so far it's worked out.

      9 replies →

    • I've never worked with a junior developer that was incapable of learning or following instructions unless I formatted them in a specific way.

      2 replies →

    • The most impressive folks I’ve worked with are almost always straight out of school. It's before they've developed confidence about their skills and realized they can be more successful by starting their own business. The “promoted three times in just 5 years” sort of good.

      1 reply →

> Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.

You really want to believe, maybe even need to believe, that anyone who comes up with this idea in their head has never written a single line of code in their life.

It is on its face absurd. And yet I don't doubt for a second that Garman et al. have to fend off legions of hacks who froth at the mouth over this kind of thing.

  • Time to apply the best analogy I've ever heard.

    > "Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs." -- Bill Gates

    Do we reward the employee who has added the most weight? Do we celebrate when the AI has added a lot of weight?

    At first, it seems like, no, we shouldn't, but actually, it depends. If a person or AI is adding a lot of weight, but it is really important weight, like the engines or the main structure of the plane, then yeah, even though it adds a lot of weight, it's still doing genuinely impressive work. A heavy airplane is more impressive than a light weight one (usually).

    • I just can’t help myself when airplanes come up in discussion.

      I completely understand your analogy, and you are right. However, just to nitpick: it is actually super important to have the weight on the airplane in the right place. You have to make sure that your aeroplane does not become tail-heavy, or a stall is not recoverable. Also, a heavier aeroplane, within its gross weight, is actually safer, as the manoeuvring speed increases with weight.

      10 replies →

    • Progress on airplanes is often tracked by # of engineering drawings released, which means that 1000s of little clips, brackets, fittings, etc. can sometimes misrepresent the amount of engineering work that has taken place compared to preparing a giant monolithic bulkhead or spar for release. I have actually proposed measuring progress by part weight instead of count to my PMs for this reason

    • > the best analogy I've ever heard.

      It’s an analogy that gets the job done and is targeted at non-tech managers.

      It’s not perfect. Dead code has no “weight” unless you’re in a heavily storage-constrained environment. But 10,000 unnecessary rivets have an effect on the airplane everywhere, all the time.

      7 replies →

  • This reminds me of a piece on folklore.org by Andy Hertzfeld[0], regarding Bill Atkinson. A "KPI" was introduced at Apple in which engineers were required to report how many lines of code they had written over the week. Bill (allegedly) claimed "-2000" (a completely, astonishingly negative report), and supposedly the managers reconsidered the validity of the "KPI" and stopped using it.

    I don't know how true this is in fact, but I do know how true this is in my work - you cannot apply some arbitrary "make the number bigger" goal to everything and expect it to improve anything. It feels a bit weird seeing "write more lines of code" becoming a key metric again. It never worked, and is damn-near provably never going to work. The value of source code is not in any way tied to its quantity, but value still proves hard to quantify, 40 years later.

    0. https://www.folklore.org/Negative_2000_Lines_Of_Code.html

  • Given the way that a lot of AI coding actually works, it’s like asking what percent of code was written by hitting tab to autocomplete (intellisense) or what percent of a document benefited from spellcheck.

    • While most of us know that next-word guessing is how it works in reality…

      That sentiment ignores the magic of how well this works. There are mind-blowing moments using AI coding; to pretend that it’s “just autocorrect and tab complete” is just as deceiving as “you can vibe-code complete programs”.

  • All that said, I'm very keen on companies telling me how much of their codebase was written by AI.

    I just won't use that information in quite the excitable, optimistic way they offer it.

    • I want to have the model re-write patent applications, and if any portion of your patent filing was replicated by it your patent is denied as obvious and derivative.

    • "...just raised a $20M Series B and are looking to expand the team and products offered. We are fully bought-in to generative AI — over 40% of our codebase is built and maintained by AI, and we expect this number to continue to grow as the tech evolves and the space matures."

      "What does your availability over the next couple of weeks look like to chat about this opportunity?"

      1 reply →

  • Something I wonder about the percent-of-code stats: I remember 5-10 years ago there was a series of articles about Google generating a lot of their code programmatically. I wonder if they just adapted their code gen to AI.

    I bet Google has a lot of tools to, say, convert a library from one language to another or generate a library based on an API spec. The 30% of code these LLMs are supposedly writing is probably in this camp, not net-new novel features.

  • When I see these stats, I think of all the ways "percentage of code" could be defined.

    I ask an AI 4 times to write a method for me. After it keeps failing, I just write it myself. AI wrote 80% of the code!
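
    To make the arithmetic concrete, here is a toy sketch of how two equally defensible counting rules give wildly different answers; the numbers and rules are made up for illustration:

      # Hypothetical task: the AI was asked 4 times for a ~50-line method, every
      # attempt was thrown away, and the human wrote the 50 lines that shipped.
      ai_lines_generated = 4 * 50
      ai_lines_shipped = 0
      human_lines_shipped = 50

      # Rule 1: count everything the AI emitted, shipped or not.
      by_generated = ai_lines_generated / (ai_lines_generated + human_lines_shipped)  # 0.8

      # Rule 2: count only the lines that survived into the final commit.
      by_shipped = ai_lines_shipped / (ai_lines_shipped + human_lines_shipped)  # 0.0

      print(f"AI wrote {by_generated:.0%} of the code... or {by_shipped:.0%}. Take your pick.")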

In academia the research pipeline is this:

Undergraduate -> Graduate Student -> Post-doc -> Tenure/Senior

There are exceptions, such as people getting tenure without a post-doc or finishing their undergraduate degree in one or two years, but no one expects that we can skip the first two stages entirely and still end up with senior researchers.

The same idea applies anywhere: if you don't have juniors, you don't get seniors, so you'd better prepare your bot to do everything.

As always, the truth is somewhere in the middle. AI is not going to replace everyone tomorrow, but I also don't think we can ignore productivity improvements from AI. It's not going to replace engineers completely now or in the near future, but AI will probably reduce the number of engineers needed to solve a problem.

I do not agree. It was not even worth it without LLMs. Juniors will always take a LOT of time from seniors, and when the junior becomes good enough, they will find another job, and the senior will be stuck in this loop.

Junior + LLM is even worse: they become prompt engineers.

I'm a technical co-founder rapidly building a software product, and I've been coding since 2006. We have every incentive to have AI just build our product. But it can't. I keep trying to get it to... Oh, it tries, but the code it writes is often overly complex and overly verbose. I started out amazed at the way it could solve problems, but that's because I gave it small, bounded, well-defined problems. As expectations for agentic coding rose, I gave it more abstract problems and it quickly hit the ceiling. As was said, the engineering task is identifying the problem and decomposing it. I'd love to hear from someone who's had more success with agentic coding. So far I've tried Copilot, Windsurf, and Alex Sidebar for Xcode projects. The most success I have is via a direct question with details to Gemini in the browser, usually a variant of "write a function to do X".

  • > As was said, the engineering task is identifying the problem and decomposing it.

    In my experience if you do this and break the problem down into small pieces, the AI can implement the pieces for you.

    It can save a lot of time typing and googling for docs.

    That said, once the result exceeds a certain level of complexity, you can't really ask it to implement changes to existing code anymore, since it stops understanding it.

    At which point you now have to do it yourself, but you know the codebase less well than if you'd hand written it.

    So, my upshot is so far that it works great for small projects and for prototyping, but the gain after a certain level of complexity is probably quite small.

    But then, I've also found quite a bit of value in using it as a code search engine and to answer questions about the code, so maybe that's where the benefit comes from, if nothing else.

    • > At which point you now have to do it yourself, but you know the codebase less well than if you'd hand written it.

      Appreciate you saying this because it is my biggest gripe in these conversations. Even if it makes me faster I now have to put time into reading the code multiple times because I have to internalize it.

      Since the code I merge into production "is still my responsibility" as the HN comments go, then I need to really read and think more deeply about what AI wrote as opposed to reading a teammate's PR code. In my case that is slower than the 20% speedup I get by applying AI to problems.

      I'm sure I can get even more speed if I improve prompts, when I use the AI, agentic vs non-agentic, etc. but I just don't think the ceiling is high enough yet. Plus I am someone who seems more prone to AI making me lazier than others so I just need to schedule when I use it and make that time as minimal as possible.

Are we trying to guilt-trip corporations into doing the socially responsible thing regarding young workers' skill acquisition?

Haven't we learned that it almost always ends up in hollow PR and marketing theater?

Basically, the solution to this is extending education so that people entering the workforce are already at a senior level. Of course, this can't be financed by the students, because their careers get shortened by the longer education. So we need higher taxes on the entities that reap the new spoils, namely those corporations that can now pass on hiring junior employees.

Makes sense. Instead of replacing junior staff, they should be trained to use AI to get more done in less time. In the next 2-3 years they will be experts doing good work with high productivity.

Two things will hurt us in the long run: working from home and AI. I'm generally in favour of both, but they hurt newbies, who end up not spending enough face-to-face time with seniors to learn on the job.

And AI will hurt their own development by taking over the tasks they would normally cut their teeth on.

We'll have to find newer ways of helping the younger generation get in the door.

  • A weekly one-hour call for pair programming or exploring an ongoing issue or technical idea can be enough to replace face-to-face time with seniors. This has been working great for us at a multi-billion-dollar, profitable, fully remote public company.

  • I would argue that just being in the office or not using AI doesn't guarantee any better learning for younger generations. Without proper guidance, a junior will still struggle regardless of their location or AI assistance.

    The challenge now is for companies, managers, and mentors to adapt to more remote and AI-assisted learning. If a junior can be taught that it's okay to reach out (and be given ample opportunities to do so), as well as how to productively use AI to explain concepts they may feel too scared to ask about because they're "basics", then I don't see why this would hurt in the long run.

Junior staff will be necessary but you'll have to defend them from the bean-counters.

You need people who can validate LLM-generated code. It takes people with testing and architecture expertise to do so. You only get those things by having humans get expertise through experience.

>teach “how do you think and how do you decompose problems”

That's rich coming from AWS!

I think he meant "how do you think about adding unnecessary complexity to problems such that it can enable the maximum amount of meetings, design docs and promo packages for years to come"!

A lot of companies that have stopped hiring junior employees are going to be really hurting in a couple of years, once all of their seniors have left and they have no replacements trained and ready to go.

If AI is so great and has PhD-level skills (Musk), then logic says you should be replacing all of your _senior_ developers. That is not the conclusion they reached, which implies that the coding ability is not that hot. Q.E.D.

Finally someone from a top position said this. After all the trash the CEOs have been spewing and sensationalizing every AI improvement, for a change, a person in a non-engineering role speaks the truth.

Unfortunately, this is the kind of view that is at once completely correct and anathema to private equity because they can squeeze a next quarter return by firing a chunk of the labor force.

Yesterday, I was asked to scrape data from a website. My friend used ChatGPT to scrape the data but didn't succeed even after spending 3+ hours. I looked at the website's code, understood it with my web knowledge, and did some research with an LLM. Then I described to the LLM how to scrape the data, and it took 30 minutes overall. The LLM can't come up with the best approach on its own, but you can come up with it using the LLM. It's always the same: at the end of the day you need someone who can really think.

  • LLMs can do anything, but the decision tree for what you can do in life is almost infinite. LLMs still need a coherent designer to make progress towards a goal.

    • Or you could've used XPath and bs4, been done in an hour or two, and had more understandable code.

      • It is not that easy: there is lazy loading on the page, triggered by scrolling specific sections. You need to find a clever way; there is no way to do it with bs4 alone, and it's tough even with Selenium.
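
        For what it's worth, here is a minimal sketch of the usual workaround for scroll-triggered lazy loading, assuming Selenium plus bs4; the URL and the ".results" container selector are hypothetical, not taken from the actual site:

          import time

          from bs4 import BeautifulSoup
          from selenium import webdriver
          from selenium.webdriver.common.by import By

          driver = webdriver.Chrome()
          driver.get("https://example.com/listings")  # hypothetical target page

          # Scroll the lazy-loading section itself (not the window) until its height stops growing.
          container = driver.find_element(By.CSS_SELECTOR, ".results")  # hypothetical selector
          prev_height = -1
          while True:
              driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight;", container)
              time.sleep(1.5)  # crude wait for the scroll-triggered content to render
              height = driver.execute_script("return arguments[0].scrollHeight;", container)
              if height == prev_height:
                  break
              prev_height = height

          # Once everything is in the DOM, bs4 is fine for the actual extraction.
          soup = BeautifulSoup(driver.page_source, "html.parser")
          rows = [r.get_text(strip=True) for r in soup.select(".results .row")]
          driver.quit()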

Bravo... finally, a voice of reason.

As someone who works in AI, any CEO who says that AI is going to replace junior workers has no f*cking clue what they are talking about.

The current generation of AI agents is great at writing a block of code, similar to writing a great paragraph. Know your tools.

Agree. AI is, so far, almost as good as StackOverflow, except it lies confidently and generates questionable code.

AWS CEO says what he has to say to push his own agenda and obviously to align himself with the most currently popular view.

AWS is a very infrastructure-intensive product with extremely tight SLAs and no UI, so this makes a lot of sense.

The point is that nobody has figured out how much AI can replace humans. There is so much hype out there, with every tech celebrity sharing their opinions without the responsibility of owning them. We have to wait and see, and we can change course when we know the reality. Until then, do what we know well.

My respect for people that take this approach is very high. This is the right way to approach integration of technology.

Can SOME people's jobs be replaced by AI? Maybe on paper. But there are tons of tradeoffs to STARTING with that approach and assuming fidelity of outcome.

Perhaps I'm too cynical about messages coming out of FAANG, but I have a feeling they are saying these things to placate the rising anger over mass layoffs, H-1B abuse, and offshoring. I hope I'm wrong.

Did a double take at Berman being described as an AI investor. He does invest, but a more appropriate description would be "AI YouTuber".

I don't mean that as a negative, he's doing great work explaining AI to (dev) masses!

It is too late; it is already happening. The evolution of the tech field is toward people being more experienced, not toward AI. But AI will be there for questions and easy one-liners, properly formalized documentation, even TL;DRs.

The cost of not hiring and training juniors is trying to retain your seniors while continuously resetting expectations with them about how they are the only human accountable for more and more stuff.

Of course it is... You should replace your senior staff with AI ;) Juniors will just prompt it then....

Agreed.

LLMs are actually -the worst- at doing very specific repetitive things. It'd be much more appropriate for one to replace the CEO (the generalist) rather than junior staff.

I heard from several sources that AWS has a mandate to put GenAI in everything and force everyone to use it so... yeah.

Maybe the source of "AI replacing junior staff" is a statement the AWS CEO made during a private meeting with a client.

It's not often that an article makes me want to buy shares. Like... never. This article is spot on.

Sometimes, the C suite is there because they're actually good at their jobs.

Junior engineers aren't hired to get tons of work done; they're hired to learn, grow, and eventually become senior engineers. AI can't replace that, but it can help it happen faster (in theory, anyway).

No one's getting replaced, but you may not hire that new person who otherwise would have been needed. Five years ago, you would have hired a junior to crank out UI components or well-specced CRUD endpoints for some big new feature initiative. Now you probably won't.

  • > well specc'd CRUD endpoints

    I'm really tired of this trope. I've spent my whole career on "boring CRUD", and the number of relational-DB-backed apps I've seen written by devs who've never heard of isolation levels (myself included, for a time) is concerning.

    Coincidentally, as soon as these apps see any scale, issues pop up.
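
    To make that concrete, here is a hedged sketch of the classic lost-update bug that "boring CRUD" hides, assuming Postgres via psycopg2 and a hypothetical accounts(id, balance) table; all names are illustrative:

      import psycopg2

      conn = psycopg2.connect("dbname=app")  # hypothetical DSN

      def debit_naive(account_id, amount):
          # Under the default READ COMMITTED isolation level, two concurrent calls
          # can both read 100, both write 100 - amount, and one debit silently disappears.
          with conn, conn.cursor() as cur:
              cur.execute("SELECT balance FROM accounts WHERE id = %s", (account_id,))
              balance = cur.fetchone()[0]
              cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                          (balance - amount, account_id))

      def debit_locked(account_id, amount):
          # SELECT ... FOR UPDATE takes a row lock, so concurrent writers serialize
          # on the row instead of racing on a stale read.
          with conn, conn.cursor() as cur:
              cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (account_id,))
              balance = cur.fetchone()[0]
              cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                          (balance - amount, account_id))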

  • On the other hand, that extra money can be used to expand the business in other ways. Plus, most kids coming out of college these days are going to be experts at getting jobs done with AI (although they will need a lot of training in writing actually secure and maintainable code).

    • Even the highest ranking engineers should be experts. I don’t understand why there’s this focus on juniors as the people who know AI best.

      Using AI isn’t rocket science. Like you’re talking about using AI as if typing a prompt in English is some kind of hard to learn skill. Do you know English? Check. Can you give instructions? Check. Can you clarify instructions? Check.

      1 reply →

    • > plus most kids coming out of college these days are going to be experts in getting jobs done with AI

      “You won’t lose your job to AI, you’ll lose it to someone who uses AI better than you do”

  • Sure. First line tech support as well. In many situations customers will get vastly superior service if AI agent answers the call.

    At least in my personal case, struggling with renewal at Virgin Broadband, multiple humans wasted probably an hour of everyone's time overall on the phone bouncing me around departments, unable to comprehend my request, trying to upsell and pitch irrelevant services, applying contextually inappropriate talking scripts while never approaching what I was asking them in the first place. Giving up on those brainless meat bags and engaging with their chat bot, I was able to resolve what I needed in 10 minutes.

    • It's strange you have to write this.

      In India most of the banks now have apps that do nearly all the banking you can do by visiting a branch personally. To that extent this future is already here.

      When I had to close my loan and had to visit a branch a few times, the manager told me that a significant portion of his people's time now goes into actual banking, which according to him means selling products (fixed deposits, insurance, credit cards) rather than customer support (which the bank thinks is not its job and only does because there is currently no alternative).

    • > Sure. First line tech support as well. In many situations customers will get vastly superior service if AI agent answers the call.

      In IT, if AI would at a minimum triage the problem intelligently (and not sound like a bot while doing it), that would save my more expensive engineers a lot of time.

    • This is mostly because CS folks are given such sales and retention targets; and while I’ve never encountered a helpful support bot even in the age of LLMs, I presume in your case the company management was just happy to have a support bot talking to people without said metrics.

No it isn't, he's lying. Sorry guys.

Claude Code is better than a junior programmer by a lot, these guys think it only gets better from there, and they have people with decades in the industry to burn through before they have to worry about retraining a new crop.

> “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”

Instead you should replace senior staff who make way more.

That was not the kicker... it was the ending comment:

Better learn how to learn, as we are not training (or is that paying?) you to learn...

LLM defenders with the "yOu cAn't cRiTiCiZe iT WiThOuT MeNtIoNiNg tHe mOdEl aNd vErSiOn, It mUsT Be a lAnGuAgE LiMiTaItOn" crack me up. I used code generation out of curiosity once, for a very simple script, and it fucked it up so badly I was laughing.

Please tell me which software you are building with AI so I can avoid it.

I mean, I used Copilot / JetBrains etc. to work on my code base, but for large-scale changes it did so much damage that it took me days to fix and actually slowed me down. These systems are just like juniors in their capabilities; actually worse, because junior developers are still people, able to think and interact with you coherently over days or weeks or months, and these models aren't even at that level, I think.

Rather than AI functioning as many junior coders to make a senior programmer more efficient, having AI function as a senior programmer for lots of junior programmers, one that helps them learn and limits the interruptions for human senior coders, makes so much more sense.

The bubble has, if not burst, at least gotten to the stage where it’s bulging out uncomfortably and losing cohesion.

It's refreshing to finally see CEOs and other business leaders coming around to what experienced, skeptical engineers have been saying for this entire hype cycle.

I assumed it would happen at some point, but I am relieved that the change in sentiment has started before the bubble pops - maybe this will lessen the economic impact.

  • Yeah, the whole AI thing has very unpleasant similarities to the dot com bubble that burst to the massive detriment of the careers of the people that were working back then.

    • The parallels in how industry members talk about it is similar as well. No one denies that the internet boom was important and impactful, but it's also undeniable that companies wasted unfathomable amounts of cash for no return at the cost of worker well being.

Junior is just the lower rank. You will still have a lower rank; fine, just don't call it junior.

They're deliberately popping the bubble. If they'd actually thought this and cared, they'd have said it 2 years ago, before the layoffs.

Stop getting played.

but wait, what about my Nvidia stocks bro? Can we keep the AI hype going bro. Pls bro just make another AI assistant editor bro.

[flagged]

  • Time will show you are right and almost all the other dumbasses here at HN are wrong. Which is hardly surprising since they are incapable of applying themselves around their coming replacement.

    They are 100% engineers and 100% engineers have no ability to adapt to other professions. Coding is dead, if you think otherwise then I hope you are right! But I doubt it because so far I have only heard arguments to the counter that are obviously wrong and retarded.