Comment by TriangleEdge

5 months ago

This AI race is happening so fast. Seems like it to me anyway. As a software developer/engineer, I am worried about my job prospects... time will tell. I am wondering what will happen to the West Coast housing bubbles once software engineers lose their high price tags. I guess the next wave of knowledge workers will move in and take their place?

My guess is that, yes, the software development job market is being massively disrupted, but there are things you can do to come out on top:

* Learn more of the entire stack, especially the backend, and devops.

* Embrace the increased productivity on offer to ship more products, solo projects, etc.

* Be as selective as possible in how you spend your productive time: being uber-effective can mean thinking and planning over longer timescales.

* Set up an awesome personal knowledge management system and agentic assistants.

  • We have thousands of legacy systems to maintain. I'm not sure everything could be rewritten or maintained with only an LLM. If an LLM can build a whole system on its own and then maintain and fix it, then it's not just us software developers who will suffer: there will be nothing left to sell or market, because people will just ask an LLM to do it. I'm not sure this is possible. ChatGPT gave me a list of commands for my EC2 instance, and one of them, when executed, made me lose access over SSH. It didn't warn me. So "blindly" following an LLM through a cascade of instructions, at massive scale and over a long period, could also lead to massive bugs or data corruption. Who among us hasn't asked an LLM for code that contained mistakes we then had to point out to it? I doubt systems will stay robust under full autonomy without any human supervision. But it's a great tool for iterating and throwing away code after testing ideas.
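
    To illustrate with a hypothetical sketch (not the actual commands I was given - the security group ID and region below are placeholders): a single change like this boto3 call is enough to silently cut off SSH.

      # Hypothetical sketch: one innocuous-looking "cleanup" step that locks you out.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

      # Revoking port-22 ingress looks harmless among suggested commands, but once
      # it runs, no new SSH connections can reach the instance.
      ec2.revoke_security_group_ingress(
          GroupId="sg-0123456789abcdef0",  # placeholder security group ID
          IpProtocol="tcp",
          FromPort=22,
          ToPort=22,
          CidrIp="0.0.0.0/0",
      )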

  • > Learn more of the entire stack, especially the backend, and devops.

    I actually wonder about this. Is it better to gain some relatively mediocre experience at lots of things? AI seems to be pretty good at lots of things.

    Or would it be better to develop deep expertise in a few things? Areas where even smart AI with reasoning still can get tripped up.

    Trying to broaden your base of expertise seems like it’s always a good idea, but when AI can slurp the whole internet in a single gulp, maybe it isn’t the best allocation of your limited human training cycles.

  • Do you have any specific tips for the last point? I completely agree with it and have set up a fairly robust Obsidian note-taking structure that will benefit greatly from an agentic assistant. Do you use specific tools or frameworks for this?

    • What works well for me at the moment is to write 'books' - i.e. use AI as a writing assistant for large documents. I do this because the act of compiling the info with AI assistance helps me assimilate the knowledge. I use a combination of ChatGPT, Perplexity, and Gemini with NotebookLM - to merge responses from separate LLMs, get critical feedback on a response or a chunk of writing, etc. (a rough sketch of the merging step is below, after the links).

      This is a really accessible setup and is great for my current needs. Taking it to the next stage with agentic assistants is something I'm only just starting out on. I'm looking at WilmerAI [1] for routing AI workflows and Hoarder [2] to automatically ingest and categorize bookmarks, docs, and RSS feed content into a local RAG.

      [1] https://github.com/SomeOddCodeGuy/WilmerAI

      [2] https://hoarder.app/
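
      For the merging step, here is a minimal Python sketch of what I mean (model names, the merge prompt, and the two-provider choice are illustrative assumptions; it uses the official openai and anthropic clients with API keys set in the environment):

        # Collect independent drafts of one prompt from two LLMs, then merge them.
        from openai import OpenAI
        import anthropic

        openai_client = OpenAI()               # reads OPENAI_API_KEY
        claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

        def drafts(prompt: str) -> list[str]:
            """Ask both models the same question independently."""
            gpt = openai_client.chat.completions.create(
                model="gpt-4o",  # placeholder model name
                messages=[{"role": "user", "content": prompt}],
            ).choices[0].message.content
            claude = claude_client.messages.create(
                model="claude-3-7-sonnet-latest",  # placeholder model name
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            ).content[0].text
            return [gpt, claude]

        def merge(prompt: str) -> str:
            """Have one model merge the drafts and flag disagreements."""
            merge_prompt = (
                "Merge these drafts into one coherent answer and note any "
                "contradictions between them:\n\n" + "\n\n---\n\n".join(drafts(prompt))
            )
            return openai_client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": merge_prompt}],
            ).choices[0].message.content

        print(merge("Summarize the tradeoffs of local RAG pipelines."))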

It seems to be slowing down, actually. Last year was wild until around Llama 3. The latest improvements are relatively small. Even the reasoning models are a small improvement over the explicit planning with agents that we could already do before - it's just nicely wrapped and slightly tuned for that purpose. DeepSeek made some serious efficiency improvements, but not many user-visible ones.

So I'd say that the AI race is starting to plateau a bit recently.

  • While I agree, you have to remember how high the dimensionality of the labor-skill space is. The way I see it, you can imagine the capability of AI as a radius, and the set of tasks it can cover as a sphere. Linear improvements in performance cause cubic (or whatever the labor-skill dimensionality is) improvements in task coverage.
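
    Making the scaling explicit (taking the geometric picture literally; d, the labor-skill dimensionality, is the assumption here): the volume of a d-ball of capability radius r is

      V_d(r) = \frac{\pi^{d/2}}{\Gamma(\frac{d}{2}+1)}\, r^d,
      \qquad
      \frac{V_d(r+\Delta r)}{V_d(r)} = \Big(1 + \frac{\Delta r}{r}\Big)^{d}

    so a linear capability gain \Delta r multiplies the covered task volume by a factor of degree d; d = 3 gives the cubic growth mentioned above.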

    • I'm not sure that's true with the latest models. o3-mini is good at analytical tasks and coding, but it really sucks at prose. Sonnet 3.7 is good at thinking but lost some ability in creating diffs.

It has the potential to affect a lot more than just SV/the West Coast - in fact, SV may be one of the only areas with some silver lining in AI development. I think these models have a chance to disrupt employment in the industry globally. Ironically, it may be only SWEs and a few other fields (writing, graphic design, etc.) that truly change. You can see that Anthropic and other AI labs are targeting SWEs in particular - just look at the announcement "Claude 3.7 Sonnet and Claude Code" - there is very little mention of any other domain in their announcement posts.

For people who aren't in SV for whatever reason and haven't seen the really high pay associated with being there, SWE is just a standard job, often stressful, with lots of ongoing learning required. The pain/anxiety of being disrupted is even higher for them, since having high disposable income to invest/save would have been less likely. Software to them would have been a job with pay comparable to other jobs in the area, often requiring you to be degree-qualified as well - anecdotally, many I know got into it for the love, not the money.

Who would have thought the first job automated by AI would be software itself? Not manual labor, or self-driving cars. Other industries either seem to have hit dead ends, or have other barriers (regulation, closed knowledge, etc.) that make them harder to automate. SWEs have set an example for other industries: don't let AI in, or keep it in-house as long as possible. Be closed source, in other words. Seems ironic in hindsight.

  • What do you even do then as a student? I've asked this dozens of times with zero practical answers at all. Frankly I've become entirely numb to it all.

    • Be glad that you are empowered to pivot - I'm assuming you are still young, being a student. In a disrupted industry you want to be either young (time to change out of it) or old (50+, able to retire with enough savings). The middle-aged people (say 15-25 years in the industry; your 35-50 year olds) are most in trouble, depending on the domain they are in. For all the "friendly" marketing, IMO they are targeting tech jobs in general - for many people, if it wasn't for tech/coding/etc., they would never need to use an LLM at all. Anthropic's recent stats on who uses their products are telling - it's mostly code, code, code.

      The real answer is either to pivot to a domain where computer-use/coding skills are secondary (i.e. you need the knowledge but it isn't primary to the role), or to move to an industry that isn't very exposed to AI, whether due to natural protections (e.g. trades) or artificial ones (e.g. regulation, or oligopolies colluding to prevent knowledge leaking to AI). May not be a popular comment on this platform - I would love to be wrong.

    • I'm sure lots of potential students / bootcampers are now not going into programming (or if they are, the smart ones try to go into niches like AI and skip web/backend/Android altogether). This will work against the number of jobs being reduced by AI. It will take a few years to play out, but at some point we will see fewer people trying to get into the field and applying for jobs, certainly for junior positions. We've already had ~2 bad years; a couple more like this will really dry up the flow of newcomers. Fewer people coming in (than otherwise would have) means that for every person who retires or leaves the industry, there are fewer people to take their place. This situation is quite complex, with lots of parameters working in different directions, so it's very early to try to get a read on where this is going.

      As a new career I'd probably not choose SWE now. But if you've done 10 years already, I'd ride it out; there is a good chance most of us will remain employed for many years to come.

I'm not too concerned short to medium term. I feel there are just too many edge cases and nuances that are going to be missed by AI systems.

For example, systems don't always work in the way they're documented to. How is an AI going to differentiate cases where there's a bug in a service vs a bug in its own code? How will an AI even learn that the bug exists in the first place? How will an AI differentiate between someone reporting a bug and a hacker attempting to break into a system?

The world is a complex place and without ACTUAL artificial intelligence we're going to need people to at least guide AI in these tricky situations.

My advice would be to get familiar with using AI and new AI tools and how they fit into our usual workflows.

Others may disagree, but I don't think software engineers (at least the good ones) are going anywhere.

I think if models improve (but we don't get a full singularity) then jobs will increase.

e.g. if software becomes 5x cheaper to make, demand will go up by more than 5x, since supply is highly constrained right now. Lots of companies want better software, but it costs too much.
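
A back-of-the-envelope version of this (constant-elasticity demand is my assumption, not something the parent comment claims): with demand Q = A·P^(-ε) for software at effective price P,

  Q(P) = A\,P^{-\varepsilon}
  \quad\Rightarrow\quad
  \frac{Q(P/5)}{Q(P)} = 5^{\varepsilon},
  \qquad
  S(P) = P\,Q(P) \propto P^{1-\varepsilon}

so if the elasticity ε > 1, a 5x cost drop raises the quantity of software demanded by more than 5x, and total spending on software rises rather than falls.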

That will create more jobs.

There'll be more product management, human interaction, and edge-case testing, and less typing. Although I think there'll also be a bunch of very technical jobs debugging things when the models fail.

So my advice is to learn skills that help make software useful to people and businesses - from user research to product management - as well as engineering.

  • The thing is that cost won't go down by just 5x, but by much more.

    Once the AI gets smart enough that it only requires an intern to write the prompt and fix the few remaining mistakes, development will cost next to nothing.

    There is only so much demand for software development.

Trade your labour for capital. Own the means of production. This translates to: build a startup.

[flagged]

  • >There is no intelligence here and Claude 3.7 cannot create anything novel.

    I wouldn't be surprised if people would continue to deny the actual intelligence of these models even in a scenario where they were able to solve the Riemann hypothesis.

    "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'" - cit

  • Even when I feel this, 90% of any novel thing I'm doing is still old gruntwork, and Claude lets me speed through that and focus all my attention on the interesting 10% (disclaimer: I'm at Anthropic)

    • Do you think the "deep research" feature that some AI companies have will ever apply to software? For example, I had to update Spring in a Java codebase recently. AI was only mildly helpful in figuring out why I was seeing some errors, but that's it.

    • One can also steal directly from GitHub and strip the license to avoid this grunt work. LLMs automate the stealing.

  • How many novel things does a developer do at work as a percentage of their time?

    • That's because stacks/APIs/ecosystems are super complicated and require lots of reading/searching to figure out how to make things happen. Now that time will be reduced dramatically, and devs' time will shift to more novel things.

  • The threat is not autocomplete, it's translation.

    "translating" requirements into code is what most developers' jobs are.

    So "just" translation is a threat to job security of developers.

  • Build on top of stolen code, no less. HN hates to hear it, but LLMs are a huge step back for software freedom: as long as they call it "AI" and politicians don't understand it, companies can launder GPL code and reuse it without credit and without giving users their rights.

  • This is BS and you are not listening and watching carefully.