Comment by abxyz
19 days ago
The author of that comment, an employee of Microsoft, goes on to say:
> It is my opinion that anyone not at least thinking about benefiting from such tools will be left behind.
The read here is: Microsoft is so abuzz with excitement/panic about AI taking all software engineering jobs that Microsoft employees are jumping on board with Microsoft's AI push out of a fear of "being left behind". That's not the confidence-inspiring statement they intended it to be; it's the opposite. It underscores that this isn't the .NET team "experimenting to understand the limits of the tools" but rather the .NET team trying to keep their jobs.
The "left behind" mantra that I've been hearing for a while now is a strange one to me.
Like, I need to start smashing my face into a keyboard for 10000 hours or else I won't be able to use LLM tools effectively.
If LLM is this tool that is more intuitive than normal programming and adds all this productivity, then surely I can just wait for a bunch of others to wear themselves out smashing their faces into a keyboard for 10000 hours and then skim the cream off of the top, no worse for wear.
On the other hand, if using LLMs is a neverending nightmare of chaos and misery that's 10x harder than programming (but with the benefit that I don't actually have to learn something that might accidentally be useful), then yeah I guess I can see why I would need to get in my hours to use it. But maybe I could just not use it.
"Left behind" really only makes sense to me if my KPIs have been linked with LLM flavor aid style participation.
Ultimately, though, physics doesn't care about social conformity and last I checked the machine is running on physics.
There's a third way things might go: on the way to "superpower for everyone", we go through an extended phase where AI is only a superpower in skilled hands. The job market bifurcates around this. People who make strong use of it get first pick of the good jobs. People not making effective use of AI get whatever's left.
Kinda like how word processing used to be an important career skill people put on their resumes. Assuming AI becomes that commonplace and accessible, will it happen fast enough that devs who want good jobs can afford to just wait it out?
I'm willing to accept this as a possibility but the case analysis still doesn't make much sense to me.
If LLM usage is easy then I can't be left behind because it's easy. I'll pick it up in a weekend.
If LLM usage is hard AND I can otherwise do the hard things that LLMs are doing then I can't be left behind if I just do the hard things.
Still, the only way I can be left behind is if LLM usage is nonsense or the same as just doing it yourself AND the important thing is telling managers that you've been using it for a long time.
Is the superpower bamboozling management with story time?
4 replies →
To be fair, word processing is a skill that a majority of professionals continue to lack.
Law, civil service, academia and those who learnt enough LaTeX and HTML to understand text documents are in the minority.
1 reply →
If you're not using it where it's useful to you, then I still wouldn't say you're getting left behind, but you're making your job harder than it has to be. Anecdotally I've found it useful mostly for writing unit tests and sometimes debugging (can be as effective as a rubber duck).
It's like the 2025 version of not using an IDE.
It's a powerful tool. You still need to know when to and when not to use it.
> It's like the 2025 version of not using an IDE.
That's right on the mark. It will save you a little bit of work on tasks that aren't the bottleneck on your productivity, and disrupt some random tasks that may or may not be important.
It makes so little difference that plenty of people in 2025 don't use an IDE, and looking at their performance from the outside one just can't tell.
Except that LLMs have less potential to improve your tasks and more potential to be disruptive.
You're right on the money. I've been amongst the most productive developers in every place I've worked at for the past 10 years while not using an IDE. AI is not even close to as revolutionary as it's being sold. Unfortunately, as always, the ones buying this crap are not the ones that actually do the work.
Even for writing tests, you have to proof-read every single line and triple check they didn't write a broken test. It's absolutely exhausting.
4 replies →
Yea, "using an IDE" is a very good analogy. IDEs are not silver bullets, although they no doubt help some engineers. There are plenty of developers, on the other hand, who are amazingly productive without using IDEs.
I feel like most of the people who swear by their AI are also the ones using plain text editors instead of full IDEs with actually working refactoring and relevant autocomplete, or who never write tests.
Tests are one of the areas where it performs least well. I can ask an LLM to summarize the functionality of code and be happy with the answer, but the tests it writes are the most facile unit tests, just the null hypothesis tests and the like. "Here's a test that the constructor works." Cool.
They are the exact same unit tests I never needed help to write, and the exact same unit tests that I can just blindly keep hitting tab to write with Intellij's NON-AI autocomplete.
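To make the complaint concrete, here's a minimal sketch (in Python, with an invented `ShoppingCart` class purely for illustration) contrasting the kind of "null hypothesis" test being described with a test that actually exercises behavior:

```python
# Hypothetical illustration: ShoppingCart is an invented example class,
# not from any real codebase.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price, qty=1):
        self.items.append((name, price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self.items)

# The facile kind of test: only verifies that the constructor ran.
# It can essentially never fail and catches no regressions.
def test_constructor_works():
    cart = ShoppingCart()
    assert cart is not None

# A more useful test: checks behavior that a real bug could break
# (e.g. forgetting to multiply by quantity).
def test_total_sums_price_times_quantity():
    cart = ShoppingCart()
    cart.add("apple", 2.0, qty=3)
    cart.add("pear", 1.5)
    assert cart.total() == 7.5
```

The first test is the sort of thing autocomplete can already produce; the second requires actually knowing what invariant matters.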
This is Stephen Toub, who is the lead of many important .NET projects. I don't think he is worried about losing his job anytime soon.
I think we should not read too much into it. He is honestly exploring how much this tool can help him resolve trivial issues. Maybe he was asked to do so by some of his bosses, but he is unlikely to fear the tool replacing him in the near future.
They don’t have any problem firing experienced devs for no reason. Including on the .NET team (most of the .NET Android dev team was laid off recently).
https://www.theregister.com/2025/05/16/microsofts_axe_softwa...
Perhaps they were fired for failing to show enthusiasm for AI?
I can definitely believe that companies will start (or have already started) using "Enthusiasm about AI" as justification for a hire/promote/reprimand/fire decision. Adherence to the Church Of AI has become this weird purity test throughout the software industry!
I love the fact that they seem to be asking it to do simple things because "AI can do the simple boring things for us so we can focus on the important problems", and then it floods them with so much meaningless mumbo jumbo that they could probably have done the simple thing in a fraction of the time they spend continuously correcting it.
It is called experimentation. That is how people evaluate new technology. By trying to do small things with it first. And if it doesn't work well - retrying later, once bigger issues are fixed.
1 reply →
Didn't M$ just fire like 7000 people, many of whom were involved in big important M$ projects? The CPython guys, for example.
Now, consider the game theory of saying "no" when your boss tells you to go play with the LLM in public.
Hot take: CPython is not an important project for Microsoft, and it is not led by them. The Faster CPython project had questionable achievements on top of that.
Half of Microsoft (especially server-side) still runs on .NET, and there are no real contributors outside of Microsoft. So it is a vital project.
1 reply →
Anyone not showing open AI enthusiasm at that level will absolutely be fired. Anyone speaking for MS will have to be openly enthusiastic or silent on the topic by now.
TBF they are dogfooding this (good) but it's just not going well
"eating our own dogshit"
> Microsoft employees are jumping on board with Microsoft's AI push out of a fear of "being left behind"
If they weren't experimenting with AI and coding and took a more conservative approach, while other companies like Anthropic were running similar experiments, I'm sure HN would also be critiquing them for not keeping up as a stodgy big corporation.
As long as they are willing to take risks by trying and failing on their own repos, it's fine in my books. Even though I'd never let that stuff touch a professional github repo personally.
Exactly. Ignoring new technologies can be a death sentence for a company, even one as large as Microsoft. Even if this technology doesn't pay off, it's still a good idea to at least look into potential uses.
Only in very specific circumstances where there are clear moats to be built (mobile was one of these that Microsoft missed, but that's a PLATFORM in a way no AI product at the moment comes close to). As far as I can tell, there is no evidence of such a thing with the current applications of AI and I am unconvinced that there ever will be. It's just going to ride on top of previous platforms. So you may need some sort of service for customers that are interested, but having the absolute best AI story just isn't something customers are going to care about at the end of the day if it means they would have to say migrate clouds.
At the moment, I'd argue that doing much more than what, say, Apple is doing would be potentially catastrophic. Not doing anything would be minimally risky, and doing just a little bit would be the no-risk play. I think Microsoft is making this mistake in a big way and will continue to lose market share over it and burn cash, albeit slowly since they are already giants. The point is, it's a giant that has momentum going in the opposite direction than what they want, and they are incapable of fixing the things causing it to go in that direction because their leadership has become delusional.
I don't think they are mutually exclusive. Jumping on board seems like the smart move if you're worried about losing your career. You also get to confirm your suspicions.