Comment by kruuuder

18 days ago

A comment on the first pull request provides some context:

> The stream of PRs is coming from requests from the maintainers of the repo. We're experimenting to understand the limits of what the tools can do today and preparing for what they'll be able to do tomorrow. Anything that gets merged is the responsibility of the maintainers, as is the case for any PR submitted by anyone to this open source and welcoming repo. Nothing gets merged without it meeting all the same quality bars and with us signing up for all the same maintenance requirements.

The author of that comment, an employee of Microsoft, goes on to say:

> It is my opinion that anyone not at least thinking about benefiting from such tools will be left behind.

The read here is: Microsoft is so abuzz with excitement/panic about AI taking all software engineering jobs that Microsoft employees are jumping on board with Microsoft's AI push out of a fear of "being left behind". That's not the confidence-inspiring statement they intended it to be; it's the opposite. It underscores that this isn't the .NET team "experimenting to understand the limits of what the tools can do today" but rather the .NET team trying to keep their jobs.

  • The "left behind" mantra that I've been hearing for a while now is the strange one to me.

    Like, I need to start smashing my face into a keyboard for 10000 hours or else I won't be able to use LLM tools effectively.

    If LLMs are this tool that's more intuitive than normal programming and adds all this productivity, then surely I can just wait for a bunch of others to wear themselves out smashing their faces into keyboards for 10000 hours and then skim the cream off the top, no worse for wear.

    On the other hand, if using LLMs is a neverending nightmare of chaos and misery that's 10x harder than programming (but with the benefit that I don't actually have to learn something that might accidentally be useful), then yeah I guess I can see why I would need to get in my hours to use it. But maybe I could just not use it.

    "Left behind" really only makes sense to me if my KPIs have been linked with LLM flavor aid style participation.

    Ultimately, though, physics doesn't care about social conformity and last I checked the machine is running on physics.

    • There's a third way things might go: on the way to "superpower for everyone", we go through an extended phase where AI is only a superpower in skilled hands. The job market bifurcates around this. People who make strong use of it get first pick of the good jobs. People not making effective use of AI get whatever's left.

      Kinda like how word processing used to be an important career skill people put on their resumes. Assuming AI becomes that commonplace and accessible, will it happen fast enough that devs who want good jobs can afford to just wait it out?

  • If you're not using it where it's useful to you, then I still wouldn't say you're getting left behind, but you're making your job harder than it has to be. Anecdotally I've found it useful mostly for writing unit tests and sometimes debugging (can be as effective as a rubber duck).

    It's like the 2025 version of not using an IDE.

    It's a powerful tool. You still need to know when to and when not to use it.

    • > It's like the 2025 version of not using an IDE.

      That's right on the mark. It will save you a little bit of work on tasks that aren't the bottleneck on your productivity, and disrupt some random tasks that may or may not be important.

      It makes so little difference that plenty of people in 2025 don't use an IDE, and looking at their performance from the outside, one just can't tell.

      Except that LLMs have less potential to improve your tasks and more potential to be disruptive.

    • Tests are one of the areas where it performs least well. I can ask an LLM to summarize the functionality of code and be happy with the answer, but the tests it writes are the most facile unit tests, just the null hypothesis tests and the like. "Here's a test that the constructor works." Cool.
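
      To make that concrete, here is a hypothetical xUnit sketch (the Parser class and its behavior are invented for illustration, not taken from any real PR). The first test is the facile kind described above; the second actually pins down behavior:

          using System;
          using Xunit;

          // Hypothetical class under test, stubbed so the sketch compiles.
          public class Parser
          {
              public object Parse(string input)
              {
                  if (input.EndsWith(",]"))
                      throw new FormatException("trailing comma");
                  return new object();
              }
          }

          public class ParserTests
          {
              // The kind of test described above: it only proves that
              // "new" didn't throw, and exercises no behavior at all.
              [Fact]
              public void Constructor_CreatesInstance()
              {
                  var parser = new Parser();
                  Assert.NotNull(parser);
              }

              // A test worth having: it pins down an actual contract.
              [Fact]
              public void Parse_RejectsTrailingComma()
              {
                  var parser = new Parser();
                  Assert.Throws<FormatException>(() => parser.Parse("[1,2,]"));
              }
          }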

  • This is Stephen Toub, who leads many important .NET projects. I don't think he is worried about losing his job anytime soon.

    I think we should not read too much into it. He is honestly exploring how much this tool can help him resolve trivial issues. Maybe he was asked to do so by some of his bosses, but he is unlikely to fear the tool replacing him in the near future.

    • I love the fact that they seem to be asking it to do simple things because "AI can do the simple boring things for us so we can focus on the important problems", and then it floods them with so much meaningless mumbo jumbo that they could probably have done the simple thing themselves in a fraction of the time they spend continuously correcting it.

    • Anyone not openly showing enthusiasm for AI at that level will absolutely be fired. Anyone speaking for MS will have to be openly enthusiastic or silent on the topic by now.

  • > Microsoft employees are jumping on board with Microsoft's AI push out of a fear of "being left behind"

    If they weren't experimenting with AI and coding and took a more conservative approach while other companies like Anthropic were running similar experiments, I'm sure HN would also be critiquing them for not keeping up, as a stodgy big corporation.

    As long as they are willing to take risks by trying and failing on their own repos, it's fine in my books. Even though I'd never let that stuff touch a professional GitHub repo personally.

    • Exactly. Ignoring new technologies can be a death sentence for a company, even one as large as Microsoft. Even if this technology doesn't pay off, it's still a good idea to at least look into potential uses.

  • I don't think they are mutually exclusive. Jumping on board seems like the smart move if you're worried about losing your career. You also get to confirm your suspicions.

This is important context, given that it would be absurd for the managers to have already drawn a definitive conclusion about the models’ capabilities. An explicit understanding that the purpose of the exercise is to get a better sense of the models’ current strengths and weaknesses in a “real world” context makes this actually quite reasonable.

  • So why in public, and why in the most ham-fisted way, and why on important infrastructure, and why with such a terrible integration that it can't even verify that things compile before opening a PR!

    In my org, we would have had to bypass pre-commit hooks to do this!