Comment by DougBTX

6 days ago

From the article:

> A 2024 GitHub survey found that nearly all enterprise developers (97%) are using Generative AI coding tools. These tools have rapidly evolved from experimental novelties to mission-critical development infrastructure, with teams across the globe relying on them daily to accelerate coding tasks.

That seemed high. Here's what the report actually says:

> More than 97% of respondents reported having used AI coding tools at work at some point, a finding consistent across all four countries. However, a smaller percentage said their companies actively encourage AI tool adoption or allow the use of AI tools, varying by region. The U.S. leads with 88% of respondents indicating at least some company support for AI use, while Germany is lowest at 59%. This highlights an opportunity for organizations to better support their developers’ interest in AI tools, considering local regulations.

Fun that the survey uses the stats to argue that companies should support increased usage, while the article uses them to try to show near-total usage already.

I’d be pretty skeptical of any of these surveys about AI tool adoption. At my extremely large tech company, all developers were forced to install AI coding assistants into our IDEs and browsers (via managed software updates that can’t be uninstalled). Our company then put out press releases parading how great the AI adoption numbers were. The statistics are manufactured.

  • Yup, this happened at my midsize software company too

    Meanwhile, no one actually building software that I've been talking to is using these tools seriously for anything, at least nothing they'll admit to, anyway.

    More and more, when I see people who are strongly pro-AI code and "vibe coding", I find they are either former devs who moved into management and don't write much code anymore, or people with almost no dev experience at all who are absolutely not qualified to comment on the value of the code generation abilities of LLMs.

    When I talk to people whose job is mostly about writing code, they aren't using AI tools much, except the occasional junior dev who doesn't have much of a clue.

    These tools have some value, maybe. But it's nowhere near what the hype would suggest.

  • This is surprising to me. My company (top 20 by revenue) has forbidden us from using non-approved AI tools (basically anything other than our own ChatGPT / LLM tool). Obviously it can't truly be enforced, but they do not want us using this stuff for security reasons.

    • My company sells AI tools, so there’s a pretty big incentive for us to promote their use.

      We have the same security restrictions for AI tools that weren’t created by us.

  • I’m skeptical when I see “% of code generated by AI” metrics that don’t account for the human time spent parsing and then dismissing bad suggestions from AI.

    Without including that measurement, there are some rather large perverse incentives at play.

Even that quote itself jumps from "are using" to "mission-critical development infrastructure ... relying on them daily".

> This highlights an opportunity for organizations to better support their developers’ interest in AI tools, considering local regulations.

This is a funny one to see included in GitHub's report. If I'm not mistaken, GitHub is now taking the same approach as Shopify with regard to requiring LLM use and including it in a self-report survey for annual review.

I guess they took their 2024 survey to heart and are ready to 100x productivity.

I've tried coding with AI for the first time recently[1], so I just joined that statistic. I assume most people here already know how it works and I'm just late to the party, but my experience was that Copilot was very bad at generating anything complex through chat requests, yet very good at generating single lines of code with autocompletion. It really highlighted the strengths and shortcomings of LLMs for me.

For example, if you try adding getters and setters to a simple Rect class, it's so fast to do with Copilot that you might add more getters/setters than you initially wanted. You type pub fn right() and it autocompletes left + width. That's convenient, and not something traditional code completion can do.
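
For illustration, here's a minimal sketch of that kind of completion (the field names and types are my assumption, not something from the article):

    pub struct Rect {
        left: i32,
        top: i32,
        width: i32,
        height: i32,
    }

    impl Rect {
        // You type the signature; Copilot suggests the body from the field names.
        pub fn right(&self) -> i32 {
            self.left + self.width
        }

        // ...and once you start, it's tempting to keep accepting more of these.
        pub fn bottom(&self) -> i32 {
            self.top + self.height
        }
    }

    fn main() {
        let r = Rect { left: 10, top: 20, width: 30, height: 40 };
        println!("right = {}, bottom = {}", r.right(), r.bottom()); // right = 40, bottom = 60
    }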

I wouldn't say it's "mission critical" however. It's just faster than copy pasting or Googling.

The vulnerability highlighted in the article appears to only exist if you put code straight from Copilot into anything without checking it first. That sounds insane to me. It's input just as untrusted as code from some random guy on the Internet.

[1] https://www.virtualcuriosities.com/articles/4935/coding-with...

  • > it's so fast to do it with Copilot you might just add more getters/setters than you initially wanted

    Especially if you don't need getters and setters at all. It depends on your use case, but for your Rect class you can just have x, y, width, height as public attributes. I know there are arguments against it, but the general idea is that if AI makes it easy to write boilerplate you don't need, then it has made development slower in the long run, not faster, since that's additional code to maintain.
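
    Something like this, for example (a sketch of the public-field version; the types are assumed):

        pub struct Rect {
            pub x: i32,
            pub y: i32,
            pub width: i32,
            pub height: i32,
        }

        fn main() {
            // Callers read and write fields directly; there's no getter/setter
            // boilerplate to generate, review, or maintain.
            let mut r = Rect { x: 0, y: 0, width: 30, height: 40 };
            r.width = 50;
            println!("{} {} {} {}", r.x, r.y, r.width, r.height);
        }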

    > The vulnerability highlighted in the article appears to only exist if you put code straight from Copilot into anything without checking it first. That sounds insane to me. It's just as untrusted input as some random guy on the Internet.

    It doesn't sound insane to everyone, and even you might lower your standards for insanity when you're on a deadline and just want to be done with the thing. And even if you check the code, it's easy to overlook things, especially when they're designed to be overlooked: for example, typos leading to malicious forks of packages.

    • Once the world all runs on AI-generated code, how much memory and how many CPU cycles will be lost to unnecessary code? Is the next wave of HN top stories going to be “How we ditched AI code and reduced our AWS bill by 10000%”?

  • Your IDE should already have facilities for generating class boilerplate, like the package declaration, brackets, and so on. Then you put in the attributes and generate a constructor and any getters and setters you need; it's all so fast and trivially generated that I doubt LLMs can actually contribute much to it.

    Perhaps they can make suggestions for properties based on the class name, but so can a dictionary once you start writing.

    • IDEs can generate the proper structure and make simple assumptions, but LLMs can also guess what algorithms should generally look like. In the hands of someone who knows what they're doing, I'm sure it helps produce higher-quality code than they would otherwise be capable of.

      It's unfortunate that it has become so used by students and juniors. You can't really learn anything from Copilot, just as I couldn't learn Rust just by telling it to write Rust. Reading a few pages of the book explained a lot more than Copilot fixing broken code with new bugs and then fixing those bugs by reverting its own code back to the old bugs.

It might be fun if it didn't seem dishonest. The report tries to highlight a divide between employee curiosity and employer encouragement, but that framing is undercut by its own finding that most developers have tried the tools anyway.

The article misrepresents that statistic to imply universal utility: that professional developers find these tools so useful that they universally choose to use them daily. It implies that Copilot is somehow more useful than an IDE, without ever making that ridiculous claim outright.

  • The article is typical security issue embellishment/blogspam. They are incentivized to make it seem like AI is a mission-critical piece of software, because more AI reliance means a security issue in AI is a bigger deal, which means more pats on the back for them for finding it.

    Sadly, much of the security industry has been reduced to a competition over who can find the biggest vuln, and it has the effect of lowering the quality of discourse around all of it.

  • And employers are now starting to require compliance with using LLMs regardless of employee curiosity.

    Shopify now includes LLM use in annual reviews, and if I'm not mistaken GitHub followed suit.

In a sense we've reached 100% of developers, and usage is still expanding, as non-developers can now develop applications too.

  • Wouldn't that then make those people developers? The total pool of developers would grow; the percentage couldn't go above 100%.

    • I mean, I spent years learning to code in school and at home, but never managed to get a job doing it, so I just do what I can in my spare time, and LLMs help me feel like I haven't completely fallen off. I can still hack together cool stuff and keep learning.

    • Probably. There's a similar question: if you ask ChatGPT / Midjourney to generate a drawing, are you an artist? (To me, yes, which would mean that AI "vibe coders" are actual developers in their own way.)

I agree this sounds high. I wonder if "using Generative AI coding tools" in this survey is satisfied by merely having an IDE with Gen AI capabilities, not necessarily using those features within the IDE.