Comment by asdev

3 days ago

People who don't code (management, leadership) think AI will 10x the company, but it's really a 40-60% boost. Meanwhile, engineers have to feign adopting these tools for fear of layoffs

> 40-60% boost

Where? What industry, what kind of projects? The only one where I can imagine it to be true is vulnerability research, and I imagine all the low-hanging fruit to be picked soon

  • Mine, easily. Senior (near staff) level embedded engineering.

    It will spin up a boilerplate U-Boot or BSP config no problem. I still go in and manually check and add peripherals, but Opus 4.7 is terrifyingly smart.

    Need to modify or add a new peripheral, it's there no problem. Or in a bare metal project, I can point it at an STM32 cubemx starter repo and ask for a feature (set up the ADC on pins 4 and 7, ask me for parameters) and it's just done. I do in a day what would probably take me 2.

    It doesn't help me with reviewing others' work, or planning (I maintain that these are manual tasks). So yeah, I agree with the 40-60%. The parts of my job it helps, it really helps.

    • > I can point it at an STM32 cubemx starter repo and ask for a feature

      My experience is that it will attempt to read from the wrong memory block, resulting in garbage. But that was a while ago, so maybe LLMs have gotten better.

    • Yeah, just had Codex/Gemini write me an nRF52 bootloader that fits in a sub-4k flash sector, with OTA and DFU support (well, the app does the OTA download, then the bootloader validates and decompresses the image). Works best if you let them use OpenOCD on a real device; then they can iterate until it starts working.

      I didn't even need that bootloader, just didn't like the fact that the Adafruit one takes too much space :)

  • I work on an ETL platform and it definitely is a huge boost in certain things, but a drain in others.

    We started working on a new product a few months ago and it's really dangerous up front on an empty code base. It can quickly write more code than you can comfortably understand. The more serious danger is when three people are all doing that at once. I had to bring this up at meetings and try to get a better review culture going.

    Now that we're a few months in and changes are more targeted additions to an existing system we're happy with, it's _huge_ (which has been my experience on our existing product). I can drop a brief paragraph I speech-to-texted into my agent, give it a general starting place (where I imagine the issue/feature extension point is), and then tell it to do some research and propose a change. I'd guess it's about 50% of the time that I have to update its implementation plan. Then I let it run (my favorite is setting this up before a meeting) and come back. Then we have to review the code and go from there.

    Definitely a 50%+ speed-up in some cases, but not all. It's also great for problems I'd otherwise procrastinate on, as it reduces friction so much.

It's not really 60%. It accelerates a lot of code creation and saves some time on admin tasks. That's it.

What's funny to me is the seeming lack of AI usage among management despite so much of their work being amenable to AI acceleration.

At my company (big name, AI beneficiary), middle management seems to mostly be concerned with shuffling chairs on the deck of the Titanic while they wait for their stock to fully vest. There is very little interest in improving anything, just an obsession with risk avoidance and performative sideshows whenever upper management wonders why execution is so poor.

  • At my company middle management is using Gemini to churn out reams of useless documents in lieu of anything approaching "program management" or similar

40% boost for smart engineers, for now.

People churning out slop are slowing me down, and the full effects of it won't be felt for a while.

  • Yesterday, I had my first experience of a mid-level dev stuck on a problem, coming to me with Codex and Copilot summaries of what those tools thought the problem was, which turned out to be completely off-base.

    Codex was pretty sure something was wrong with the response object being returned by the endpoint in question. It turned out there was a conversion method applied to the endpoint response, which mutated its input. This method had been running without problems for a while, until the dev put it in a useEffect. At that point, React dev mode's policy of rendering everything twice kicked in, which caused the second pass through the conversion method to fail on the now-mutated input object.

    Codex never even hinted that the conversion method mutating the input could be a problem, nor anything about React dev mode rendering everything twice (specifically to catch problems like this). Apparently, neither of those came up much in its training data.

    My point is that this dev seems to have lost, in a few short months of writing everything with Codex, the ability to trace an error to its source (the error trace was being swallowed in a Codex-written catch block that spit out a generic error message). He was completely stuck and just kept doubling down on trying to get Codex to solve the problem, even checking with Copilot as a backup. I'm not optimistic about where this is headed.
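
    The failure mode is easy to reproduce outside React. Here's a hypothetical sketch (`convertResponse` and the field names are made up for illustration, not from the actual incident): a conversion helper that mutates its argument works on the first call, then throws when a second call, like dev mode's double invocation, hands it the same, already-mutated object.

    ```javascript
    // Hypothetical conversion helper that mutates its input: it moves
    // `items` to `rows` and deletes the original field.
    function convertResponse(resp) {
      resp.rows = resp.items.map((it) => it.id);
      delete resp.items;
      return resp;
    }

    const response = { items: [{ id: 1 }, { id: 2 }] };

    // First render: works fine.
    const first = convertResponse(response);
    console.log(first.rows); // [ 1, 2 ]

    // React dev mode (StrictMode) re-runs effects; the second call sees the
    // already-mutated object, `resp.items` is undefined, and `.map` throws.
    let secondCallThrew = false;
    try {
      convertResponse(response);
    } catch (e) {
      secondCallThrew = true; // TypeError: reading 'map' of undefined
    }
    console.log(secondCallThrew); // true
    ```

    The fix is the usual one: make the conversion pure (return a new object instead of mutating the argument), and the double invocation becomes harmless.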

  • The new bottleneck for development at work is code reviews. Devs are creating whole features that would have taken months in only a couple of weeks, but code reviewing that is a slow, painful process

    • This is why I'm not that excited about vibe coding. The bottleneck has always been understanding what the heck is going on.

      In my view you should 1) use AI as a tool to help you learn and 2) write boilerplate you could have easily written yourself. Getting it to think for you is counterproductive (at least until it replaces us entirely).