
Comment by anonymousab

6 days ago

Or they work with languages, libraries, systems or problem areas where the LLMs fail to perform anywhere near as well as they do for you and me.

Regarding libraries or systems the AI doesn't know: you can fine-tune an LLM, or use RAG, e.g. via an MCP server like Context7, to feed it specialist knowledge about the libraries you depend on, making it a more knowledgeable companion when it wasn't trained well (or at all) on the topic you need for your work. Writing your own specs and similar docs also helps.
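
To make the RAG part concrete, here is a purely illustrative Python sketch of the idea: pull the most relevant snippets out of your own docs and prepend them to the prompt. The retriever is a toy keyword-overlap scorer standing in for real embeddings, the doc snippets are invented placeholders, and the actual model/MCP call is left out.

    # Toy retrieval-augmented prompt assembly. The "retriever" is a naive
    # keyword-overlap scorer standing in for real embeddings; the doc
    # snippets below are invented placeholders.
    import string

    DOCS = {
        "gpio_notes.md": "Set the pin to input mode and enable the internal pull-up before you read the button state.",
        "uart_notes.md": "Open the UART at 115200 baud before writing any bytes to it.",
        "build_notes.md": "Release builds need -DNDEBUG and the vendor SDK headers on the include path.",
    }

    def tokens(s: str) -> set[str]:
        """Lowercase, strip punctuation, and split into a set of words."""
        return set(s.lower().translate(str.maketrans("", "", string.punctuation)).split())

    def score(query: str, text: str) -> int:
        """Count how many query words also appear in the document text."""
        return len(tokens(query) & tokens(text))

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k snippets that best match the query."""
        ranked = sorted(DOCS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
        return [f"{name}: {text}" for name, text in ranked[:k]]

    def build_prompt(question: str) -> str:
        """Prepend the retrieved snippets so the model answers from *your* docs."""
        context = "\n".join(retrieve(question))
        return (
            "Answer using only the project documentation below.\n"
            f"--- docs ---\n{context}\n--- end docs ---\n"
            f"Question: {question}"
        )

    if __name__ == "__main__":
        # The assembled prompt would then go to whatever model or MCP tool you use.
        print(build_prompt("How do I read a button on a GPIO pin?"))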

  • You need a good amount of example code to train it on. I find LLMs moderately useful for web dev, but fairly useless for embedded development. They'll pick up some project-specific code patterns, but they clearly have no concept of what it means to enable a pull-up on a GPIO pin.
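
    For reference, here's roughly what that looks like in MicroPython (board and pin number are arbitrary assumptions): the pin has to be configured as an input with the internal pull-up enabled, and a button wired to ground then reads low when pressed.

        from machine import Pin   # MicroPython, e.g. on an RP2040/ESP32-class board
        import time

        # Input pin with the internal pull-up enabled; the button connects
        # the pin to ground, so it reads 0 when pressed and 1 when released.
        button = Pin(14, Pin.IN, Pin.PULL_UP)

        while True:
            if button.value() == 0:   # active low: pressed
                print("button pressed")
            time.sleep_ms(50)         # crude debounce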

Still haven’t seen an example. It’s always the same: people don’t want to give hints or context, and the moment you start doing things properly it’s “oh no, this is just a bad example, it still can’t do what you do.”

  • My experience is the opposite. I've yet to see a single example of AI working well for non-trivial work that I consider relevant, based on 15+ years of experience in this field. It's good for brainstorming, writing tests, and greenfield work/prototyping. Add business context more complicated than can be explained in a short sentence, or any nuance or novelty, and it becomes garbage pretty much instantly.

    Show me an AI agent adding a meaningful new feature or fixing a complicated bug in an existing codebase that serves the needs of a decent-sized business. Or proposing and implementing a rearchitecture that simplifies such a codebase while maintaining existing behavior. Show me it doing a good job of that, without a prompt from an experienced engineer telling it how to write the code.

    These types of tasks are what devs spend their days actually doing, as far as coding is concerned (never mind the non-coding work, which is usually the harder part of the job). Current AI agents simply can't do these things in real-world scenarios without very heavy hand-holding from someone who thoroughly understands the work being done and is basically using AI as an incredibly fast typing secretary plus doc-lookup tool.

    With that level of hand-holding, it probably does speed me up by anywhere from 10% to 50% depending on the task, although in hindsight it also slows me down sometimes. Net hours saved is anywhere from 0 to 10 per week depending on the week, skewing toward the lower end of that range.