Comment by burnte

19 hours ago

> Agents are a boon for extraverts and neurotypical people.

As an extrovert, the chance I'll use an AI agent in the next year is zero. Not even a billion to one, but a straight zero. I understand very well how AI works, and as such I have absolutely no trust in it for anything that isn't easy, simple, or already solved, which means I have virtually no use for generative AI. Search, reference, data transformation, sure. Coding? Not without verification, or without being able to understand the code myself.

I can't even trust Google Maps to give me a reliable route anymore, why would I actually believe some AI model can code? AI tools are helpers, not workers.

>no trust in it for anything that isn't easy/simple/solved

I'm not sure what part of programming isn't already solved thousands of times over for most languages out there. I'm only using it for lowly web development, but I can tell you it can definitely do that at a level that surprises me. It's not just "auto-complete"; it's actually able to 'think' over code I've broken, or code that I want improved, and give me not just one but multiple paths to make it better.

  • In the case of programming, the issue isn't unsolved problems so much as it is in other domains; it's context and understanding. LLMs are great for small chunks of code, but people think you can vibe-code entire interactive applications with no programming knowledge. LLMs simply don't understand, so they can't keep a cohesive idea of the end goal in mind. The larger the codebase they need to work on, the more likely they are to make catastrophic errors, create massive security flaws, or just generate nonfunctional code.

    Programming LLMs will become awesome when we create more narrowly targeted models rather than these "train on everything" ones.