
Comment by georgeburdell

1 day ago

For me, I can’t get into using AI tools like Claude Code. The furthest I go is chat style, where I’m mostly in control. I enjoy the actual process of crafting code myself. For similar reasons, I could never be a manager.

Agents are a boon for extraverts and neurotypical people. If it gets to the point where the industry switches to agents, I’ll probably just find a new career

I strongly disagree agents are for extroverts.

I do agree it’s definitely a tool category with a unique set of features, and I am not surprised it’s off-putting to some. But its appeal is definitely clear to me as an introvert.

For me, LLMs are just a computer interface you can program using natural language.

I think I’m slightly ADD. I love coding _interesting_ things but boring tasks cause extreme discomfort.

Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!

It’s a great time to be a software engineer!

  • > For me, LLMs are just a computer interface you can program using natural language.

    I wish they were, but they're not that yet, because LLMs aren't very good at logical reasoning. So it's more like an attempt to program using natural language. Sometimes it does what you ask, sometimes not.

    I think "programming" implies that the machine will always do what you tell it, whatever the language, or reliably fail and say it can't be done because the "program" is contradictory, lacks sufficient detail, or doesn't have the necessary permissions/technical capabilities. If it only sometimes does what you ask, then it's not quite programming yet.

    > Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!

    I wish that, too, were true, and maybe it will be someday soon. But if I need to manually review the agent's output, then it doesn't feel like offloading much aside from the typing. All the same concentration and thought are still required, even for the boring things. If I could at least trust the agent to tell me whether it did a good job or is unsure, that would be helpful, but we're not even there yet.

    That's not to say the tools aren't useful, but they're not yet "programming in a natural language" and not yet something you can "offload" stuff to.

    • > ... LLMs aren't very good at logical reasoning.

      I'm curious about what experiences led you to that conclusion. IME, LLMs are very good at the type of logical reasoning required for most programming tasks. E.g. I only have to say something like "find the entries with the lowest X and highest Y that have a common Z from these N lists / maps / tables / files / etc." and it spits out mostly correct code instantly. I then review it and for any involved logic, rely on tests (also AI-generated) for correctness, where I find myself reviewing and tweaking the test cases much more than the business logic.
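
      A minimal sketch of the kind of code such a prompt tends to come back with (the record shape and names here are my own illustration, not anything the model is guaranteed to produce):

        # Hypothetical shape of the task: entries are dicts with "x", "y",
        # and "z" keys; per z common to all lists, pick the entry with the
        # lowest x and the entry with the highest y.
        from collections import defaultdict

        def find_common_entries(lists):
            by_z = defaultdict(list)
            for entries in lists:
                for entry in entries:
                    by_z[entry["z"]].append(entry)
            # Keep only z values present in every input list
            common = set(by_z)
            for entries in lists:
                common &= {entry["z"] for entry in entries}
            return {
                z: {
                    "lowest_x": min(by_z[z], key=lambda e: e["x"]),
                    "highest_y": max(by_z[z], key=lambda e: e["y"]),
                }
                for z in common
            }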

      But then I do all that for all code anyway, including my own. So just starting off with a fully-fleshed out chunk of code, which typically looks like what I'd pictured in my head, is a huge load off my cognitive shoulders.

      16 replies →

    • You can view Claude Code as a non-deterministic compiler where you input English and get functioning code on the other end.

      The non-determinism is not as much of a problem, because you are reading over the results and validating that what it created matches what you told it to do.

      I'm not talking about vibe-coding here: I'm grabbing the steering wheel with both hands, because this car allows me to go faster than if I were driving myself, but sometimes you have to steer or brake. And the analogy favors Claude Code here because you don't have to react in milliseconds while programming.

      TL;DR: if you make the commit, you are responsible for the code it contains.

      3 replies →

    • So here's a transcript of a recent Claude Code interaction I had, trying to build an MP3 organizer that relies on heuristics and fuzzy matching to make sense of all the random MP3s I downloaded in the 90s. This all happened in between prompts: I gave it a correct example and an incorrect example and asked it why it was producing the incorrect one.

      ● Let me trace through what happens with the "Jordan Rudess" example:

        1. Directory: Jordan Rudess/
        2. file_rel_path would be Path("Jordan Rudess") (after removing MP3 Disk prefix)
        3. If Jordan Rudess matches in path component 0, scope_path would be Path("Jordan Rudess")
        4. _path_is_in_scope checks if "Jordan Rudess".startswith("Jordan Rudess") → True
      
        This looks correct. Let me check the inference logic:
      

      ● Read(mp3_organizer_v2.py) ⎿ Read 15 lines (ctrl+r to expand)

      ● The inference priority looks correct - path_component_match is indeed the highest priority. But I think I see the issue now. Let me look at the actual logic flow in the main organize method:

      ● Read(mp3_organizer_v2.py) ⎿ Read 10 lines (ctrl+r to expand)

      ● The flow looks right. Let me examine the actual problem more carefully. Looking back at the _apply_path_component_matching method, I think there might be an issue with when the matching happens. Let me check the logic:

      ● Update Todos
        ⎿ Check _path_is_in_scope logic for correctness
        ⎿ Verify artist inference priority logic works with path_component_match
        ⎿ Test and fix any remaining issues with artist propagation

      ● I think I see the issue now. The problem is that the current code only looks for matches in files that already have file_id3 tags, but it needs to propagate those matches to all files in the directory. However, the current implementation is still only applying matches to files that have matching candidates, not ALL files. Let me check the exact logic:

      ● Read(mp3_organizer_v2.py) ⎿ Read 8 lines (ctrl+r to expand)

      ● Actually, looking at this more carefully, the logic should work. The matched_artists dictionary contains the matches found from ANY file that had a file_id3 tag matching a path component, and then it applies that match to ALL files in the file_infos list via the loop at line 737.

        Wait, let me understand the scoping issue better. Let me check what file_rel_path and scope_path would actually be in your example:
      

      ● Read(mp3_organizer_v2.py) ⎿ Read 5 lines (ctrl+r to expand)

      ● Ah! I think I see a potential issue. The file_rel_path is calculated as the directory path, not the individual file path. So for all files in the same directory, file_rel_path will be the same. This should actually work correctly.

        But let me double-check the scope matching logic. If the directory is "Jordan Rudess" and the scope_path is also "Jordan Rudess", then _path_is_in_scope should return
        True.
      
        Let me debug this by adding some debug output to see what's actually happening. But first, let me look at a potential issue: maybe the match isn't being found in the first place?
      

      It's doing a pretty complex chain of step-by-step reasoning here, looking in detail at the code. It ended up printing out more debug info and having me re-run it, paste in the debug output, add even more debug info, and run it again. But it did eventually find the bug, which was non-trivial to identify (it was an aliasing problem: Claude was mutating state on a list that was shared with all the other files in the directory).
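
      For anyone curious, here's a boiled-down hypothetical of that class of bug -- not the actual organizer code, just the shape of the mistake:

        # Hypothetical reduction of the aliasing bug: every file dict
        # references the SAME candidates list, so mutating it for one
        # file silently mutates it for every file in the directory.
        candidates = ["Jordan Rudess"]
        files = [{"name": f"track{i}.mp3", "artists": candidates} for i in range(3)]

        files[0]["artists"].append("Dream Theater")  # meant to touch track0 only
        print(files[1]["artists"])  # ['Jordan Rudess', 'Dream Theater'] -- aliased!

        # The fix is a per-file copy, e.g. "artists": list(candidates)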

      1 reply →

  • > For me, LLMs are just a computer interface you can program using natural language.

    Sort of. You still can't get reliable output for the same input. For example, I was toying with using ChatGPT with some Siri shortcuts on my iPhone. I do photography on the side, and finding a good time of day for photoshoot lighting is a use case I hit a lot, so I made a shortcut which sends my location to the API along with a prompt asking for today's sunset time, total amount of daylight, and golden hour times.

    Sometimes it works; sometimes it says "I don't have specific golden hour times, but you can find those on the web", or gives a useless generic "Golden hour is typically 1 hour before sunset but can vary with location and season".

    Doesn't feel like programming to me, as I can't get reproducible output.

    I could just use the LLM to write a script that calls an API from some service that has that data, but then why bother with the middleman step?

    I like LLMs, I think they are useful, I use them everyday but what I want is a way to get consistent, reproducible output for any given input/prompt.

    • For things where I don't want creativity, I tell it to write a script.

      For example, "write a comprehensive spec for a script that takes in the date and a location and computes when golden hour is." | "Implement this spec"

      That variability is nice when you want some creativity, e.g. "write a beautiful, interactive boids simulation as a single file in HTML, CSS, and JavaScript."

      Words like "beautiful" and "interactive" are open to interpretation, and I've been happy with the different ways they are interpreted.

  • > I think I’m slightly ADD. I love coding _interesting_ things but boring tasks cause extreme discomfort.

    > Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!

    I agree, and I feel that having LLMs do boilerplate-type stuff is fantastic for ADD people. The dopamine hit you get making tremendous progress before you get utterly bored is nice. The thing that ADD/ADHD people are the WORST at is finishing projects. LLMs will help them once the thrill of prototyping a green-field project is over.

    • Seconding this. My work has had the same problem - by the time I've got things all hooked up, figured out the complicated stuff - my brain (and body) clock out and I have to drag myself through hell to get to 100%. Even with ADHD stimulant medication. It didn't make it emotionally easier, just _possible_ lol.

      LLMs, particularly Claude 4 and now GPT-5, are fantastic at working through these todo lists of tiny details. Perfectionism + ADHD is not a fun combo, but it's way more bearable now. It will only get better.

      We have a huge moat in front of us of ever-more interesting tasks as LLMs race to pick up the pieces. I've never been more excited about the future of tech.

      1 reply →

    • I'm kind of in this cohort. While in the groove, yeah, things fly, but, inevitably, my interest wanes. Either something gets too tedious or too hard (or is just a lot of work), or something shinier shows up.

      Bunch of 80% projects with, as you mentioned, the interesting parts finished (sorta -- you see the light at the end of the tunnel, it's bright, you just don't bother finishing the journey).

      However, at the same time, there's conflict.

      Consider (one of) my current projects: I did the whole back end. I had ChatGPT help me stand up a web front end for it. I am not a "web person". GUIs and whatnot are a REAL struggle for me, because on the one hand I don't care how things look, but on the other, "boy, that sure looks better". But getting from "functional" to "looks better" is a bottomless chasm of yak-shaving, bike-shedding improvements. I'm even bad at copying styles.

      My initial UI was time invested in getting things to work, ugly as it was, with guidance from ChatGPT. Which means it gave me ways to do things, but mostly I coded up the actual work -- even if that was blindly typing it in vs. just raw cut and paste. I understood how things were working, what it was doing, etc.

      But then, I just got tired of it, and "this needs to be Better". So, I grabbed Claude and let it have its way.

      And, it's better! It certainly looks better, has more features. It's head and shoulders better.

      Claude wrote 2,000-3,000 lines of JavaScript. In, like, 45 minutes. It was very fast, very responsive. One thing Claude knows is boilerplate JS web stuff. And the code looks OK to me. Imperfect, but absolutely functional.

      But, I have zero investment in the code. No "ownership", certainly no pride. You know that little hit you get when you get Something Right, and it Works? None of that. It's amazing, it's useful, it's just not mine. And that's really weird.

      I've been striving to finish projects, and, yeah, for me, that's really hard. There is just SO MUCH necessary to ship. AI may be able to help polish stuff up; we'll see as I move forward. If nothing else, it may help with gathering up lists of stuff I've missed.

    • Ironically, I find greenfield projects the least stimulating and the most rote, aside from thinking about system design.

      I've always much preferred figuring out how to improve or build on existing messy systems and codebases, which is certainly aided by LLMs for big refactoring type stuff, but to be successful at it requires thinking about how some component of a system is already used and the complexity of that. Lots of edge cases and nuances, people problems, relative conservativeness.

    • Looks like the definition of boilerplate will continue to shift up the chain

  • I find Claude great at all of the boilerplate needed to get testing in place. It's also pretty good at divining test cases to lock in the current behavior, even if it's buggy (see the sketch below). I use Claude as a first pass on tests, then I run through each test case myself to make sure it's a meaningful test. I've let it loose on the code-coverage loop as well, so it can drill in and get those uncommon lines covered. I still don't have a good process for path coverage, but I'm not sure how easy that is in Go, as I haven't looked into it much yet.

    I'm with you 100% on the boring stuff. It's generally good at the boring stuff *because* it's boring and well-trod.
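
    The "lock in current behavior" part might look something like this, sketched with pytest rather than Go's testing package (slugify and its quirk are made up for illustration):

      # Characterization test: pin down what the code does TODAY, bugs
      # included, so later refactors can't change behavior unnoticed.
      import pytest

      from myproject.text import slugify  # hypothetical module under test

      @pytest.mark.parametrize("raw, expected", [
          ("Hello World", "hello-world"),
          ("  spaced  ", "spaced"),
          # Locks in a known quirk: consecutive dashes are NOT collapsed.
          ("a--b", "a--b"),
      ])
      def test_slugify_current_behavior(raw, expected):
          assert slugify(raw) == expected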

  • It's interesting that every task in the world is boring to somebody, which means eventually nothing will be done by the people who find it interesting, because somebody who finds it boring will gladly shotgun it with an AI tool.

  • > For me, LLMs are just a computer interface you can program using natural language. ... boring tasks cause extreme discomfort ... Now - I can offload the most boring task to LLM and spend my mental energy on the interesting stuff!

    The problem with this perspective is that when you try to offload exactly the same boring task(s) to exactly the same LLM, the results you get back are never even close to being the same. This work you're offloading via natural-language prompting is not programming in any meaningful sense.
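
    You can see this for yourself by sending the same prompt twice and comparing the replies (the model name here is illustrative; even temperature=0 doesn't guarantee identical output):

      # Two identical requests, sometimes two different answers.
      from openai import OpenAI

      client = OpenAI()

      def ask(prompt: str) -> str:
          resp = client.chat.completions.create(
              model="gpt-4o",  # illustrative choice of model
              messages=[{"role": "user", "content": prompt}],
              temperature=0,
          )
          return resp.choices[0].message.content

      a = ask("Write a Python function that merges two sorted lists.")
      b = ask("Write a Python function that merges two sorted lists.")
      print("identical" if a == b else "different output for the same input")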

    Many people don't care about this non-determinism. Some (group 1) because they don't have enough knowledge to identify, much less evaluate, the consequent problems. Others (group 2) because they're happy to deal with those problems, believing they're a cost worth the net benefit the LLM provides.

    And there are also many people who do care about this non-determinism, and aren't willing to accept the consequent problems.

    Bluntly, I don't think that anyone in group 1 can call themselves a software engineer.

> Agents are a boon for extraverts and neurotypical people.

This sounds like a wild generalization.

I am in neither of those two groups, and I’ve been finding tools like Claude Code becoming increasingly more useful over time.

Made me much more optimistic about the direction of AI development in general too. Because with each iteration and new version it isn’t getting anywhere closer to replacing me or my colleagues, but it is becoming more and more useful and helpful to my workflow.

And I am not one of those people who are into “prompt engineering” or typing novels into the AI chatbox. My entire interaction is typically 2-3 short sentences: “do this and that, make sure that XYZ is ABC”, attach the relevant files, let it do its thing, and then manual checks/adjustments. Saves me a boatload of work tbh, as I enjoy the debugging/fixing/“getting the nuanced details right” aspects of writing code (and am pretty decent at it, I think), but absolutely dread starting from a brand-new empty file.

> I can’t get into using AI tools like Claude Code. The furthest I go is chat style, where I’m mostly in control.

Try aider.chat (it's in the name), but specifically start with "ask" mode, then dip a toe into "architect" mode, not "code" mode, which is where Claude Code and the "vibe" nonsense live.

Let aider.chat use Opus 4.1 or GPT-5 for thinking, with no limit on reasoning tokens and --reasoning-effort high.

> agents are a boon for extraverts and neurotypical people.

On the contrary, I think the non-vibe tools are force multipliers for those with an ability to communicate so precisely that they find “extraverts and neurotypical people” confounding when attempting to specify engineering work.

I'd put both aider.chat and Claude Code in the non-vibe class if you use them Socratically.

  • thanks for this, going to try it out - I need to use a paid API and not my Claude Max or GPT Pro subscription, right?

> Agents are a boon for extraverts and neurotypical people.

Please stop with this kind of thing. It isn't true, it doesn't make sense and it doesn't help anyone.

I bet your code sucks in quality and quantity compared to the senior+ engineer who uses the modern tools. My code certainly did, even after 20 years of experience, much of that at senior/staff level at well-paying companies.

For what it’s worth, I’m neurodivergent, introverted, and have avoided management up to the staff+ level. Claude Code is great; I use it all day, every day now.

As an introvert, I have found great value in these tools. Normally, I kind of talk to myself about a problem / algorithm / code segment as I'm fleshing it out. I'm not telling myself complete sentences, but there's some sort of logical dialog I am having with myself.

So I just have to convert that conversation into an AI prompt, basically. It just kind of does the typing for the construct already in my head. The trick is to just get the words out of my head as prompt input.

That's honestly not much different than an author writing a book, for example. The storyline is in their head; they just have to get it on paper. And that's really the tricky part of writing a novel, as much as of writing code.

I therefore don't believe this is an introvert/extrovert thing. There are plenty of book authors who are both. The tools available as AI code agents are really just an advanced form of dictation.

I kind of think we will see some industry attrition as a result of LLM coding and agent usage, simply because the ~vIbEs~ I'm witnessing boil down to quite a lot of resistance (for multiple reasons: stubbornness, ethics, exhaustion from the hype cycle, sticking with what you know, etc.).

The thing is, they're just tools. You can choose to learn them, or not. They aren't going to make or break your career. People will do fine with and without them.

I do think it's worth learning new tools though, even if you're just a casual observer / conscientious objector -- the world is changing fast, for better or worse, and you'll be better prepared to do anything with a wider breadth of tech skill and experience than with less. And I'm not just talking about writing software for a living, you could go full Uncle Ted and be a farmer or a carpenter or a barista in the middle of nowhere, and you're going to be way better equipped to deal with logistical issues that WILL arise from the very nature of the planet hurtling towards 100% computerization. Inventory management, crop planning, point of sale, marketing, monitoring sensors on your brewery vats, whatever.

Another thought I had was that introverts often blame their deficits in sales, marketing, and customer service on their introversion, but what if you could deploy an agent to either guide, perform, or prompt (the human) through some of those activities? I'd argue that it would be worth the time to kick the tires and see what's possible there.

It feels like early times still with some of these pie in the sky ideas, but just because it's not turn-key YET doesn't mean it won't be in the near future. Just food for thought!

  • "ethics"

    I agree with all of your reasons, but this one sticks out. Is this a big issue? Are many people refusing to use LLMs due to (I'm guessing here) perceived copyright issues, power usage, or maybe a belief that automation is unjust?

    • I can't tell how widespread any of it is, to be honest... mostly because it's anecdata, and it's impossible to determine if what I'm seeing is just ragebait, or shallow dunks by reply-guys in comment sections, or particularly-loud voices on social media that aren't representative of the majority opinion, etc.

      That said, the amount of sort-of-thoughtless, I'm-just-repeating-something-I-heard-but-don't-really-understand outrage towards AI that I'm seeing appears to be increasing -- "how many bottles of water did that slop image waste??", "Clanker"-adjacent memes and commentary (include self-driving + robots in this category), people ranting about broligarchs stealing art, music, movies, books to train their models (oddly often while also performatively parroting party lines about how Spotify rips artists off), all the way to refusing to interact with people on dating apps if they have anything AI in their profiles hahaha (file "AI" alongside men holding fish in their pics, and "crypto" lol)

      It's all chronically-online nonsense that may well just be perception that's artificially amplified by "the algorithm".

      Me, I have no fundamental issue with any of it -- LLMs, like anything else, aren't categorically good or bad. They can be used positively and negatively. Everything we use and consume has hidden downsides and unsavory circumstances.

    • Yes, people are refusing for those reasons. I don't know how many, but I'd say about half of the people I know who do not work in tech are rejecting AI, with ethics being the primary reason. That is all just anecdata, but I suspect the tech bubble around AI is making people in tech underestimate how many people in the world simply are not interested in it being part of their lives.

> Agents are a boon for extraverts and neurotypical people.

As an extrovert, the chances I'll use an AI agent in the next year are zero. Not even a billion to one, but a straight zero. I understand very well how AI works, and as such I have absolutely no trust in it for anything that isn't easy/simple/solved, which means I have virtually no use for generative AI. Search, reference, data transformation, sure. Coding? Not without verification or being able to understand the code.

I can't even trust Google Maps to give me a reliable route anymore, why would I actually believe some AI model can code? AI tools are helpers, not workers.

  • > no trust in it for anything that isn't easy/simple/solved

    I'm not sure what part of programming isn't generally solved thousands of times over for most languages out there. I'm only using it for lowly web development, but I can tell you that it can definitely do it at a level that surprises me. It's not just "auto-complete"; it's actually able to 'think' over code I've broken or code that I want improved, and give me not just one but multiple paths to make it better.

    • In the case of programming, the issue isn't so much unsolved problems as other things, like completeness. For programming, it's context and understanding. It's great for small chunks of code, but people think you can vibe code entire interactive applications with no programming knowledge. LLMs simply don't understand, so they can't keep a cohesive idea of the end goal in mind. The larger the codebase it needs to work on, the more likely it is to make catastrophic errors, create massive security flaws, or just generate nonfunctional code.

      Programming LLMs will become awesome when we create more narrowly targeted LLMs rather than these "train on everything" models.

At one point in my life I liked crafting code. I took a break, came back, and I no longer liked it--my thoughts ranged further, and the fine-grained details of implementations were a nuisance rather than ~pleasurable to deal with.

Whatever you like is probably what you should be doing right now. Nothing wrong with that.

I think they're fantastic at generating the sort of thing I don't like writing out. For example, a dictionary mapping state names to their abbreviations, or extracting a data dictionary from a pdf so that I can include it with my documentation.
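
For instance, the kind of rote mapping I mean (truncated for brevity):

    # The sort of boilerplate I'd much rather not type out by hand
    STATE_ABBREVIATIONS = {
        "Alabama": "AL",
        "Alaska": "AK",
        "Arizona": "AZ",
        "Arkansas": "AR",
        "California": "CA",
        # ... and 45 more entries an LLM will happily fill in
    }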

> Agents are a boon for extraverts and neurotypical people.

I completely disagree. Juggling several agents (and hopping from feature to feature) at once is perfect for somebody with ADHD. Being an agent wrangler is great for introverts, instead of having to talk to actual people.

I think you misunderstand what this does. It is not only a coding agent. It is an abstraction layer between you and the computer.

It is effin nutzo that you would try to relate chatting with AI and agentic LLM codegen workflows to the intro/extravert dichotomy or to neuro-a/typicality - you so casually lean way into this absolute spectrum, which I don’t even think associates the way you think it does, and it’s honestly kind of unsettling, like - what do you think you know about me, and about My People, that apparently I don’t know??

If it doesn’t work for you that’s fine, but turning it into some tribalised over-generalization is just… why, why would you do that, who is that kind of thing useful for??

Agents are a boon for introverts who fucking hate dealing with other people (read: me). I can iterate rapidly with another 'entity' in a technical fashion and not have to spend hours explaining in relatable language what to do next.

I feel as if you need to work with these things more, as you would prefer to work, and see just how good they are.

You are leaving a lot of productivity on the table by not parallelizing agents for any of your work. Seemingly for psychological comfort quirks rather than earnestly seeking results.

Automation productivity doesn’t remove your own agency. It frees more time for you to apply your desire for control more discerningly.

  • I can imagine there are plenty of use cases, but I could not find one for myself. Can you give an example?

Pretty sure we can make LLM agents that transform declarative inputs into agentic action.

> Agents are a boon for extraverts and neurotypical people

As a neurodivergent introvert, please don't speak for the rest of us.

  • That stuck out to me as well. People will make up all sorts of stories to justify their resistance to change.

    • It's the same as saying that writing good commit messages is a boon for extroverts and neurotypicals. It's a computer. You're giving it instructions, and the only difference from traditional coding is that the input is English text.