Comment by harrall

3 days ago

People get into this field for very different reasons.

- People who like the act and craftsmanship of coding itself. AI can encourage slop from other engineers and it trivializes the work. AI is a negative.

- People who like general engineering. AI is positive for reducing the amount of (mundane) code to write, but still requires significant high-level architectural guidance. It’s a tool.

- People who like product. AI can be useful for prototyping but won’t be able to make a good product on its own. It’s a tool.

- People who just want to build an MVP. AI is honestly amazing at making something that at least works. It might be bad code but you are testing product fit. Koolaid mode.

That’s why everyone has a totally different viewpoint.

> - People who like the act and craftsmanship of coding itself. AI can encourage slop from other engineers and it trivializes the work. AI is a negative.

Those who value craftsmanship would value LLMs, since they can pick up new languages or frameworks much faster. They can then master the newly acquired skills on their own if preferred, or they can use an LLM to help along the way.

> People who like product. AI can be useful for prototyping but won’t be able to make a good product on its own. It’s a tool.

Any serious product often comprises multiple modules, layers, and interfaces. An LLM can help greatly with building some of those building blocks. Definitely a useful tool for product building.

  • > Those who value craftsmanship would value LLMs, since they can pick up new languages or frameworks much faster. They can then master the newly acquired skills on their own if preferred, or they can use an LLM to help along the way.

    That's like saying those who value books would value movie adaptations because they can pick up new stories much faster.

    Is it really so alien to you that someone might prefer learning a new language or framework by, gasp, reading its documentation?

Real subtle. Why not just write "there are good programmers and bad programmers and AI is good for bad programmers and only bad programmers"? Think about what you just said about Mitchell Hashimoto here.

  • I'm not sure that's a fair take.

    I think it's fair to say that LLM-generated code typically is not very good - you can work with it and set up enough guard rails and guidance and whatnot that it can start to produce decent code, but out of the box, speed is definitely the selling point. They're basically junior interns.

    If you consider an engineer's job to be writing code, sure, you could read OP's post as a shot, but I tend to switch between the personas they're listing pretty regularly in my job, and I think the read's about right.

    To the OP's point, if the thing you like doing is actually crafting and writing the code, the LLMs have substantially less value - they're doing the thing you like doing and they're not putting the care into it you normally would. It's like giving a painter an inkjet printer - sure, it's faster, but that's not really the point here. Typically, when building the part of the system that's doing the heavy lifting, I'm writing that myself. That's where the dragons live, that's what's gotta be right, and it's usually not worth the effort to incorporate the LLMs.

    If you're trying to build something that will provide long-term value to other people, the LLMs can reduce some of the boilerplate stuff (convert this spec into a struct, create matching endpoints for these other four objects, etc) - the "I build one, it builds the rest" model tends to actually work pretty well and can be a real force multiplier (alternatively, you can wind up in a state where the LLM has absolutely no idea what you're doing and its proposals are totally unhinged, or worse, where it's introducing bugs because it doesn't quite understand which objects are which).

    If you've got your product manager hat on, being able to quickly prototype designs and interactions can make a huge, huge difference in what kind of feedback you get from your users - "hey try this out and let me know what you think" as opposed to "would you use this imaginary thing if I built it?" The point is to poke at the toy, not build something durable.

    Same with the MVP/technical prototyping - usually the question you're trying to answer is "would this work at all", and letting the LLM crap out the shittiest version of the thing that could possibly work is often sufficient to find out.

    The thing is, I think these are all things good engineers _do_. We're not always painting the Sistine Chapel, we also have to build the rest of the building, run the plumbing, design the thing, and try to get buy-in from the relevant parties. LLMs are a tool like any other - they're not the one you pull out when you're painting Adam, but an awful lot of our work doesn't need to be done to that standard.

    • First, I agree with tptacek that Ghostty is a work of craftsmanship. Mitchell is a very talented dev and he says he greatly benefits from using AI.

      On the other hand I understand your point that some people got into coding because of coding and they like doing that manually. Unfortunately, we're not being paid to do what we like, but to solve problems with code. What we like is usually a hobby. Software engineering had a golden run for 20-30 years where we were paid well to do things we enjoyed doing, but unfortunately that might change. As an analogy, think about woodworking: there's craftsmanship in a nice wood table, but at the end of the day I won't pay thousands of dollars for one, when a couple of hundred dollars will buy me a good enough one from IKEA (maybe you're not like that, but the general population is).

      1 reply →

    • I can't get past the framing that "people who like the act and craftsmanship" feel AI is negative, which implicitly defines whatever Mitchell Hashimoto is doing as not craftsmanship, which: ghostty is pure craftsmanship (the only reason anyone would spend months writing a new terminal).

      No, I think my response was fair, if worded sharply. I stand by it.

      3 replies →

    • They're far more than junior interns. I've had a long-languishing project to build an ahead-of-time Ruby compiler. It started as a toy, and a blog series, and I then mostly put it on ice about a decade ago, except for very occasional little rounds of hacking. It self-hosts, but is very limited.

      A week or so ago, I gave Claude a task of making it compile rubyspecs. I then asked it to keep making specs pass. I do need to babysit it, but it's doing debugging and work no junior I've ever worked with could be trusted to do. It knows how to work with gdb, and trace x86 assembler. It understands how to read the code of a complex compiler, and modify code generation and parsers that even I - who wrote it in the first place - sometimes find challenging.

      It's currently (as I'm writing this) working its way through adding bignum support. Which in Ruby is tricky because Ruby no longer splits integers into two classes - the code needs to handle tagged integers that get auto-promoted to heap-allocated objects, which to the user have the same class. I spent the morning swearing at it, but then reset with a clearer description and it produced an extensive plan, and started working through it.
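
      For context on why that's tricky: since Ruby 2.4, the old Fixnum and Bignum classes were folded into a single Integer class, so the internal representation (tagged immediate vs. heap-allocated bignum) is invisible to user code. A minimal sketch of the behavior a compiler has to preserve:

      ```ruby
      # Small values fit in a tagged (immediate) integer internally;
      # large values are heap-allocated bignums. Ruby (>= 2.4) exposes
      # both through the single Integer class.
      small = 42
      big   = 2 ** 100   # far too large for a machine word

      puts small.class   # Integer
      puts big.class     # Integer

      # Arithmetic auto-promotes across the boundary transparently:
      promoted = (2 ** 62) * 4
      puts promoted.class  # Integer
      ```

      So a compiled implementation can't dispatch on two classes; it has to check the representation at runtime while presenting one class to the user.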

      I'll agree it's not great code without a lot of coaxing, but it's doing stuff that even a lot of senior, highly experienced developers would struggle with.

      I will agree it needs oversight, and someone experienced guiding it, like a junior developer would, but if I had junior developers producing things this complex, I'd lock them in a basement and never let them go (okay, maybe not).

      One of the hardest things, I find, where I will agree it smells of junior developer sometimes, is that it's impatient (once it even said "this is getting tedious") and skips ahead instead of carefully testing assumptions and building up a solution step by step, if you don't tell it very clearly how to work.

      I don't think we disagree that much, btw., I just wanted to describe my recent experience with it - it's been amazing to see, and is changing how I work with LLMs, in terms of giving it plenty of scratchpads and focusing on guiding how it works, making it create an ambitious plan to work to, and getting out of its way more, instead of continuously giving it small tasks and obsessing over reviewing intermediate work product.

      What I'm seeing often with this approach is that whenever I see something that annoys me scroll past, it's often fixed before I've even had a chance to tell it off for doing something stupid.