
Comment by simonw

3 days ago

A few of my favourites:

- Software Sprawl, The Golden Path, and Scaling Teams With Agency: https://charity.wtf/2018/12/02/software-sprawl-the-golden-pa... - introduces the idea of the "golden path", where you tell engineers at your company that if they use the approved stack of e.g. PostgreSQL + Django + Redis then the ops team will support that for them, but if they want to go off path and use something like MongoDB they can do that but they'll be on the hook for ops themselves.

- Generative AI is not going to build your engineering team for you: https://stackoverflow.blog/2024/12/31/generative-ai-is-not-g... - why generative AI doesn't mean you should stop hiring junior programmers.

- I test in prod: https://increment.com/testing/i-test-in-production/ - on how modern distributed systems WILL have errors that only show up in production, which is why you need great instrumentation in place. "No pull request should ever be accepted unless the engineer can answer the question, “How will I know if this breaks?”"

- Advice for Engineering Managers Who Want to Climb the Ladder: https://charity.wtf/2022/06/13/advice-for-engineering-manage...

- The Engineer/Manager Pendulum: https://charity.wtf/2017/05/11/the-engineer-manager-pendulum... - I LOVE this one, it's about how it's OK to have a career where you swing back and forth between engineering management and being an "IC".

The one on generative AI seems a bit outdated. It was written before Claude Code was released.

  • Most of that one still rings very true to me. I particularly liked this section:

    > Let’s start here: hiring engineers is not a process of “picking the best person for the job”. Hiring engineers is about composing teams. The smallest unit of software ownership is not the individual, it’s the team. Only teams can own, build, and maintain a corpus of software. It is inherently a collaborative, cooperative activity.

    • I totally agree with this part.

      Right now, we are in a transitional phase, where parts of a team might reject the notion of using AI, while others might be using it wisely, and still others might be auto-creating PRs without checking the output. These misalignments are a big problem in my view, and it’s hard for anybody involved to know during hiring what a candidate’s stance really is, because the latter group is often not honest about it.