Comment by kace91

4 days ago

>In the last thirty days, I landed 259 PRs -- 497 commits, 40k lines added, 38k lines removed.

Is anyone with or without AI approaching anywhere near that speed of delivery?

I don’t think my whole company matches that amount. It sounds super unreasonable, just doing a sanity check.

40K - 38K means 2K lines of actual code.

Which could mean that code was refactored and then built on top of. Or it could just mean that Claude had to correct itself multiple times over those 497 commits.

Does correcting your mistakes from yesterday’s ChatGPT binge episode count as progress…maybe?

  • If it doesn't revert the corrections, maybe it is progress?

    I can easily imagine constant churn in the code because it switches between five different implementations when run five times, going back to the first one on the sixth time and repeating the process.

    I gotta ask, though, why exactly is that much code needed for what CC does?

    It's a specialised wrapper.

    • How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use? There's probably a bit more to it than just:

          #!/usr/bin/env bash
          while true; do
            printf "> "
            read -r USER_INPUT || exit 0
            # build the JSON body with jq so quotes/newlines in the
            # input don't break the request
            BODY=$(jq -n --arg content "$USER_INPUT" \
              '{model: "gpt-5.2", messages: [{role: "user", content: $content}]}')
            RESPONSE=$(curl -s https://api.openai.com/v1/chat/completions \
              -H "Authorization: Bearer $OPENAI_API_KEY" \
              -H "Content-Type: application/json" \
              -d "$BODY")
            echo "$RESPONSE" | jq -r '.choices[0].message.content'
          done

      1 reply →

AI approaches can churn code more than a human would.

Lines of code have always been a questionable metric of velocity, and AI makes that more true than ever.

  • Even discounting lines of code:

    - get a feature request/bug

    - understand the problem

    - think on a solution

    - deliver the solution

    - test

    - submit to code review, including sufficient explanation, and merge when ready

    260 PRs a month means the cycle above is happening roughly once per hour, at a constant pace, across 60-hour work weeks.
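
    A quick sanity check on that arithmetic (assuming a 30-day month of 60-hour weeks):

    ```shell
    # 60 h/week over a 30-day month is about 257 working hours;
    # 260 PRs spread over those hours is roughly one per hour.
    echo "scale=2; 260 / (60 * 30 / 7)" | bc -l   # prints 1.01
    ```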

    • The premise of the steps you've listed is flawed in two ways.

      The first is that this is more what agentic-assisted dev looks like:

      1. Get a feature request / bug

      2. Enrich the request / bug description with additional details

      3. Send AI agents to handle request

      4a. In some situations, manually QA results, possibly return to 2.

      4b. Otherwise, agents will babysit the code through merge.

      The second is that the above steps are performed in parallel across X worktrees. So the stats are based on that cycle completing a handful of times per hour, in some cases completely unassisted.

      ---

      With enough automation, the engineer is only dealing with steps 2 and 4a. You get notified when you are needed, so your attention can focus on finding the next todo or enriching a current todo as per step 2.
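
      The parallel part is plain `git worktree`; a minimal sketch, with hypothetical task ids, run in a throwaway repo:

      ```shell
      # sandbox repo just to demonstrate the layout (task ids are made up)
      cd "$(mktemp -d)" && git init -q repo && cd repo
      git -c user.name=me -c user.email=me@example.com \
        commit -q --allow-empty -m init
      # one worktree per task, so each agent gets an isolated checkout
      git worktree add ../task-101 -b agent/task-101
      git worktree add ../task-102 -b agent/task-102
      git worktree list   # main checkout plus the two task worktrees
      ```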

      ---

      Babysitting the code through merge means it handles review comments and CI failures automatically.

      ---

      I find communication / consensus with stakeholders, and retooling take the most time.

    • One can think of a lot of obvious improvements to an MVP product that don't require much in the way of "get a feature request/bug - understand the problem - think on a solution".

      You know the features you'd like to have in advance, or changes you want to make you can see as you build it.

      And a lot of the "deliver the solution - test - submit to code review, including sufficient explanation" can be handled by AI.

  • I'd love to see Claude Code remove more lines than it added TBH.

    There's a ton of cruft in code that humans are less inclined to remove because it just works, but imagine having an LLM do the clean-up work instead of the generation work.

Is it possible for humans to review that amount of code?

My understanding of the current state of AI in software engineering is that humans are allowed (and encouraged) to use LLMs to write code. BUT the person opening a PR must read and understand that code. And the code must be read and reviewed by other humans before being approved.

I could easily generate that amount of code and have it write and pass tests. But I don't think I could get it all reviewed by the rest of my team while also reviewing code written by others on my team at that pace.

Perhaps they just aren't having humans review the code? Then it seems feasible to me. But that would go against all of the rules I have personally encountered at my companies, and that peers have told me they have at theirs.

  • >BUT the person opening a PR must read and understand that code.

    The AI evangelists at my work who say this the loudest are also the ones shipping the most "did anyone actually look at this code?" bugs.

    • It's very easy to not read the code, just like it's very easy to click "approve" on requests that the agent/LLM makes to run terminal commands.

  • I'm appalled this isn't talked about more. Understanding code, let alone code written by others, is where the real complexity lies. I fail to see how more code written by some dumbass AI that gets things wrong half the time is going to make the job less draining for me. I can only conclude that half the devs of the world, or more, don't really do code reviews, or just rubber-stamp crap.

Recently came across a project on the HN front page that was developed in a public repo on GitHub: https://github.com/steveyegge/gastown/graphs/contributors shows 2,000 commits over 20 days, +497K/-360K lines.

I'm not affiliated with Claude or the project linked.
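
Totals like that are reproducible from a clone with nothing but git and awk; run inside the checkout, something like:

```shell
# sum lines added/removed across the whole history
# (numstat prints "added removed path"; "-" marks binary files)
git log --numstat --pretty=tformat: |
  awk '$1 != "-" { add += $1; rem += $2 } END { printf "+%d/-%d\n", add, rem }'
```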

Read that as "speed of lines of code", which is very VERY very different from "speed of delivery."

Lines of code never correlated with quality or even progress, and now they correlate even less.

I've been working a lot more with coding agents, but my convictions around the core principles of software development have not changed. Just the iteration speed of certain parts of the process.

Check out Steve Yegge's pace with Beads and Gastown - well in excess of that.

You're counting wheel revolutions, not miles travelled. Not an accurate proxy measurement unless you can verify the wheels are on the road for the entire duration.

  ratatui_ruby % git remote -v
  origin https://git.sr.ht/~kerrick/ratatui_ruby (fetch)
  origin https://git.sr.ht/~kerrick/ratatui_ruby (push)
  
  ratatui_ruby % git checkout v0.8.0
  HEAD is now at dd3407a chore: release v0.8.0
  
  ratatui_ruby % git log --reverse --format="%ci" | head -1 | read first; \
  echo "First Commit: $first\nHEAD Commit:  $(git show -s --format='%ci' HEAD --)" 
  First Commit: 2025-12-22 00:40:22 -0600
  HEAD Commit:  2026-01-05 08:57:58 -0600
  
  ratatui_ruby % git log --numstat --pretty=tformat: | \
  awk '$1 != "-" { \
      if ($3 ~ /\./) { ext=$3; sub(/.*\./, "", ext) } else { ext="(no-ext)" } \
      if (ext ~ /^(txt|ansi|lock)$/) next; \
      add[ext]+=$1; rem[ext]+=$2 \
  } \
  END { for (e in add) print e, add[e], rem[e] }' | \
  sort -k2 -nr | \
  awk 'BEGIN { \
      print "---------------------------------------"; \
      printf "%-12s %12s %12s\n", "EXT", "ADDED", "REMOVED"; \
      print "---------------------------------------" \
  } \
  { \
      sum_a += $2; sum_r += $3; \
      printf "%-12s %12d %12d\n", $1, $2, $3 \
  } \
  END { \
      print "---------------------------------------"; \
      printf "%-12s %12d %12d\n", "SUM:", sum_a, sum_r; \
      print "---------------------------------------" \
  }'
  ---------------------------------------
  EXT                 ADDED      REMOVED
  ---------------------------------------
  rb                  51705        18913
  md                  20037        13167
  rs                   8576         3001
  (no-ext)             4072         2157
  rbs                  2139          569
  rake                 1632          317
  yml                  1431          153
  patch                 894          894
  erb                   300           30
  toml                  118           39
  gemspec                62           10
  gitignore              27            4
  css                    22            0
  yaml                   18            2
  ruby-version            1            1
  png                     0            0
  gitkeep                 0            0
  ---------------------------------------
  SUM:                91034        39257
  ---------------------------------------

  
  ratatui_ruby % cloc .
       888 text files.
       584 unique files.                                          
       341 files ignored.
  
  github.com/AlDanial/cloc v 2.06  T=0.26 s (2226.1 files/s, 209779.6 lines/s)
  --------------------------------------------------------------------------------
  Language                      files          blank        comment           code
  --------------------------------------------------------------------------------
  Ruby                            305           4792          10413          20458
  Markdown                         60           1989            256           4741
  Rust                             32            645            530           4400
  Text                            168            523              0           4358
  YAML                              8            316             17            961
  ERB                               3             20              4            246
  Bourne Again Shell                2             24             90            150
  TOML                              5             16             10             53
  CSS                               1              3              8             11
  --------------------------------------------------------------------------------
  SUM:                            584           8328          11328          35378
  --------------------------------------------------------------------------------