Comment by lelanthran

2 days ago

If it doesn't revert the corrections, maybe it is progress?

I can easily imagine constant churn in the code because it switches between five different implementations when run five times, going back to the first one on the sixth run and repeating the process.

I gotta ask, though, why exactly is that much code needed for what CC does?

It's a specialised wrapper.

How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use? There's probably a bit more to it than just:

    #!/usr/bin/env bash
    # Minimal REPL: read a line, send it to the chat completions API, print the reply.
    while true; do
      printf "> "
      read -r USER_INPUT || exit 0
      # Build the JSON payload with jq so quotes and backslashes in the input are escaped.
      PAYLOAD=$(jq -n --arg content "$USER_INPUT" \
        '{model: "gpt-5.2", messages: [{role: "user", content: $content}]}')
      RESPONSE=$(curl -s https://api.openai.com/v1/chat/completions \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -H "Content-Type: application/json" \
        -d "$PAYLOAD")
      echo "$RESPONSE" | jq -r '.choices[0].message.content'
    done

  • > How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use?

    That's an awfully presumptuous tone to take :-)

    I'm not deciding "this is how many lines they're allowed"; I'm trying to get an idea of exactly what functionality CC provides that requires that volume of code.

    I mean, it's a high-level language being used, it's pulling in a lot of dependencies, etc. It literally is glue code.

    Bearing in mind that it appears to be (at this point anyway) purely vibe-coded, I am wondering just how much of the code is dead weight: generated by the LLM and never removed.