Comment by JohnKemeny

6 days ago

I once graded over 100 exams in an introductory programming course (Python). The main exercise was to implement a simple game, written without access to a runtime.

Some answers were trivial to grade—either obviously correct or clearly wrong. The rest were painful and exhausting to evaluate.

Checking whether the code was correct and tracing it step by step in my head was so draining that I swore never to grade programming again.

Right, sure. So: this doesn't generally happen with LLM outputs, but if it does, you simply kill the PR. A lot of people seem to be hung up on the idea that LLM agents don't have a 100% hit rate, let alone a 100% one-shot hit rate. A huge part of the idea is that it does not matter whether an agent's output is immediately workable. Just take the PRs where the code is straightforwardly reviewable.

  • But your reply was to "reviewing code is easily 10x harder than writing it". Of course that's not true if you just kill all PRs that are difficult to review.

    Sometimes, code is hard to review. It's not very helpful if the reviewer just kills it because it's hard.

    • > It's not very helpful if the reviewer just kills it because it's hard.

I am absolutely still an AI skeptic, but like: we do this at work. If a dev has produced some absolutely nonsense, overcomplicated, impossible-to-understand PR, it gets rejected and sent back to the drawing board (and then I work with them to find out what happened, because that's a leadership failure more than a developer one IMO).

    • I understand everything about this comment except the words "but" and "it's not very helpful if".

  • You're massively burying the lede here with your statement of 'just take the PRs where the code is straightforwardly reviewable'. It's honestly such an odious statement that it makes me doubt your expertise in reviewing code and PRs.

A lot of code cannot and will not be straightforwardly reviewable, because it all depends on context. Using an LLM adds an additional layer of abstraction between you and the context, because now you have to untangle whether or not it actually satisfied the context you gave it.

I have no idea what you mean. Tptacek is correct. An LLM does not add an additional layer, because at the end of the day code is code. You read it and you can tell whether it does what you want, because you were the person who gave the instructions. It is no different from reviewing code written by a junior (who also does not add an additional layer of abstraction).
