
Comment by osigurdson

3 days ago

Concretely, I was creating a CLI application. I had implemented a few commands end-to-end and established some solid patterns. I used Codex (i.e. the PR-creating flavor): I gave it instructions, had it review the existing patterns before continuing, and asked it to rigorously follow them. I had about 10 more things to do and it worked really well. It was easy for me to review and understand because I already knew the pattern and it seemed easy for it to get right.
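To make "established pattern" concrete, here is a hypothetical sketch of the kind of repeatable CLI subcommand structure being described, using Python's argparse. The project, command names, and handlers are my own illustration and are not taken from the comment:

```python
import argparse


def cmd_export(args: argparse.Namespace) -> int:
    # Hypothetical command body: validate input, do the work, report.
    print(f"exporting {args.target}")
    return 0


def cmd_status(args: argparse.Namespace) -> int:
    print("status: ok")
    return 0


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="mycli")
    sub = parser.add_subparsers(dest="command", required=True)

    # Each new command repeats the same three steps: add a subparser,
    # declare its flags, and wire it to a handler via set_defaults.
    p_export = sub.add_parser("export", help="export a target")
    p_export.add_argument("target")
    p_export.set_defaults(func=cmd_export)

    p_status = sub.add_parser("status", help="show status")
    p_status.set_defaults(func=cmd_status)

    return parser


def main() -> int:
    args = build_parser().parse_args()
    return args.func(args)


if __name__ == "__main__":
    raise SystemExit(main())
```

Once two or three commands exist in this shape, adding the next ten is mostly mechanical repetition, which is exactly the situation where handing the pattern to the model and asking it to follow it rigorously pays off.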

It worked so well that I am always looking for opportunities like this, but honestly, it isn't that common. Many times you aren't creating a pattern and repeating it; you are creating a new pattern. In those situations, chatting with the AI to get ideas and come up with an approach seems more effective to me.

> It was easy for me to review and understand because I already knew the pattern and it seemed easy for it to get right.

I suggest that your already knowing the pattern actually makes it harder for you to review code that you expect to contain the pattern. You're likely to perceive it as being there whether it is or not. This strikes me as a way of using LLMs that is more dangerous than average.

Relatedly, proofreading your own work is much more error-prone than proofreading someone else's work, precisely because you have a mental model of your own work (created when you produced it) and you're likely to consult the mental model rather than the work.