Comment by nprateem

2 days ago

It's not that hard. Just ask them to explain the code. Then ask them how they'd change it for several different scenarios.

I've taken this approach and found it trivially easy to distinguish people relying on LLMs from people who have thought the problem through and can explain their own decision-making.

I had a couple of people who, when asked to explain specific approaches reflected in their code, very obviously typed my question right back into ChatGPT and then recited its output verbatim. Those interviews came to an end rather quickly.

One of my favorites was when I asked a candidate to estimate the complexity of their solution, and ChatGPT got it wrong, giving O(log(n)) for an O(n) algorithm. When I asked leading questions to see if the candidate could spot where the error came in, they started reciting a dictionary definition of computational complexity verbatim, and could not address the specifics of the problem at all.
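For anyone who hasn't seen this failure mode: here's a hypothetical sketch (not the actual interview problem) of the kind of code where surface pattern-matching produces exactly that mistake. The halving loop looks like binary search, so a shallow read says O(log(n)), but the linear scan inside each iteration brings the total work to n/2 + n/4 + n/8 + ... = O(n).

    # Hypothetical example, not the interview problem: a loop that *looks*
    # logarithmic but does linear total work.
    def find_first_true(flags):
        """Return the index of the first True in a list of bools, or -1."""
        lo, hi = 0, len(flags)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            # The halving means O(log(n)) iterations -- true -- but this
            # scan of the left half costs O(hi - lo) work per iteration.
            if any(flags[lo:mid]):
                hi = mid
            else:
                lo = mid
        return lo if lo < len(flags) and flags[lo] else -1

    # Total scanning work: n/2 + n/4 + n/8 + ... = O(n), not O(log(n)).

A candidate who actually wrote and reasoned about code like this can walk you through that sum. One who pasted it from a chatbot can only recite the definition of big-O.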