Comment by OutOfHere

19 days ago

Agentic coding doesn't make any sense for a job interview. To do it well requires a detailed specification prompt which can't reliably be written in an interview. It ideally also requires iterating upon the prompt to refine it before execution. You get out of it what you put into it.

As someone who codes agentically A LOT: detailed specs are not required, but they are certainly one way to use these systems.

If you are going to do a big build-out of something, write a spec up front, at least to have a clear idea of the application's architectural boundaries.

If you are adding features to a mature code base, then the general order of the day is: first have the AI scout all the code related to the thing you are changing. Then have it give you a summary of its general plan of action. Then fire it off and review the results (or watch it, though that's less needed now).

For smaller edits, or even significant features, I often just give it very short instructions of a few sentences. If I have done my job well, the code is fairly opinionated, the models pick up the patterns well, and I don't really have to give much guidance. I'll usually just ask for a few touch-ups, like introducing some fluent API niceties.

That being said, I do tend to make a few surgical requests of the AI when I review the PR, usually around abstraction seams.

(For my play projects I don't even look at the code any more unless I hit a wall, and I haven't really hit a wall since Opus 4.5, though I do have a material physics simulator that Opus 4.5 wrote that runs REALLY slow that I should muck around in, but I'm thinking of seeing if Opus 4.6 can move it to the GPU by itself first.)

So if I were running an interview with an interview question, I would probably do a "let's break down what we know", "what can we apply to this", "OK, let's start with x" sequence, then iterate quickly and look at the code to validate as needed.

  • There is a real danger here during an interview of unfairly imposing one's style on others. I think it's great to share one's approach, but making it the only approach can lead to stagnation and losing out on picking up ideas from alternatives.

In the UK the driving test requires a portion of driving using a satnav, the idea being that drivers are going to use satnavs, so it's important to test that they know how to use them safely.

The same goes for using Claude in a programming interview. If the interview environment is not representative of how people actually work, then the interview needs to be changed.

  • In the Before Times we used to do programming interviews with “you can use Google and stack overflow” for precisely this reason. We weren’t testing for encyclopaedic knowledge - we were testing to see if the candidate could solve a problem.

    But the hard part is designing the problem so that it exercises skill.

  • We don't solve LeetCode for a living yet it is asked in interviews anyway, so nah, we don't have to use AI in interviews.

    • You’ve just written the exact reason LeetCode is widely mocked as an interview technique. Such problems are not representative of most real-world software, and engineers who train to solve them give a false impression of their ability to solve most other problems.

      I’ve interviewed hundreds of engineers for software and hardware roles. A good coding test is based on self-contained problems that the team actually encountered while developing our product. Boil the problem down to its core, create a realistic setup that reflects the information the team had when they encountered the challenge, and then ask the candidate to think it through. It doesn’t matter if they only write notes or pseudocode, and it doesn’t matter if they reach the wrong conclusion. What it’s testing for is the thought process. The fact that the candidate has to ask the interviewer questions, as though the interviewer is effectively the IDE, is great! The interviewer experiences the engineer’s thought process first-hand, and can nudge the candidate in the correct direction by communicating answers that aren’t just typical IDE error messages.

      To validate these kinds of questions in advance, I’d often run them on existing team members that hadn’t already been exposed to the real challenge the problem was based on.

How about bug fixing? Give someone a repo with a tricky bug, ask them to figure it out with the help of their coding agent of choice.

  • It doesn't have to be a "tricky" bug. A straightforward bug will do. If it's too tricky, the logic might be better off being rewritten.

>which can't reliably be written in an interview

Why not? It sounds like a skill issue to me.

>It ideally also requires iterating upon the prompt to refine it before execution.

I don't understand. It's not like you would need to one shot it.

  • It's a time issue. Interviews hardly offer much time as it is. To ask for something that benefits from multiple iterations is probably not going to fit in the available time.