
Comment by CharlieDigital

2 days ago

Code reviews.

Teams are really sleeping on code reviews as an assessment tool. As in having the candidate review code.

A junior, a mid-level, a senior, and a staff engineer are going to see very different things in the same codebase.

Not only that: as AI-generated code becomes more common, teams might want to actively select for devs who can efficiently review code for quality and correctness.

I went through one interview with a YC company that had a first-round code review. I enjoyed it so much that I ended up making a small open source app for teams that want to use code reviews: https://coderev.app (repo: https://github.com/CharlieDigital/coderev)

This is harder than it sounds, although I agree in a vacuum the idea is a good one.

So much value of the code review comes from having actual knowledge of the larger context. Mundane stuff like formatting quirks and obvious bad practices should be getting hoovered up by the linters anyways. But what someone new may *not* know is that this cruft is actually important for some arcane reason. Or that it's important that this specific line be super performant and that's why stylistically it's odd.

The real failure mode I worry about here is how much of this stuff becomes second nature to people on a team. They see it as "obvious" and forget that it's actually a nuance of their specific circumstances. So then a candidate comes in and misses something "obvious", well, here's the door.

  • You can do code review exercises without larger context.

    An example from the interview: the code included a Python web API and SQL schema. Some obvious points I noticed were no input validation, string concatenation for the database access (SQL injection), no input scrubbing (XSS), some missing indices based on the call pattern, a few bad data type choices (e.g. integer for user ID), and a possible infinite loop in one case (see the sketch below).

    You might be thinking about it in the wrong way; what you want to see is whether someone can spot the types of logic errors that either a human or an AI copilot might produce, regardless of the larger context.

    The juniors will find formatting and obvious bad practices; the senior and staff will find the real gems. This format works really well for stratification.
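
    To make that concrete, here's a hypothetical sketch (not the actual interview code; the names are made up) with a few of those flaws seeded in:

        # Hypothetical Python review sample with intentionally seeded flaws.
        import sqlite3

        def get_orders(conn: sqlite3.Connection, user_id):  # flaw: no input validation
            # flaw: string concatenation into the SQL -> SQL injection
            rows = conn.execute(
                "SELECT body FROM orders WHERE user_id = " + str(user_id)
            ).fetchall()
            html = ""
            for (body,) in rows:
                # flaw: user content embedded in markup unescaped -> XSS
                html += "<div>" + body + "</div>"
            return html

    A junior might flag naming and formatting; a stronger reviewer asks for a parameterized query ("... WHERE user_id = ?", (user_id,)), html.escape(body) on output, and validation of user_id before it ever reaches the database.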

    • > no input validation, string concatenation for the database access (SQL injection), no input scrubbing (XSS), some missing indices based on the call pattern, a few bad data type choices (e.g. integer for user ID), and a possible infinite loop in one case

      I'd say all this stuff is junior-level (maybe ~mid for things like user ID integers). It's just a checklist of "obvious bad practices"; it doesn't require experience.

      The senior stuff is much higher-level: domain modelling, code architecture, consistency guarantees, system resilience... system design in general.


    • Yes, ish.

      In a previous job we did code review interviews, and we went the route you describe because of exactly the problem I described. And yes, it's a lot better. But what also happened over time was that the bar slowly rose: the "harder" pieces of that session started to seem rote to interviewers, so they became viewed as table stakes.

      Mind you, this is true of any interview scheme that has a problem-solving component. I'm not saying that the code review style is extra bad here, just that it doesn't solve this problem.


    • In theory, you can do code reviews without larger context if the code is carefully selected. Apparently, though, some companies think any badly written code from their codebase can just be used as-is.

  • It's not so hard. One of the interview stages I did somewhere well known used this.

    "Here's the neural net model your colleague sent you. They say it's meant to do ABC, but they found limitation XYZ. What is going on? What changes would you suggest and why?"

    Was actually a decent combined knowledge + code question.
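
    For flavor, a hypothetical sketch of that kind of question (a made-up model, not theirs): the colleague claims it learns an XOR-like, non-linearly-separable dataset, but it never manages to fit it.

        # Hypothetical buggy model for a review-style question (assumes PyTorch).
        import torch.nn as nn

        class TinyClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.fc1 = nn.Linear(2, 16)
                self.fc2 = nn.Linear(16, 1)

            def forward(self, x):
                # Bug: no activation between the layers, so fc2(fc1(x))
                # collapses into a single affine map and can't represent XOR.
                return self.fc2(self.fc1(x))

    The fix to suggest is just an nn.ReLU() (or similar non-linearity) between the two layers; explaining *why* the limitation follows from the code is the knowledge half of the question.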

I like the code review approach and tried it a few times when I was asked to do interviews.

The great thing about code reviews is that there are LOTS of ways people can improve code. You can start with the basics, like whether you can make the code run at all (i.e. compile) and make it produce the right output. There are also more advanced improvements, like making the code more performant, more maintainable, and less error-prone.

Also, candidates can talk through their reasoning about why they would or wouldn't change the code they're reviewing.

For example, you'd probably view the candidates differently based on their responses to seeing a code sample with a global variable.

Poor: "Everything looks fine here"

Good: "Eliminate that global variable. We can do that by refactoring this function to..."

Better: "I see that there's a global variable here. Some say they're an anti-pattern, and that is true in most but not all cases. This one here may be ok if ..., but if not you'll need to..."

  • 100%. It's more conducive to a conversational exchange, which gives you much better insight into how a developer thinks than leetcode does.

    Coding for me is an intensely focused activity, and I work from home to boot, so most of the time I'm coding in complete silence. It's very awkward to be talking about my thought process while I'm coding, but not talking is just as awkward!

Some of the most interesting interviews, and the ones I felt accurately assessed my skills (even non-live ones), were debugging and code review assessments. I didn't get offers from those companies because I failed the leetcode rounds later in the process, but I felt the review interviews were a good way to be assessed.

I loved the idea of code review interviews, and I've had several good ones, until yesterday, when I had my first bad one.

They asked me to review a function for a residential housing payment workflow, which I'm unfamiliar with, from an actual snippet of their bad production code (which has since been rewritten), in Go, which I've never used (I've never professionally used a language that doesn't have exception handling built in, for example).

I had to spend more than half of my time asking questions to try to get enough context: about Go error-handling techniques, about the abstractions they were using (for which we only had the import statements), and about the way the external system was structured to handle these requests, all just to review the hundred lines of code they shared.

I was able to identify a bunch of things incidentally, like making all of the DB changes part of a single transaction so we don't end up with inconsistent state (sketched below), or breaking up the function into sub-functions because the names were extremely long, but this was so far outside my area of expertise and comfort zone that I felt like I was shooting in the dark.
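
For the transaction point, a minimal sketch of the idea using Python's sqlite3 (not their Go code; the table and column names are hypothetical):

    import sqlite3

    def record_payment(conn: sqlite3.Connection, lease_id: int, amount_cents: int):
        with conn:  # one transaction: commits on success, rolls back on exception
            conn.execute(
                "UPDATE leases SET balance_cents = balance_cents - ? WHERE id = ?",
                (amount_cents, lease_id),
            )
            conn.execute(
                "INSERT INTO payments (lease_id, amount_cents) VALUES (?, ?)",
                (amount_cents, lease_id),
            )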

So just like any other interview style, they can be done very poorly.

  • Honestly, this sounds like a successful "bad fit" signal (assuming that they work with Go and payment systems mostly).

    Language and domain experience are things I'd like to have learned about by the end of an interview process.

    • Honestly, it was also a red flag for me that they don’t actually know what they want and have bad communication between leadership and engineering. Prior to this interview I was already on the fence about them.

      They don’t work mostly in Go. Even the interviewer said that he’s only vaguely familiar with this area of the code, and he doesn’t work in Go. They work mostly in Kotlin, and they’re explicitly advertising for solid generalists.

I don't know. A cold code review on a codebase they never saw is not super informative about how the candidate would interact with you and the code once they're in known territory.

  • > A cold code review on a codebase they never saw

    What do you think happens in the first few weeks of someone joining a new team? Probably reading a lot of code they never saw...

    So yeah, I think it's the opposite: explicitly testing for their ability to read code is probably kinda important.

Is there a site where one could review some code and then see what many others said about it, along with their experience levels?

I guess it would degrade to stackoverflow-like poems eventually, but still interesting.

I did this once and it was obvious the interviewer wanted me to point out his pet "gotcha." Not a great experience.

  • Yup, that's just one of the many ways to do a code-review interview wrong.

    Each code sample should have multiple things wrong. The best people will find most (not necessarily all) of them. The mediocre will find a few.

  • Yeah it's really tempting when you discover an interesting fact to think "that would make an interesting interview question" and turn the interview into some kind of pub quiz. Happens with all forms of technical interview though. I mean 90% of leetcode questions are "this one weird trick".

Yep, I've done a lot of SQL interviews, and it's always interesting to see the folks who crashed and burned at code review but killed it at writing individual queries. And sometimes the unexpected opposite happened: the person would fly through the code review and do really subpar work on the writing, a signal I usually took to mean that the person was nervous as hell in the interview.

The two folks who showed this behavior I hired anyway (they were contractors, so nbd), and they were excellent hires, so I really love the code review approach for climbing up Bloom's taxonomy.