Comment by sigmoid10

1 day ago

I've had take-home problems for job interviews that were given out a few days in advance, and during the actual interview I only had to explain my code. But I wouldn't be sure this still works as a useful candidate filter today, given how much coding agents have advanced. In fact, if you were a senior dev with a bunch of people to bounce the problem around with, it wouldn't have filtered out the bad ones even back in the old days. There is very little more telling than watching a person work out a problem live, even if that sucks for smart people who can't handle stress.

I have found over the years that I learn more by asking easier questions and interacting with candidates as they think through problems out loud. Little things that test the critical ability to craft a Boolean expression that accurately characterizes a situation can be explored in a brief interview, where you have some assurance that they're working on their own and not just getting an answer online or from a smart roommate. (Sample: given two intervals [a,b] and [c,d], write an expression that determines whether they intersect.) Candidates who have lots of trouble programming "in the small" are going to have trouble programming in the large as well.
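
For reference, here's a minimal sketch of the one-liner that sample question is after, assuming closed intervals (the function name is just illustrative):

    def intervals_intersect(a, b, c, d):
        """Return True if the closed intervals [a, b] and [c, d] overlap."""
        # Two intervals intersect exactly when each one starts
        # no later than the point where the other one ends.
        return a <= d and c <= b

For open intervals the comparisons become strict (a < d and c < b), which is exactly the kind of edge case a short live discussion can probe.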

  • What I find effective (on both sides of the interview table) is not only asking easier questions, but actively encouraging candidates to first talk out and work through the most fundamental, basic aspects of the problem and its simplest solutions before moving on to more advanced stuff.

    I think a lot of experienced people's brains lock up in an interview when asked simple questions, because they assume they're expected to skip past the "obvious" solution and go straight for some crazy algorithm or the fine points of a memory/time tradeoff. This often doesn't present as intended: to the interviewer it looks like you don't even know the basics and are grasping at straws trying to sound smart.

If people can explain their decisions, I'd say it's fair game. It would be nice to know up front if someone used AI of course.

The other implication here is that if a candidate can use AI for a take-home and still ace the interview, then maybe the company's problems aren't as tough as it thought, and it could fill the seat quickly. Not a bad problem to have.

Leetcode for CRUD app positions is overkill.

I don't use those LLM tools myself, but if someone can pass the test with LLM tools, then they can pass the test, unless there's something special about the work environment that precludes the tools they use.

> But I wouldn't be sure this still works as a useful candidate filter today, given how much coding agents have advanced.

Prior to ChatGPT coming out, I gave a take-home test asking candidates to sort Roman numerals.

What was once a take-home you could do in an hour, and that let me check "did you write the code reasonably?", is now thirty seconds in ChatGPT, complete with embedded comments that make an "explain how this function works" follow-up far less useful. https://chatgpt.com/share/688cd543-e9f0-8011-bb79-bd7ac73b3f...
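
For context, the exercise itself is small: map each numeral to its integer value and sort by that key. A minimal sketch, assuming well-formed numerals and using illustrative names (not the actual take-home code):

    ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

    def roman_to_int(numeral):
        """Convert a well-formed Roman numeral to an integer."""
        total = 0
        for i, ch in enumerate(numeral):
            value = ROMAN[ch]
            # A smaller symbol before a larger one is subtractive (IV, XC, ...).
            if i + 1 < len(numeral) and value < ROMAN[numeral[i + 1]]:
                total -= value
            else:
                total += value
        return total

    def sort_roman(numerals):
        """Sort a list of Roman numerals by numeric value."""
        return sorted(numerals, key=roman_to_int)

    print(sort_roman(["MCM", "IX", "IV", "XL"]))  # ['IV', 'IX', 'XL', 'MCM']

The parsing step is presumably what the take-home was probing, and it's exactly what a chatbot now emits, comments and all, in seconds.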

Next time there's an interview for a programmer, I'd strongly suggest it be in person with a whiteboard instead, to mitigate the risks of North Korean IT workers and of developers who rely on an LLM for every task.

My solution for this was to propose a problem obscure enough that no LLM tool really knows how to deal with it. It involved some old Fortran code and an obscure Fortran file format.

How would someone explain code that was vibe-coded?

  • You can have the AI explain it to you. There's also a middle ground between vibe coding and "I can code some things but never could have coded this without an AI".

    Doesn't even have to be AI. Give me some random file from the Linux kernel and I could probably explain everything to you if you gave me a few hours. But that doesn't mean I would ever be able to write that code.

    • I don't disagree, but in those interviews the explanation is also a bit of a Q&A, so an effective interviewer can detect candidates who have only memorized things. Someone who can ace a Q&A about Linux kernel code is already better than average.

  • Isn't that a good thing? The fact that the candidate dumped out code that they didn't write is often called "cheating". The fact that candidates can't explain it (because they didn't write it) means it's a good test of something most interviewers find unacceptable.

  • Are you asking how they would get information they didn't have or couldn't come up with? Because you can literally have a chatbot explain every line to you, and why it is there, and even ask it about things you don't know, like a teacher. And for simple problems this will probably work better than asking actual humans.

    • I assume the questions would be quite specific: why you did something on a particular line, or why you designed this part one way instead of another. The goal is to detect plagiarism and vibe coding, not to prompt a prepared monologue.