Comment by timr
2 days ago
> We CANT test you effectively on your programming skillbase. So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.
Except, that's not what happens. In basically every coding interview in my life, it's been a gauntlet: code this leetcode medium/hard problem while singing and tapdancing backwards. Screw up in any way -- or worse (and also commonly) miss the obscure trick that brings the solution to the next level of algorithmic complexity -- and your interview day is over. And it's only gotten worse over time, in that nowadays, interviewers start with the leetcode medium as the "warmup exercise". That's nuts.
It's not a one-off. The people doing these interviews either don't know what they're supposed to be looking for, or they're at a big tech company and their mandate is to be a severe winnowing function.
> It isn't that your interviewer knew all the languages, but that the language didn't matter.
I've done enough programming interviews to know that using even a marginally exotic language (like, say, Ruby) will drastically reduce your success rate. You either use a language that your interviewer knows well, or you're adding a level of friction that will hurt you. Interviewers love to say that language doesn't matter, but in practice, if they can't tell whether you're making up the syntax, the skepticism level goes up.
They generally do not know what they are looking for. They are generally untrained, and if they are trained, the training is probably all about using leetcode-type problems to make interviews similar enough that you can run stats on the results and call them "objective" -- which is exactly the thing we are all quite correctly complaining about. That is perhaps anti-training.
The problem is that the business side wants to reduce it to an objective checklist, but you can't do that because of Goodhart's Law [1]. AI is throwing this problem into focus because it is basically capable of passing any objective checklist, with just a bit of human driving [2]. Interviews cannot consist of "I'm going to ask a question, and if you give me the objectively correct answer you get a point, and if you do not, you don't." Even before AI, there was already a real risk of hiring someone who could give the objectively correct answers but couldn't program their way out of a wet paper bag, let alone do requirements elicitation in collaboration with other humans, or architecture, or risk analysis, or any of the many other things a real engineering job consists of.
But once interviewing is not a matter of saying the objectively correct things, a lot of people at all levels are simply incapable of handling it. The Western philosophical mindset doesn't handle this sort of thing very well.
[1]: https://en.wikipedia.org/wiki/Goodhart%27s_law
[2]: Note this is not necessarily bad because "AI bad!", but if all the human on the other end can offer me is that they can drive the AI, I don't need them. I can do it myself and/or hire any number of other such people. You need to bring something to the job other than the ability to drive an AI, and you need to demonstrate whatever that is in the interview process. "I can type what you tell me into a computer and then fail to comprehend the answer it gives" is not a value-add.
> The Western philosophical mindset doesn't handle this sort of thing very well.
Mind elaborating on that?
It is a gross oversimplification, but you can look at the Western mindset as a reductionistic, "things are composed of their parts" sort of view, and the Eastern mindset as a holistic one in which breaking things into their components also destroys the thing in the process.
The reality isn't so much "in between" as "both". There is a reason the West developed a lot of tech and the East, despite thousands of years of opportunity, didn't so much. But there is also a limit to the reductionistic viewpoint.
In this case, being told that the only way to hire a truly good developer is to make a holistic evaluation of a candidate -- that you cannot "reduce" it to a checklist, because the very act of reducing it to a checklist invalidates the process -- is something that a lot of Western sorts of people just can't process. How can something be effectively impossible to break into parts?
On the other hand, it is arguably a Western viewpoint that leads to the idea of Goodhart's law in the first place; the Eastern viewpoint tends to just say "things can't be reduced" and stop the investigation there.
This is highly stereotypical, of course, and should be taken as an extremely broad classification of types of philosophy, not something directly associated with any individual humans who happen to be physically located in the East or the West. Further, as I said, I think the "correct" answer is neither one, nor the other, nor anything in between, but both, so I am not casting shade on any country or culture per se. It is a useful, if broad, framework for understanding things at a very, very high level.
When I joined my current team, I found they had changed the technical test after I had interviewed but before I joined. A couple of friends also applied and were rejected because of this new test.
When I finally got in the door and joined the hiring effort, I was appalled to find they’d implemented a leetcode-esque series of challenges with criteria such as “if the candidate doesn’t immediately identify and then use a stack then fail interview”. There were seven more like this, with increasingly harsh criteria.
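(For a sense of the level involved: the "use a stack" criterion was the kind of thing you'd expect for a bracket-matching exercise. A minimal Python sketch of that style of problem -- my own illustration, not the actual test, which I can't share:)

    def is_balanced(s: str) -> bool:
        """Return True if every bracket in s is closed in the right order."""
        pairs = {')': '(', ']': '[', '}': '{'}
        stack = []
        for ch in s:
            if ch in '([{':
                stack.append(ch)
            elif ch in pairs:
                # A closing bracket must match the most recent unmatched opener.
                if not stack or stack.pop() != pairs[ch]:
                    return False
        return not stack

    print(is_balanced("{[()]}"))  # True
    print(is_balanced("([)]"))    # False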
I would not have passed.