AI killed the tech interview. Now what?

2 days ago (kanenarraway.com)

The best interview process I've ever been a part of involved pair programming with the person for a couple hours, after a tech-screening phone call with a member of the team. You never failed to know within a few minutes whether the person could do the job and be a good coworker. This process worked so well, it created the best, most productive team I've worked on in 20+ years in the industry, despite that company's other dysfunctions.

The problem with it is the same curse that has rotted so much of software culture—the need for a scalable process with high throughput. "We need to run through hundreds of candidates per position, not a half dozen, are you crazy? It doesn't matter if the net result is better, it's the metrics along the way that matter!"

  • I dislike pair programming interviews - as they currently exist - because they usually feel like a time-crunched exam. You don't realistically have the freedom to think as you would in actual pair programming. i.e. if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview, but it's pretty realistic of real life work and entirely a non-problem. It's probably even good to test for at interview: how does a person work when they aren't working with an oracle that already knows the answer (i.e., the interviewer)?

    Pair programming with the person for a couple hours, maybe even on an actual feature, would probably work, assuming the candidate is compensated for their time. I can imagine it'd especially work for teams working on open source projects (Sentry, Zed, etc). Might not be as workable for companies whose work is entirely closed source.

    Indeed, the other problem is what you mention: it doesn't scale to high throughput.

    • > i.e. if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview

      In all pair programming interviews I have run (which I will admit have been only a few) I would fail myself as an interviewer if I was not able to guide the interviewee away from a dead end within 15 minutes.

      If the candidate wasn't able to understand the hints I was giving them, or just kept driving forward, then they would fail.

      3 replies →

    • That's definitely up to the interviewer, in whom a lot of discretion and trust has been placed. I think a lot of it also comes down to the culture of the company—whether they're cutthroat or supportive. As you get better people into the company, hopefully this improves over time. I know that when we did it, it was never about nailing it on the first try; it was literally about proving you knew how to program and were not an asshole. So, not the equivalent of reversing a binary tree on a whiteboard. The kinds of problems we worked on in the interviews weren't leetcode-type problems; they were real tickets from our current project. Sometimes it was just doing stuff like making a new component or closing a bug, but those were the things we really did, so it felt like a better test.

    • > i.e. if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview

      That’s an assumption. Perhaps following a dead end for a while, realizing it, and pivoting is a valuable, positive signal?

      5 replies →

    • I do 1hr pair programming interviews for my company and you have to strike a balance between letting candidates think through the problem even when you think it won't work (to see their thought process and maybe be surprised at their approach working/see how quickly they can self-correct) and keeping them on track so that the interview still provides a good signal for candidates who are less familiar with that specific task/stack.

      I'm also not actually testing for pair programming ability directly, more so the ability to complete practical tasks / work in a specific area, collaborate, and communicate. If you choose a problem that is general/expandable enough that good candidates for the position are unlikely to go down bad rabbit holes (e.g. for a senior fullstack role, create a minimal frontend and API server that talk to each other) it works just fine. Actually, with these kinds of problems it's kind of good if your candidates end up "studying" them like with leetcode, because it means they are just learning how to do the things that they'll do on the job.

      > maybe even on an actual feature

      I don't think this would work unless the feature were entirely self-contained. If your workaround is to give the candidate an OSS project they need to study beforehand, I think that would bias candidates' performance in ways that aren't aligned with who you want to hire (e.g. how desperate they are for the role and how much time outside of work they're willing to put into your interview).

    • Another problem is it is difficult to compare candidates whose interviews involved working on completely different problems.

    • > if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview

      Eh, if it's a reasonable bad end and you communicate through it, I wouldn't see it as a fail. Particularly if it's something I would have tried, too. (If it's something I wouldn't have thought of and it's a good idea, you're hired.)

    • I did a couple of rounds of this with my manager as the interviewer. Personally I really liked the process, and the feedback I got from the candidates was positive (but then again it always would be).

      What worked well for me was that I made it very clear to my manager, a man who I trust, that I would not be able to provide him with a boolean pass/fail result. I couldn't provide him any objective measure of their ability or performance. What I could do was hang out with the candidates for an hour while we discussed some concepts I thought were important in my position. From that conversation I would be able to provide him a ranking, along with a personal evaluation of whether I would personally like to work with the candidate.

      I prepared some example problems that I had worked through myself a bit. Then I went into the interviews with those problems and let the candidates direct those same explorations of the problem. Some of them took me on detours I hadn't taken on my own. Some of them needed a little nudge at times. I never took specific notes, but just allowed my brain to get a natural impression of the person. I was there to get to know them, not administer an exam.

      I feel like the whole experience worked super well. It felt casual, but also very telling. It was almost like a focused date. Afterwards I discussed my impression of the candidates with my manager to ensure the things I was weighing were somewhat comparable to what he desired.

      All in all it was a very human process. It must have taken an enormous amount of trust from my manager to allow me the discretion to make a subjective judgment. I was extremely surprised at how clearly I was able to delineate the people, but also how that delineation shifted depending on which axis we evaluated. A simple pass/fail technical interview would have missed that image of a full person.

  • I've (unfortunately) been interviewing the last two months and the main pattern that I've noticed is that a) big companies have terrible interview processes while b) small companies and startups are great at interviewing.

    Big companies need to hire tons of people and interview even more so they need some sort of scalable process for it. An early stage startup can just ask you about your past projects and pair program with you for an hour.

    • I hear this all the time, but I have yet to experience it. It may be because the small companies that I interview with are all startups, but I have yet to be able to get a callback from any other kind of small company. And the startups I do interview with run full FAANG interview loops.

      There seems to be a weird selection bias that if you're FAANG or FAANG adjacent these small companies aren't interested.

      43 replies →

    • What exactly does "scalable" mean here?

      If a startup can spend 20 man-hours filling a single position, why can't a big company spend 1000 man-hours filling 50 positions?

      11 replies →

    • > small companies and startups are great at interviewing

      Small companies have the benefit of the pressure to fill a role to get work done, the lack of bureaucratic baggage to "protect" the company from bad hires, and generally don't have enough staff to suffer empire-building.

      Somewhere along the line the question changes from "can this candidate do the job that other people in this office are already doing?" to "can this candidate do the job of this imaginary archetype I've invented with seemingly impossible qualities?".

  • "We need to run through hundreds of candidates per position, not a half dozen"

    But you don't! You only need to find the first person who is good enough to do the job. You do not need to find the best person.

  • >The best interview process I've ever been a part of involved pair programming with the person for a couple hours... You never failed to know within a few minutes whether the person could do the job

    There is something funny about the "best interview process" taking "a couple hours" despite giving you the answer "within a few minutes". Seems like even the best process is a little broken.

    • Lightly ironic indeed! Though I’m not sure “broken” is exactly the word I’d choose.

      I can only speak for myself, but I imagine myself as a candidate approaching a “couple of hours” project or relationship differently than I would a “few minutes” speed round. For that matter I can think of people I know professionally who I only know through their behavior “on stage” in structured and stylized meetings of a half hour or an hour—and I don’t feel like I have any sense at all of how they would be as day-to-day coworkers.

      If we sat down to work together, you’d probably have a sense in the first few minutes of whether or not we were going to work out—but that would be contingent on us going into it with the attitude that we were working together as colleagues toward a goal.

    • That's mainly because there were multiple pairing sessions, and even if you knew the person was going to pass, there are still a couple more people who need to meet them, and a schedule to make sure they're available to do that. Plus due diligence, etc.

      Nor am I saying it was a perfect system, just the best I've seen in terms of results.

  • The biggest victims of these non-scalable processes are people without a good network. As an intl PhD student, I am that person.

    So now I have this weird dynamic: I get interview calls only from FAANG companies, the ones with the manpower to do your so-called "cursed" scalable interviews. But the smaller companies or startups, the ones who are a far better fit for my specialized skills, never call me. You need to either "know someone" or be from a big school, or there is zero chance.

  • Pairing on something close to whatever real work they'd be doing, but familiar to the applicant, is my favorite way to evaluate someone (e.g. choose a side project, pre-agree on adding a feature).

    I don't care if someone uses modern tools (like AI assists), Google, etc. - e.g. "open book" - as that's how they want to work. Evaluating their style of thinking / solving problems, comms, and output is the win.

  • Some of us find the prospect of pairing with an unknown person in an unknown environment, and against the clock, to be very stressful.

    • Anecdote:

      I've been interviewing recently and got through to the last round (of five...) with an interesting company. I knew the interview involved pairing, but I didn't expect: two people sitting behind me while I sat on a wobbly stool at a standing desk, trying to use a sticky wired mouse and a non-UK keyboard, and a very bright monitor (I normally use dark mode). They had a screen recorder running, and a radio was on in the next room.

      I totally bombed it. I suspect just the two people behind me would have been enough though.

    • I would find trying to solve such problems with known people in known environments to be somewhat stressful too.

  • Very few people doing this sort of interview (they tend to be our best, most empathetic developers) are likely to cut a multi-hour planned process short after a few minutes. It will eat at least an hour of their (very expensive & valuable) time.

    Also, how am I supposed to filter the hundreds of AI-completed assessments? Who gets this opportunity?

    • We didn't do assessments (if by that you mean take home assignments). This was partly a solution to that, since nobody thought they were a good idea. If you mean the phone screen, I think that would be a problem, yep, but it wasn't an issue back in 2016. Having them pair would weed out cheaters, but we would have to figure out a way to weed them out during the screening, I agree.

      We also did not require the employees doing the interview to be our most senior team members. They probably did it more often than most people, but often because they volunteered to do it. Anyone on the team would be part of the loop, which helped with scheduling. And, remember, we were working on actual tickets, so in a lot of cases it actually helped having the candidate there as a pairing partner.

      For a little extra detail, the way we actually did it was to have 2-3 pairing sessions of up to 2 hours apiece. At the end of the day, all the team members who paired with the candidate had to give them the thumbs up.

  • This. I was interviewing for a sr dev position on a web app; the backend stack is the bog-standard Java, Spring, and SQL abstracted away via JPA. We did a first screen, then the tech interview was two of their senior devs shoulder-surfing me as I built a simple API. We chatted, I built, they asked questions, I defended my decisions (sometimes successfully, sometimes gracefully conceding defeat), and they left knowing that I was who my resume said I was. The reminder that popped up in the middle of the interview to feed my sourdough starter showed them that I'm a culture fit.

    I think you're onto something with that last paragraph but I want to try being a bit more generous with why things are the way they are. The question seems to be "When there are hundreds of applicants how do we give everyone a fair shake without hiring an entire team of devs who do nothing but interview?" From that perspective the intentionality is different and even sensible but the end product is likely to be the same. Even when someone is chasing a metric it's because someone else wants what's best and has decided that metric is a sensible way to make that happen. At the end of the day they really do want to hire the best candidate out of a pool whose size is extremely variable and that's challenging.

  • I work at a company which has 11 engineers and competes with companies with 100s. The hiring process was a screening call with the CTO to not waste the prospective team's time, then a call with 2 of my prospective colleagues to gauge competence and cultural fit. Since then I have been involved in hiring most of the team I work with now. The CTO is one of the most competent engineers I have ever met and he designed this process. He also has very high EQ. One of the points I sell to prospective hires is him as a person to work with, as well as our team. He has also flatly denied people I suggested before and that's fine.

    I have been here 5 years now and I'm working with the most competent team I have ever worked with. My takeaway from this is that hiring doesn't need to be commoditised and scaled; it just needs to find good people and give them an opportunity to show you whether you do or don't want to work with them.

  • > You never failed to know within a few minutes whether the person could do the job

    Then why spend a couple hours?

  • This is the exact time to use the phrase “A people hire other A people, B people hire C people”

    Additionally, it’s rarely the hiring that makes a great team - it’s the long-term commitment and investment in training.

  • I've been a proponent of pair programming since the early days of Agile, when it was still seen as part of extreme programming. Unfortunately, it’s not often employed in workplace settings.

    With that said, would your perception of the interview remain positive if the outcome had been negative?

    A common challenge across all interviews is a mismatch in personal dynamics, which can significantly impact the experience for both participants.

    Consider a scenario where a senior developer, who prefers simplicity, is paired with a mid-level developer who is still captivated by complexity.

    • Or a "just start typing" person is with a "mull it over first" person. By the time I am typing code, I want to have 90% of it already completely worked out (at least till I type a "c.Lock()" and suddenly realize I hadn't considered thus and so synchronization issue.

  • Similarly, in general the best interviews I've ever been part of (whether giving or receiving them) turn into discussions where people's experience, opinions, and stories get aired (going both ways). You eventually get a good sense of each other and things get more relaxed when you both realize that you know what you're talking about (this is harder for Jr roles, though).

    Being peppered with questions very rarely gives any insight.

    • For junior roles, you want to interview for intelligence and, shall we say, an interest in learning rather than specific skills.

      Even for senior roles, that's what I want to interview for, although it is true that at times a business case can be made for someone who is good at some specific complex skill and doesn't need to listen to other people to do OK work.

  • > person for a couple hours,

    >You never failed to know within a few minutes whether the person could do the job

    Did I misunderstand something, or is your best interview process to take multiple hours from someone when you've decided within minutes?

  • Pair programming on what problem, though? I don't think many companies would want an outsider to work on their codebase.

  • I used to love getting to know the interviewer and doing things like that but IMO the market has shifted fundamentally on both ends for this to be effective anymore for most SaaS roles. This is anecdotal for US/Canada tech market over the past 10 years so YMMV.

    Developers Side: Since developers don't have job security anymore (at least for those who work on common languages like Go, Python, Java and Typescript) they are better off learning and keeping in touch with leetcode and system design questions, looking for new opportunities and interviewing in "batch mode" when looking for a job. The idea is to clear as many interviews as possible using the same concepts, get in and make money asap before you get laid off. No incentive for collaboration or for fulfilling but esoteric stuff like Haskell and Scala. Career security > Job security.

    Companies Side: On the other end software companies have less trust in developers staying long term so they want to make the interview process as quick and risk free as possible. In essence they are betting that by perusing 100s of resumes and hiring someone who seemingly knows CS concepts they can get some value out of them before they leave. Standardized tests/vetting > team fit.

    TLDR; The art is gone from this job; it's become akin to management consulting or investment banking. Quality and UX seem to be regressing across the board as a result.

  • This is how my team hires and it’s incredible.

    I think what makes it work is that our code pair is pretty low stakes. I was told that I didn’t have to finish the problem and I was free to use whatever tools or language I needed. They just wanted to see how I work and collaborate.

  • It's a super interesting approach, and if you put a strong filter before it (e.g. intense non-BS Q&A), the whole thing could be high-throughput.

  • Does your company operate the same way? I.e. is most, or at least a large chunk of engineering done as pair-programming?

  • That's what we did: pair program on some real production code and tickets. This way the person could get a feel for what they were potentially walking into, and you get a good idea of how they think and approach problems.

Code reviews.

Teams are really sleeping on code reviews as an assessment tool. As in having the candidate review code.

A junior, mid, senior, staff are going to see very different things in the same codebase.

Not only that, as AI generated code becomes more common, teams might want to actively select for devs that can efficiently review code for quality and correctness.

I went through one interview with a YC company that had a first round code review. I enjoyed it so much that I ended up making a small open source app for teams that want to use code reviews: https://coderev.app (repo: https://github.com/CharlieDigital/coderev)

  • This is harder than it sounds, although I agree in a vacuum the idea is a good one.

    So much value of the code review comes from having actual knowledge of the larger context. Mundane stuff like formatting quirks and obvious bad practices should be getting hoovered up by the linters anyways. But what someone new may *not* know is that this cruft is actually important for some arcane reason. Or that it's important that this specific line be super performant and that's why stylistically it's odd.

    The real failure mode I worry about here is how much of this stuff becomes second nature to people on a team. They see it as "obvious" and forget that it's actually a nuance of their specific circumstances. So then a candidate comes in and misses something "obvious", and, well, here's the door.

    • You can do code review exercises without larger context.

      An example from the interview: the code included a Python web API and SQL schema. Some obvious points I noticed were no input validation, string concatenation for the database access (SQL injection), no input scrubbing (XSS), some missing indices based on the call pattern, a few bad data type choices (e.g. integer for user ID), and a possible infinite loop in one case.

      You might be thinking about it in the wrong way; what you want to see is whether someone can spot the types of logic errors that either a human or an AI copilot might produce, regardless of the larger context.

      The juniors will find formatting and obvious bad practices; the senior and staff will find the real gems. This format works really well for stratification.
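
      To make that concrete, a seeded snippet for this kind of exercise might look something like the sketch below (hypothetical code, not the actual exercise; assumes a Flask/SQLite stack):

          # Hypothetical review exercise seeded with the kinds of flaws listed
          # above: no input validation, SQL built by string concatenation
          # (injection), and user input echoed back unescaped (stored XSS).
          import sqlite3

          from flask import Flask, request

          app = Flask(__name__)

          @app.route("/users/<user_id>/notes", methods=["POST"])
          def add_note(user_id):
              note = request.form["note"]  # no validation or length limit
              conn = sqlite3.connect("app.db")
              # SQL injection: query assembled by concatenation, not parameters
              conn.execute("INSERT INTO notes (user_id, body) VALUES ("
                           + user_id + ", '" + note + "')")
              conn.commit()
              return "<p>Saved: " + note + "</p>"  # stored XSS: unescaped echo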

      6 replies →

    • It's not so hard. One of the interview stages I did somewhere well known used this.

      Here's the neural net model your colleague sent you. They say it's meant to do ABC, but they found limitation XYZ. What is going on? What changes would you suggest and why?

      Was actually a decent combined knowledge + code question.

      1 reply →

  • I like the code review approach and tried it a few times when I needed to do interviews.

    The great thing about code reviews is that there are LOTS of ways people can improve code. You can start with the basics like can you make this code run at all (i.e. compile) and can you make it create the right output. And there's also more advanced improvements like how to make the code more performant, more maintainable, and less error-prone.

    Also, the candidates can talk about their reasoning about why or why not they'd change the code they're reviewing.

    For example, you'd probably view the candidates differently based on their responses to seeing a code sample with a global variable.

    Poor: "Everything looks fine here"

    Good: "Eliminate that global variable. We can do that by refactoring this function to..."

    Better: "I see that there's a global variable here. Some say they're an anti-pattern, and that is true in most but not all cases. This one here may be ok if ..., but if not you'll need to..."

    • 100%. It's more conducive to a conversational exchange, which actually gives you better insight into how a developer thinks, much more so than leetcode.

      Coding for me is an intensely focused activity and I work from home to boot so most of the time, I'm coding in complete silence. It's very awkward to be talking about my thought process while I'm coding, but not talking is just as awkward!

  • Some of the most interesting interviews, the ones that I felt accurately assessed my skills (even the non-live ones), were debugging and code review assessments. I didn't get offers from those companies because I failed the leetcodes they did later in the process, but I felt the review interviews were a good way to be assessed.

  • I loved the idea of code review interviews (I've had several good ones) until yesterday, when I had my first bad code review interview.

    They asked me to review a function for a residential housing payment workflow, which I'm unfamiliar with, taken from an actual snippet of their bad production code (which has since been rewritten), in Go, which I've never used (I've never professionally used the language that doesn't have error handling built in, for example).

    I had to spend more than half of my time asking questions to get enough context (about Go error-handling techniques, about the abstractions they were using, for which we had only the import statements, and about how the external system was structured to handle these requests) just to review the hundred lines of code they shared.

    I was able to identify a bunch of things incidentally, like making all of the DB changes part of a transaction so that we don't get inconsistent state, or breaking up the function into sub-functions because the names were extremely long, but this was so far outside my area of expertise and comfort zone that I felt like I was shooting in the dark.

    So just like any other interview style, they can be done very poorly.

    • Honestly, this sounds like a successful "bad fit" signal (assuming that they mostly work with Go and payment systems).

      Language and domain experience are things I'd like to know about after an interview process.

      1 reply →

  • I don't know. A cold code review on a codebase they never saw is not super informative about how the candidate would interact with you and the code once they're in known territory.

    • > A cold code review on a codebase they never saw

      What do you think happens in the first few weeks of someone joining a new team? Probably reading a lot of code they never saw...

      So yeah, I think it's the opposite: explicitly testing for their ability to read code is probably kinda important.

  • Is there a site where one could review some code and see what many others say about it and their experience level?

    I guess it would degrade to stackoverflow-like poems eventually, but still interesting.

  • I did this once and it was obvious the interviewer wanted me to point out his pet "gotcha." Not a great experience.

    • Yup, that's just one of the many ways to do a code-review interview wrong.

      Each code sample should have multiple things wrong. The best people will find most (not necessarily all) of them. The mediocre will find a few.

    • Yeah it's really tempting when you discover an interesting fact to think "that would make an interesting interview question" and turn the interview into some kind of pub quiz. Happens with all forms of technical interview though. I mean 90% of leetcode questions are "this one weird trick".

  • Yep, I've done a lot of SQL interviews, and it is always interesting to see the folks who crashed and burned at code review but killed it at writing individual queries, and sometimes, unexpectedly, the opposite: the person would fly through the code review and do really subpar on the writing, a signal I usually took to mean that the person was nervous as hell in the interview.

    The two folks who showed this behavior I hired anyway (they were contractors so nbd) and they were excellent hires, so I really love the code review approach for climbing up Bloom's taxonomy.

Company A wants to hire an engineer, an AI could solve all their tech interview questions, so why not hire that AI instead?

There's very likely a real answer to that question, and that answer should shape the way that engineer should be assessed and hired.

For example, it could be that the company wants the engineer to do some kind of assessment whether a feature should be implemented at all, and if yes, in what way. Then you could, in an interview, give a bit of context and then ask the candidate to think out loud about an example feature request.

It seems to me the heart of the problem is that companies aren't very clear about what value the engineers add, and so they have trouble deciding whether a candidate could provide that value.

  • The even bigger challenge is that hiring experts in any domain requires domain knowledge, but hiring has been shifted to HR. They aren't experts in anything, and for some years they made do with formulaic approaches, but that doesn't cut it anymore. So now if your group wants to get it done, and done well, you have to get involved yourself, and it's a lot of work on top of your regular tasks. Maybe more work because HR is deeply involved.

    • >hiring has been shifted to HR

      Well, unless you know sufficiently senior people. But I suspect that is a deeply unsatisfactory answer to many people in this forum.

      My last long-term job, which was only technically adjacent, came through a combination of knowing execs, having gone to the same school as my ultimate manager, and knowing various other people involved. (And having a portfolio of public work.)

      29 replies →

    • I saw this at the big corporate (not FAANG/tech) place I work at. Engineers run and score interviews, but we don't make the final decision. That goes to HR and the hiring manager, who usually has no technical background.

      1 reply →

    • HR are experts in HR, which is to say they have a broader view of the institutional needs and legal requirements of hiring and staffing than you do. It's always annoying when that clashes with your vision, but dismissing their entire domain is unlikely to help you avoid running into that dynamic again and again.

    • > hiring has been shifted to HR

      Not everywhere. At my company, HR owns the process but we -- the hiring tech team -- own the content of interviews and the outcomes.

    • I've never seen hiring completely in the domain of HR. HR filters incoming candidates and checks for culture fit etc, but technical competency is checked by engineers/ML folks. I can't imagine an HR person checking if someone understands neural networks.

      2 replies →

  • > Company A wants to hire an engineer, an AI could solve all their tech interview questions, so why not hire that AI instead?

    Interview coding questions aren't like the day-to-day job, because of the nature of an interview.

    In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.

    It also has to be a problem that's solvable within about 40 minutes.

    The problem needs to test that the candidate meets the company's hiring bar, while also having enough nuance that there's an opportunity for absolutely great candidates to impress me.

    And the problem has to be possible to state unambiguously. Can't have a candidate solving the problem, but failing the interview because there was a secret requirement and they failed to read my mind.

    And of course, if we're doing it in person on a whiteboard (do people do that these days?) it has to be solvable without any reference to documentation.

    • > In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.

      If you send me a rubric I can pre-load whatever you want to talk about. If you tell me what you're trying to build and what you need help with, I can show up with a game plan.

      You need to make time for a conversation on the intricacies of voucher calculation and global sales tax law if you want to find people jazzed about the problem space.

    • > In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.

      Proving whether they are technically capable of the job seems rather silly. Look at their resume, look at their online work, ask them questions about it. Use probing questions to understand the depth of their knowledge. I don't get why we are over-engineering interviews. If I have 10+ years of experience, with some proof through chatting that I am, in fact, a professional software engineer, isn't that enough?

      1 reply →

    • >Interview coding questions aren't like the day-to-day job, because of the nature of an interview.

      You have missed his point. If the interview questions are such that an AI can solve them, they are, by definition, the wrong questions to be asking. Unless that company is trying to hire a robot, of course.

      1 reply →

  • One of the best interviews I've encountered as a candidate wasn't exactly a pair programming session but it was similar. The interviewer pulled up a webpage of theirs and showed me a problem with it, and then asked how I would approach fixing it. We worked our way through many parts of their stack and while it was me driving most of the way we ended up having a number of interesting conversations that cropped up organically at various points. It was scheduled for an hour and the time actually flew by.

    I felt like I got a good sense of what he would be like to work with and he got to see how I approached various problems. It avoided the live coding problems of needing to remember a bunch of syntax trivia on the spot and having to focus on a quick small solution, rather than a large scalable one that you need more often for actual work problems.

  • Problem is, company A doesn't need an engineer to solve those interview questions, but to solve real problems.

    • “Real problems” aren’t something that can be effectively discussed in the time span of an interview, so companies concoct unreal problems that are meant to be good indicators.

      9 replies →

    • This is the answer.

      Let's not pretend otherwise: 95% of companies are asking asinine interview questions (though I understand the reasons why) that LLMs can easily solve.

      2 replies →

  • Tech interviews in general need to be overhauled, and if they were it’d be less likely that AI would be as helpful in the process to begin with (at least for LLMs in their current state).

    Current LLMs can do some basic coding and stitch it together to form cool programs, but they struggle at good design work that scales. Design-focused interviews paired with a soft-skill focus are a better measure of how a dev will be in the workplace in general. Yet most interviews are just “if you can solve this esoteric problem we don’t use at all at work, you are hired”. I’d take a bad solution with a good design over a good solution with a bad design any day, because the former is always easier to refactor and iterate on.

    AI is not really good at that yet; it’s trained on a lot of public data that skews towards worse designs. It’s also not all that great at behaving like a human during code reviews; it agrees too much, is overly verbose, it hallucinates, etc.

  • I want to hire people who can be given some problem and will go off and work on it and come to me with questions when specs are unclear or there's some weird thing that cropped up. AI is 100% not that. You have to watch it like a 15 year old driver.

  • A company wants to hire someone to perform tasks X, Y and Z. It's difficult to cleanly evaluate someone's ability to do these tasks in a short amount of time, so they do their best to construct a task A which is easy to test, and such that most people who can do A can also do X, Y and Z.

    Now someone comes along and builds a machine that can do A. It turns out that while for humans, A was a good indicator of X, Y and Z, for the machine it is not. A is easy for the machine, but X, Y and Z are still difficult.

    This isn't a sign that the company was wrong to ask A, nor is it a sign that they could just hire the machine.

  • It's because coding interview questions aren't so much assessing job skills as much as they are thinly veiled IQ tests.

    I think if it was socially acceptable they'd just do the latter.

  • This is a great point. Though what if the answer is that the company can hire that AI to solve a significant fraction of its actual problems? People who do the assessments and decide what features should look like are often called managers (product, engineering, etc.).

    For a while I’ve been skeptical that the rate of hiring of engineers would change significantly because of LLMs, but I’m starting to feel like maybe I’m wrong: it’s already changing, and companies are looking toward AI to lower costs and require fewer humans. In that case they are probably still going to want people who are technically exceptional - maybe even more so - but who are able and willing to create, integrate, and babysit AI-generated code, and also do PM- and EM-style feature management.

    If companies are slowing hiring due to AI, I would expect interviews to get worse before they get better.

  • > For example, it could be that the company wants the engineer to do some kind of assessment whether a feature should be implemented at all, and if yes, in what way. Then you could, in an interview, give a bit of context and then ask the candidate to think out loud about an example feature request.

    So a Product Manager?

    • In most companies every engineer above a junior level is expected to pass features and bugfixes through their common sense filter and provide feedback. Product managers and designers aren't infallible and sometimes lack knowledge about the system or product that an engineer might have.

      You can't just take requirements and churn out code without a critical eye at what you're doing.

    • Maybe.

      Maybe now, or maybe in a year or two, AI coding tools will be good enough that a single semi-technical person can be Product Manager for a small product and implement all the features through AI/LLM tools.

      Probably not for something of the complexity of Google Maps, but for a simpler website with some interactive elements, that could work.

      But then, this was just an example. There can be lots of reasons that companies still need engineers, my point was that they need to think about these reasons, and then use these reasons to decide how to select their engineers.

> Using apps like GitHub Co-pilot and Cursor to auto-complete code requires very little skill in hands-on coding.

this is a crazy take in the context of coding interviews. first, it's quite obvious if someone is blindly copying and pasting from cursor. and since figuring out what to do is a significant portion of the battle, if you can get cursor to solve a complex problem, elegantly, and in one try, the likelihood that you're actually a good engineer is quite high.

if you're solving a tightly scoped and precise problem, like most coding interviews, the challenge largely lies in identifying the right solution and debugging when it's not right. if you're conducting an interview, you're also likely asking someone to walk through their solution, so it's obvious if they don't understand what they're doing.

cursor and copilot don't solve for that; they make it much easier to write code quickly once you know what you're doing.

I was asked by an SME to code on a whiteboard for an interview (in 2005? I think?). I asked if I could have a computer, they said no. I asked if I would be using a whiteboard during my day-to-day. They said no. I asked why they used whiteboards, they said they were mimicking Google's best practice. That discussion went on for a good few minutes and by the end of it I was teetering on leaving because the fit wasn't good.

I agreed to do it as long as they understood that I felt it was a terrible way of assessing someone's ability to code. I was allowed to use any programming language because they knew them all (allegedly).

The solution was a pretty obvious bit-shift. So I wrote memory registers up on the board and did it in Motorola 68000 Assembler (because I had been doing a lot of it around that time), halfway through they stopped me and I said I'd be happy to do it again if they gave me a computer.

They offered me the job. I went elsewhere.

  • You should’ve asked them “do you also mimic google’s compensation?”

    • I work for a faang subsidiary. We pay well below average salary and equity. We finally got one nice perk, a very good 401k match. A few months later it was announced that the 401k match would be scaled back "to come in line with what our parent company offers". I thought about asking "will we be getting salaries or equity in line with what our parent company offers?" but that would have been useless. Management doesn't care. I'm job hunting.

    • Oh man I needed that in the clip for like a dozen interviews a decade ago.

    • This zinger I have to remember for the next time someone tries this whiteboard BS on me!

  • > I was asked by an SME to code on a whiteboard for an interview (in 2005? I think?). I asked if I could have a computer, they said no. I asked if I would be using a whiteboard during my day-to-day. They said no. I asked why they used whiteboards, they said they were mimicking Google's best practice.

    This looks more like a culture fit test than a coding test.

  • Yeah, very bad fit. Surprised they made an offer.

    Folks getting mad about whiteboard interviews is a meme at this point. It misses the point. We CAN'T test you effectively on your programming skill base. So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.

    It isn't that your interviewer knew all the languages, but that the language didn't matter.

    I didn't get this until I was giving interviews. The instructions on how to give them are pretty clear. The goal isn't to "solve the puzzle" but instead to demonstrate you can reason about it effectively, communicate your knowledge and communicate as part of problem solving.

    I know many interviewers also didn't get it, and it became just "do you know the trick to my puzzle". That pattern of failure is a good reason to deprecate whiteboard interviews, not "I don't write on a whiteboard when I program in real life".

      > We CAN'T test you effectively on your programming skill base. So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.

      Except, that's not what happens. In basically every coding interview in my life, it's been a gauntlet: code this leetcode medium/hard problem while singing and tapdancing backwards. Screw up in any way -- or worse (and also commonly) miss the obscure trick that brings the solution to the next level of algorithmic complexity -- and your interview day is over. And it's only gotten worse over time, in that nowadays, interviewers start with the leetcode medium as the "warmup exercise". That's nuts.

      It's not a one off. The people doing these interviews either don't know what they're supposed to be looking for, or they're at a big tech company and their mandate is to be a severe winnowing function.

      > It isn't that your interviewer knew all the languages, but that the language didn't matter.

      I've done enough programming interviews to know that using even a marginally exotic language (like, say, Ruby) will drastically reduce your success rate. You either use a language that your interviewer knows well, or you're adding a level of friction that will hurt you. Interviewers love to say that language doesn't matter, but in practice, if they can't know that you're not making up the syntax, then it dials up the skepticism level.

      4 replies →

    • > can you have a real conversation (with a whiteboard to help) about how to solve the problem

      And do you frame the problem like that when giving interviews? Or are candidates led to believe working code is expected?

      4 replies →

    • > The goal isn't to "solve the puzzle" but instead to demonstrate you can reason about it effectively, communicate your knowledge and communicate as part of problem solving.

      ...while being closely monitored in a high-stakes performance in front of an audience of strangers judging them critically.

      5 replies →

    • > So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.

      Everybody says that, but the reality is they don't, imho. If you don't pass the pet-question quiz, "they don't know how to program" or are a "faker", etc.

      I've seen this over and over, and if you want to test for a real conversation you can ask about their experience. (I realize the challenge with that is that young interviewers aren't able to do it very well with more experienced people.)

    • +1 to all this. It still surprises me how many people, even after being in the industry for years, think the goal of any interview is to “write the best code” or “get the right answer”.

      What I want to know from an interview is if you can be presented an abstract problem and collaboratively work with others on it. After that, getting the “right” answer to my contrived interview question is barely even icing on the cake.

      If you complain about having to have a discussion about how to solve the problem, I no longer care about actually solving the problem, because you’ve already failed the test.

      3 replies →

  • > They offered me the job. I went elsewhere.

    I am so happy that you did this. We vote with our feet, and sadly, too many tech folks are unwilling to use their power or have golden-handcuff tunnel vision.

  • >I was allowed to use any programming language because they knew them all (allegedly).

    After 30 years of doing this, I find that typically the people who claim to know a lot often know very little. They're so insecure in their ability that they've tricked themselves into not learning anything.

  • Are there people who still aren't aware that FAANGs developed this kind of thing to bypass H1-B regulations?

  • 2005? You were in the right.

    Today? Now that's when it gets tricky. How can we know you are not one of these prompt-"engineer" copy-pasters? That's the issue being discussed.

    20 years and many new technologies of difference.

    • What is the functional difference between copying an AI answer and copying a StackOverflow answer, in terms of it being "cheating" during an interview?

      I think the entire question is missing the forest for the trees. I have never asked a candidate to write code in any fashion during an interview. I talk to them. I ask them how they would solve problems, chase down bugs, or implement new features. I ask about concepts like OOP. I ask about what they've worked on previously, what they found interesting, what they found frustrating, etc.

      Languages are largely teachable, it's just syntax and keywords. What I can't teach people is how to think like programmers need to: how to break down big, hard problems into smaller problems and implement solutions. If you know that, I can teach you fucking Swift, it isn't THAT complicated and there's about 5 million examples of "how do I $X" available all over the Internet.

      16 replies →

I've accidentally been using an AI-proof hiring technique for about 20 years: have a junior developer bring code with them and ask them to explain it verbally. You can then talk about what they would change, how they would change it, what they would do differently, whether they've used patterns (on purpose or by accident) and what the benefits/drawbacks are, etc. If they're a senior dev, we give them, on the day, a small but humorously nasty chunk of code and ask them to reason through it live.

Works really well, and it mimics what we find is the most important bit about coding.

I don't mind if they use AI to shortcut the boring stuff in the day-to-day, as long as they can think critically about the result.

  • Yep. I've also been using an AI-proof interview for years. We have a normal conversation, they talk about their work, and I do a short round of well-tested technical questions (there's no trivia, let's just talk about some concepts you probably encounter fairly regularly given your areas of expertise).

    You can tell who's trying to use AI live. They're clearly reading, they don't understand the content of their answers, and they never say "I don't know." So if you ask a follow-up, or even "are you sure?", they start to panic. It's really obvious.

    Maybe this is only a real problem for the teams that offload their interviewing skills onto some leetcode nonsense...

  • This is a fine way. I’ll say that the difference between a senior and a principal is that the senior might snicker but the principal knows that there’s a chance the code was written by a founder.

    • And if the Principal is good, they should stand up and say exactly why the code is bad. If there's a reason to laugh because it is cliche bad, they should say so.

      If someone gave me code with

      if (x = 7) { ... } (assignment where a comparison was intended) as part of a C eval.

      Yeah, you'll get a sarcastic response back because I know it is testing code.

      What I think people ignore is that personality matters. Especially at the higher levels. If you are a Principal SWE you have to be able to stand up to a CEO and say "No, sir. I think you are wrong. This is why." In a diplomatic way. Or sometimes. Less than diplomatic, depending on the CEO.

      One manager who hired me was trying to figure me out. So he said (and I think he was honest at the time), "You got the job as long as you aren't an axe murderer."

      To which I replied deadpan: "I hope I hid the axe well." (To be clear to all reading, I have never killed someone, nevermind with an axe! Hi FBI, NSA, CIA and pals!)

      Got the job, and we got along great, I operated as his right hand.

Nowadays I am on the other side of the fence: I am the interviewer. We are not a FAANG, so we just use a SANE interview process. Single interview, we ask the candidate about his CV and what his expectations are, what are his competences and we ask him to show us some code he has written. That's all. The process is fast and extremely effective. You can weed out weak candidates in minutes.

  • That process might work for your company precisely because you are not FAANG. You don't get hundreds of applicants that are dying to get in, so people don't have that strong of a motivation to do anything it takes (including lying) to get the job.

    • I’ve worked at a company with 150,000 employees. The interview process was pretty much as described here. There is absolutely no reason a Big Co needs to operate any differently.

  • >we ask him to show us some code he has written

    How do you expect them to get access to the proprietary internal Git codebase, and approval from their employer's lawyers to show it to third parties, during the interview?

    Sounds like you're only selecting FOSS devs and nothing more.

    • Most people have still written code for school or a hobby project. Maybe I'm missing empathy, but I cannot understand how some developers have no code to show.

      If that's the case however, just let them make a small project over the weekend and then do another interview where you ask stuff about what they've made. It's not that deep

      49 replies →

    • My worst code is always what I wrote yesterday. Often what’s missing is context, unless I comment ad nauseam. Sure, I didn’t write complete tests, obey open/closed principles, or abstract into factory functions. The code I send from my hobby projects is likely a mess, because finishing on my own time, by my own unpaid constraints, wills it to be so.

    • Maybe you forked a library because of reasons. You can tour the original repo and explain the problems. I have at least one of those examples for each time the legal or confidentiality department stepped in.

      4 replies →

  • We do this too, works fine. We ask open ended questions like, "What's your favorite thing you've done in your career and why?" and "What was the most challenging project in your career and why?" If you listen, you can get a lot of insight from just those two questions. If they don't give enough detail, we'll probe a little.

    Our "gotcha," which doesn't apply to most languages anymore is, "What's the difference between a function and a procedure." It's a one sentence answer, but people who didn't know it would give some pretty enlightening answers.

    Edit: From the replies I can see people are a little defensive about not knowing it. Not knowing it is OK, because it was a question I asked people 20 years ago about a language long dead in the US. I blame the defensiveness on how FUBAR the current landscape is. Giving a nuanced answer to show your depth of knowledge is actually preferred. A one sentence answer is minimal.

    I'm editing this because HN says I'm posting too fast, which is super annoying, but what can I do?

    • > We do this too, works fine. We ask open ended questions like, "What's your favorite thing you've done in your career and why?" and "What was the most challenging project in your career and why?" If you listen, you can get a lot of insight from just those two questions. If they don't give enough detail, we'll probe a little.

      The problem is: there is a very negative incentive to give honest answers. If I were to answer these questions honestly, I'd bring up some very interesting theorems (related to some deep algorithmic topics) that I proved in my PhD thesis. Yes, I would have loved to stay in academia, but I switched to industry because of the bad job prospects in academia - this is not what interviewers want to hear. :-(

      > "What's the difference between a function and a procedure." It's a one sentence answer

      The terminology here differs quite a lot in different "programming communities". For example

      > https://en.wikipedia.org/w/index.php?title=Procedure&oldid=1...

      says: "Procedure (computer science), also termed a subroutine, function, or subprogram",

      i.e. there is no difference. On the other hand, Pascal programmers strongly distinguish between functions and procedures; here functions return a value, but procedures don't. Programmers who are more attracted to type theory (think Haskell) would rather consider "procedures" to be functions returning a unit type. If you rather come from a database programming background, (stored) procedures vs functions are quite different concepts.

      I could go on and on. What I want to point out is that this topic is much more subtle than a "one sentence answer".
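
      For instance, in Python the distinction is conventional rather than enforced; a quick sketch of the two styles:

          # "Function" in the Pascal sense: computes and returns a value.
          def area(radius):
              return 3.14159 * radius * radius

          # "Procedure" in the Pascal sense: called for its side effect.
          # Python models it as a function returning None, its stand-in
          # for a unit type.
          def print_area(radius):
              print(f"area = {area(radius)}")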

      3 replies →

    • Here's an interesting thought on your "gotcha" - I'm 57 years old, have been programming as a career for over 30 years across a lot of languages, and I have no idea what the difference is.

      20 replies →

    • > Our "gotcha," which doesn't apply to most languages anymore is, "What's the difference between a function and a procedure."

      My answer would be along the lines of "It's 2025, no one has talked about procedures for 20+ years"

  • > Single interview, we ask the candidate about *his* CV and what *his* expectations are, what are *his* competences and we ask *him* to show us some code *he* has written

    You... might want to think about what implicit biases you might be bringing here

What I've been thinking about leetcode medium/hard as a 30-45 minute tech interview (there are a few minutes of pleasantry and 10 minutes reserved for questions) is that, taking in good faith that candidates are not "cheating", you are really only likely to reveal two camps of people: one that approaches the problem from first principles, and another that already knows the solution.

Take the maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.
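For reference, here is what the expected answer looks like once you already know the trick (a minimal sketch of Kadane's algorithm; the insight, not the code, is the hard part):

```python
def max_subarray(nums: list[int]) -> int:
    """Kadane's algorithm: maximum subarray sum in O(n) time."""
    best = ending_here = nums[0]
    for x in nums[1:]:
        # A negative running sum never helps, so either extend the current
        # subarray or restart at x, whichever is larger.
        ending_here = max(x, ending_here + x)
        best = max(best, ending_here)
    return best

assert max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6  # 4 + (-1) + 2 + 1
```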

I often see comments like: this person had this huge storied resume but couldn't code their way out of a paper bag. Now having been that engineer stuck in a paper bag a few times, I think this is a very narrow way to view others.

I don't know the optimal way to interview engineers. I do know the style of interview that I prefer and excel at[0], but I wouldn't be so naive as to think that the style that works for me would work for all. Often I chuckle about an anecdote from the fabled I.P. Sharp: Ian Sharp would set a light meter on his desk and measure how wide an interviewee's eyes would get when he explained APL to them. A strange way to interview, but is it any less strange than interviewing people via leetcode problems?

0: I think my ideal tech screen interview question is one that:

1) has test cases;

2) ramps the test cases up gradually in complexity;

3) doesn't reveal the complexity all at once; the interviewer "hides their cards," so to speak;

4) is focused on a data structure rather than an algorithm, such that the algorithm falls out naturally rather than serving as the focus;

5) gives the candidate the opportunity to weigh tradeoffs, make compromises, and cut corners given the time frame;

6) doesn't combine big ideas (i.e. you shouldn't have to parse complex input and do something complicated with it); pick a single focus.

Interviews like this that I have participated in and enjoyed: construct a Set class (union, difference, etc.); implement an RPN calculator (ramp up the complexity by introducing multiple arities); create a range function that works like the Python range function (for junior engineers, this one involves a function with different behavior based on arity).
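For illustration, a minimal sketch of the RPN calculator screen mentioned above, before any of the hidden complexity (multiple arities, error handling for bad input) has been revealed:

```python
def rpn(tokens: list[str]) -> float:
    """Evaluate a reverse Polish notation expression, e.g. "3 4 +" -> 7."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack: list[float] = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()  # operand order matters for - and /
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    if len(stack) != 1:
        raise ValueError("malformed expression")
    return stack[0]

assert rpn("3 4 + 2 *".split()) == 14.0
```

The interviewer's hidden cards then become the ramp-up: unary operators, malformed input, and so on.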

  • >Take the maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.

    This is something that drives me nuts in academia when it comes to exam questions. I once took an exam that asked us to invent vector clocks from whole cloth, basically, having only knowledge of a basic Lamport clock for context. I think one person got it--and that person had just learned about vector clocks in a different class. Given some time, it's possible I could have figured it out. But on an exam, you've got like 10-15 minutes per question.
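    For context, the core of a vector clock is only a few lines once you have seen it before (a hedged sketch, omitting the increment-on-send rules), which is exactly why it feels inventable in 15 minutes to someone who already knows it:

    ```python
    Clock = dict[str, int]  # process id -> count of events seen from that process

    def merge(a: Clock, b: Clock) -> Clock:
        """On message receipt: take the element-wise max of the two clocks."""
        return {p: max(a.get(p, 0), b.get(p, 0)) for p in a.keys() | b.keys()}

    def happened_before(a: Clock, b: Clock) -> bool:
        """a -> b iff a <= b component-wise and the clocks differ somewhere."""
        ps = a.keys() | b.keys()
        return (all(a.get(p, 0) <= b.get(p, 0) for p in ps)
                and any(a.get(p, 0) < b.get(p, 0) for p in ps))
    ```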

    The funny thing about it is that I do the same damn thing from the other side all the time when working with students. It's incredibly tempting once you know the solution to a problem (especially if you didn't "solve" it yourself, but had the solution presented to you already) to present the question as though it has an obvious solution and expect somebody else to immediately solve it.

    I'm aware of the effect, I've experienced it many times, and I still catch myself doing it. I've never interviewed a candidate for a job, but I can only imagine how tempting it would be to fall into that trap.

    • Yes that's a tricky one.

      When I'm interviewing a candidate, I'm often asking myself whether a question is just something I happen to know and therefore expect the candidate to know too, or whether it's crucial to doing the job.

      Sometimes it may not be fair to expect a random developer to be familiar with a specific concept. But at the same time it might be critical to the kind of work we're doing.

The current job market is so messed up that I honestly can't see myself getting a job until we hit a wall and people start using their brains again.

I have 26 years of solid experience, been writing code since I was 8.

There should be a ton of companies out there just dying to hire someone with that kind of experience.

But I'm not perfect, no one is; and faking doesn't work very well for me.

  • > There should be a ton of companies out there just dying to hire someone with that kind of experience.

    heh.. they are probably dead already?

    I have even more years.. But this time I have been looking since.. September? Applying 1-2 per day, on average.. Widening the fishing net each month.. ~2% showed some interest.. but no bingo.

    "overqualified" is about half of the "excuses" :/

    Time to plant tomatoes maybe..

    • Or maybe join forces and show them how it's really done?

      Not that I mind growing tomatoes, quite the opposite :)

  • I am with you! Been programming since I was 10 and have 20 YoE. Many of my prototypes have grown into full-fledged products, I have 40+ published papers, and I am regularly sought out for advice and help by those who know me. Everywhere I have been, I am always told I am a good catch.

    However, I won't do leet coding. I want to hear about why I should come work for you. What about my work makes you think I could help you with your problem? Then let's have a talk about your problems and where I can create value for you.

    My experience in hiring is that leet coders are good one-trick ponies, but long term they don't become technical peers.

    • Part of the problem is there just aren't a lot of people out there who can correctly judge that level of experience, and looking up the spectrum tends to simply look weird.

      1 reply →

I mostly skipped the technical questions in the last few interviews I have conducted. I have a conversation, ask them about their career, about job changes, about hobbies, what they do after work. If you know the subject, skilled people talk a certain way, whether it is IT, construction, sailing.

I do rely on HR having, hopefully, done their job and validated the work history.

I do have one technical question that started out as fun and quirky but has actually shown more value than expected. I call it the desert island cli.

What are your 5 linux cli desert island commands?

Having a hardware background, today mine are: vi, lsof, netcat, glances, and I am blanking on a fifth. We have been doing a lot of Terraform lately.

I have had several interesting responses:

Manager-level candidate with 15+ years of hands-on experience: he thought it was a dumb question because it would never happen. He became the team's manager a few months after hiring. He was a great manager and we are friends.

Manager level to replace the dumb question manager. His were all Mac terminal eye candy. He did not get the job.

Senior-level SRE hire with a software background: he only needed two, emacs and a compiler; he could write anything else he needed.

  • > I have a conversation, ask them about their career, about job changes, about hobbies, what they do after work. If you know the subject, skilled people talk a certain way, whether it is IT, construction, sailing.

    My experience differs a lot. Many insanely skilled people are somewhat "weird" (including possibly

    - being a little on the spectrum,

    - "living a little bit in their own world",

    - having opinions on topics that are politically "inappropriate" (not in the sense of "being on the 'wrong' side of a political fence", but rather in the sense of "an opinion that is quite different than what you have ever heard in your own bubble", and is thus not "socially accepted")

    - being a little bit "obnoxious" (not in a bad sense, but in a sense that might annoy a particular kind of person))

    What you consider to be "skilled people" is what I would rather call "skilled self-promoters" (or possibly "smooth talker"). "Skilled people" and "skilled self-promoter" are quite different breeds of people.

    • > My experience differs a lot. Many insanely skilled people are somewhat "weird" (including possibly

      I am actually a bit weird myself, so I can relate.

      > What you consider to be "skilled people" is what I would rather call "skilled self-promoters". "Skilled people" and "skilled self-promoter" are quite different breeds of people.

      I don't mean that they have told me that they are skilled, or that their resume has implied it. I mean that they actually have the skills. Self-promoters that don't know the information always look good on paper, but after a few minutes of talking to them you can tell that they don't quite match.

      Before IT, I was a live sound engineer: TV, theater, music. There was also an entertainment university starting up around the same time. They were pumping out tons of "trained" engineers that looked good on paper but couldn't mix for shit. I think we can blame them for the shitification of pop music.

      1 reply →

  • I have an ice breaker type question which is “what’s something (tool, tech, whatever) you are interested/excited about and wish people knew more about?” Selfishly, interviewing is kind of boring, so I’m always looking to learn something new.

    Sadly, out of 100s of people, I’ve probably only gotten an interesting response a handful of times. Mostly people say some well known tech from the job description.

    I never held that against anyone, but the people who had an interest in something were more fun to work with.

  • >I mostly skipped the technical questions in the last few interviews I have conducted. I have a conversation

    Sir, you have attained dizzying intellectual heights that few men have.

    My comment is meant to be a compliment, not snarky. And indeed I have noticed that the best people I have encountered can often size people up accurately with very general questions often on unrelated subjects.

  • > What are your 5 linux cli desert island commands?

    Are you familiar with busybox?

  • > Manager level to replace the dumb question manager. His were all Mac terminal eye candy. He did not get the job

    Huh? Please explain

    • We hired the guy who said it was a dumb question, and he became our manager. He then decided to retire and we had to replace him. One of the candidates answered the 5 CLI question with terminal eye candy, not functional commands. He was not hired for the job.

The problem isn't AI; the problem is that companies don't know how to properly select between candidates, and they don't apply even the basics of psychometrics [1]. Do they do item analysis of their custom coding tests? Do they analyse new hires' performance and relate it to their interview scores? I seriously doubt it.

Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.

[1] https://en.wikipedia.org/wiki/Psychometrics

  • > Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.

    What kind of desperate candidate would agree to that? Also, what do you expect to see from the person in a few weeks? Usual onboarding (company + project) will take like 2-3 months before a person is efficient.

    • Candidate would be compensated, obviously. That's why it's expensive.

      You don't need him to become efficient. Also, I don't think it is always necessary to have such long onboarding. I'll never understand why a new hire (at least in a senior position) can't start contributing after a week.

      8 replies →

    • If you work with Boring Technology, your onboarding process has no reason to be longer than a week, unless you're trying to make the non-tech parts of the role too interesting.

      4 replies →

  • How do you control for confounders and small data?

    For data size, if you're a medium-ish company, you may only hire a few engineers a year (1000 person company, 5% SWE staff, 20% turnover annually = 10 new engineers hired per year), so the numbers will be small and a correlation will be potentially weak/noisy.

    For confounders, a bad manager or atypical context may cause a great engineer to 'perform' poorly and leave early. Human factors are big.

    • Sure, psychological research is hard because of this, but that's not what I'm proposing - I'm talking about just having some data on the predictive validity of the hiring process. If there's some coding test: is it reliable and valid? Aren't some items redundant because they're too easy or too hard? Which items have the best discrimination parameter? How do the total scores correlate with, e.g., the length of test-takers' tenures? (A sketch of what I mean follows below.)

      Sure, the confidence intervals will be wide, but it doesn't matter, even noisy data are better than no data.

      Maybe some companies already do this, but I didn't see it (though my sample is small).
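      A hedged sketch of what that analysis could look like (the data here is made up; with 0/1 item scores, plain Pearson correlation against an outcome is the classic point-biserial discrimination index):

      ```python
      import statistics

      def pearson_r(xs: list[float], ys: list[float]) -> float:
          """Pearson correlation; with 0/1 item scores this is point-biserial."""
          mx, my = statistics.fmean(xs), statistics.fmean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sum((x - mx) ** 2 for x in xs) ** 0.5
          sy = sum((y - my) ** 2 for y in ys) ** 0.5
          return cov / (sx * sy)

      # Hypothetical data: did each hire pass interview item 3, and their tenure.
      item_passed = [1, 0, 1, 1, 0, 1, 0, 1]
      tenure_months = [30, 6, 24, 36, 12, 18, 9, 28]
      print(f"item discrimination vs tenure: {pearson_r(item_passed, tenure_months):.2f}")
      ```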

My last interview, for the job I'm currently employed in, asked for a take-home assignment where I was allowed to use any tool I'd use regularly, including AI. The live coding interview that followed, iterating on the take-home, used a similar process. I personally used AI to speed up writing initial boilerplate and test suites.

I fail to see why this wouldn't be the obvious choice. Do we disallow linters or static analysis in interviews? This is a tool, and checking for skill and good practices in using it makes complete sense.

  • As someone on the other side of the table, I don't care if you used AI to complete a take-home project. I care if you can explain the strengths and weaknesses of the approach it took, or whether you chose to steer it in one direction or another. It usually becomes quite clear who actually understands what the AI did for them.

There’s no other industry* that interviews experienced people the way we do. So maybe just do what everyone else does.

Everyone is so terrified of hiring someone that can’t code, but the most likely bad hires and the most damaging bad hires are bad because of things that have nothing to do with raw coding ability.

*Except the performing arts. The way we interview is pretty close to the way musicians are interviewed, but that’s also really similar to their actual job.

I've been arguing that "AI" has very little impact on meaningful technical interviews, that is ones that don't test for memorization of programming trivia: https://blog.sulami.xyz/posts/llm-interviews/

  • We have been interviewing people who are obviously using covert AI helper tools. Ask them a question and they respond with a coherent answer, but they are just reading off of a window we can't see.

    In some cases it is obvious they are blathering a stream of words they don't understand. But others are able to hold something resembling a coherent conversation. We also have to allow for the fact that most people we interview aren't native English speakers, and are talking over Teams. It can be very hard to tell if they are cheating.

    Asking questions to probe their technical skills is essential, otherwise you are just selecting for people who are good at talking and self promotion. We aren't just asking trivia questions.

    We also give a simple code challenge, nothing difficult. If they have a working knowledge of the language, they should be able to work through the problem in 30 minutes, and we let them use an IDE and google for things like regex syntax.

    Some of them are obviously using an AI, since they just start typing in a working solution. But in theory they could be a Scala expert who remembers how to use map plus a simple regex...

  • A couple of weeks ago I interviewed at a place where I had to do a take-home exercise. That's fine, I don't mind. No leetcode. Just my own IDE, my own shortcuts, and writing a piece of code that solves a problem.

    I was asked whether I used AI/LLM for the solution. I didn't. I felt like using an LLM to solve the problem for me wasn't the right way of showcasing knowledge. The role was for some form of 'come in with knowledge and help us'.

    The response to that was basically: everybody here uses AI.

    I declined the follow-up interview, as I felt that if all you have is the speed of AI to be ahead of your competitors, you're not really building the kind of things that I want to be a part of. It basically implies that the role is up in the air as soon as the AI gets better.

    • When I started coding I did it in notepad. I thought it was hardcore and cool. I was young and stupid. Then I adopted an IDE and I became much better at writing code.

      To me, AI is just another tool that helps me solve problems with code. An autocomplete on steroids. A context-aware Stack Overflow search. Not wanting to adopt it, or to even work somewhere colleagues use it, sounds to me like coding in Notepad and, in the process, scoffing at those who use an IDE.

      Besides, if AI gets to the point it can replace you, it will replace you. Better to start learning how to work with it so you can fill whatever gap AI can't.

      1 reply →

Prediction: FAANGs will come up with something clever, or random, or will just fly everyone onsite; they are so rich and popular, they can filter by any arbitrary criteria.

Second-rate companies will keep some superficial coding, but will start to emphasize more of the verbal parts like system design and retrospective. Which sucks, because those are totally subjective and mostly filters for whoever can BS better on the spot and/or cater to the interviewer's mood and biases better.

My favorite still: in-person pair programming for a realistic problem (could be made-up or shortened, but similar to the real ones on the job). Use whatever tools you want, but get the correct requirements and then explain what you just did, and why.

A shorter/easier task is to code review/critique a chunk of code, could even just print it out if in person.

  • It's not that hard. Just ask them to explain the code. Then ask them how they'd change it for several different scenarios.

    • I've taken this approach, and found that it's trivially easy to distinguish people relying on LLMs from people who have thought the problem through and can explain their own decision-making process.

      I had a couple of people who, when asked to explain specific approaches reflected in their code, very obviously typed my question right back into ChatGPT and then recited its output verbatim. Those interviews came to an end rather quickly.

      One of my favorite ones was when I asked a candidate to estimate the complexity of their solution, and ChatGPT got it wrong, giving O(log(n)) for an O(n) algorithm. When I asked leading questions to see if the candidate could spot where the error came in, they started reciting a dictionary definition of computational complexity verbatim, and could not address the specifics of the problem at all.

This whole conversation is depressing me. When I left work a couple years ago due to health reasons, AI was just beginning to become a problem. Now, thanks to a clinical study, I may be able to return to work, and it sounds like the industry has changed drastically.

Not looking forward to it.

  • It hasn’t. Most businesses have continued operating the way they have for years.

    SV Startup hiring is the most trendy and not representative.

  • I think the effects at the moment are highly exaggerated in the tech media compared with the reality on the ground.

    How long that will remain true is a very open question, where different folks have widely differing timelines on when they expect AI to have highly meaningful impacts.

How about paid internships as a way to filter candidates? As in, hire a candidate for a small period of time, like 2 weeks or something, and have them work on a real task with a full-time employee and use their performance on that to decide whether or not to hire.

I realize it's not easy for smaller companies to do, but I think it's the single best way to see if someone's fit for a job

Our tech screen is having the candidate walk me through a small project that I created to highlight our stack. They watch my screen and solve a problem to get the app running, then they walk me through a registration flow from the client to the server and back to the client. There are no gotchas, but there are opportunities to improve on the code (left unstated... some candidates will see something that is suboptimal and ask why, or suggest some changes).

We get to touch on client and browser issues, GraphQL, Postgres, Node, TypeScript (and even the various libraries used). It includes basic CRUD functionality, a third-party API integration, basic security concerns, and more. It's just meant to gauge a minimal level of fluency for people who will be in hands-on-keyboard roles (juniors up to leads, basically). I don't think anyone has found a way to use AI to help them (yet), but if this is too much for them they will quickly fall flat in the day-to-day job.

Where we HAVE encountered AI is in the question/answer portion of the process. So far many of those have been painfully obvious but I'm sure others were cagier about it. The one incident that we have had that kind of shook us was when someone used a stand-in to do the screen (he was fantastic, lol) and then when we hired him it took us about a week to realize that this was a team of people using an AI avatar that looked very much like the person we interviewed. They claimed to be in California but were actually in India and were streaming video from a Windows machine to the Mac we had supplied for Teams meetings. In one meeting (as red flags were accumulating) their Windows machine crashed and the outline of the person in Teams was replaced by the old school blue screen of death.

I'm someone who hated leetcode-style interviews for the longest time, but I'm starting to come around on them. I get that these styles of questions are easy to game, but I still think they have _some_ value. The point of these questions was supposed to be to test your ability to problem solve and come up with a good solution given the tools you knew. That being said, I don't think every company should be using this type of question for their interviews. I think leetcode-style questions should be reserved for companies that are pushing the boundary of the industry, since they're exploring uncharted territory and need people who can come up with unique solutions to problems no one really knows how to solve. I think most companies would be fine with some kind of pairing problem, since most people are solving engineering problems rather than computer science problems. But none of this matters since, even if we went that direction as an industry, we all know the business people would fuck it up somehow anyway.

  • > reserved for companies that are pushing the boundary of the industry

    In a world where every company believes (or wants to believe) that it is doing ground-breaking, bleeding-edge work (see any tech company blog: you can only find hyped technologies in there), I do not think one can expect companies to fairly assess whether they really are doing such work.

I had no idea people took hackerrank as a serious signal rather than as a tool for recent graduates to track interview prep progress. Surely it has all the same issues AI does: you have no way of verifying that the person who takes the interview actually is responsible for that signal.

I don't see AI as a serious threat to the interview process unless your interview process looks a lot like hackerrank.

  • Your “unless” covers a huge swath of this industry, at the low end and at the high end. Excluding places that do that leaves you with what exactly? Boutique shops filled with 20 year veterans?

    • What do you mean by the "high" end? I would consider this sort of interview style necessarily precluding such a place from being considered a high-quality work-place. Not only is it a miserable way to interview, it's not an effective signal for engineer quality beyond rapid code snippet production.

      > Excluding places that do that leaves you with what exactly? Boutique shops filled with 20 year veterans?

      We are on a VC forum—I imagine small shops focused on quality are quite common here.

      2 replies →

I feel like we, SWEs, have been over-engineering our interview process. Maybe it's time to simplify it, for example, just ask questions based on the candidate's resume instead of coming up with random challenges. I feel like all the new proposals seem overly complicated, and nobody, interviewer or interviewee, is happy with any of them.

  • Definitely over-engineering. But I also think the industry is just extremely bad at hiring anything above junior or entry-level. Job postings are so generic and interchangeable between companies that they don't actually tell you what the role is or what the company is looking for. Everyone wants to cast the widest possible net so that they catch some wunderkind genius out of thousands. Then, they wonder why they can't find the exact person they're looking for to solve the problem they're filling the role for.

    In reality, job postings should be incredibly specific, with specificity rising as the role requires more experience and problem solving. You'll get fewer applicants (or will be able to clearly screen out the people who don't meet the specific requirements), but you'll get ones that actually match what you are looking for and can actually solve the problem your company is trying to solve by filling the role. Then the conversation/interview is much more important and both sides feel like they have some "stakes in the game".

  • This risks hiring candidates who can present themselves and their past projects very well but fail to actually write code and ship anything on the job. I've seen it happen.

  • You would be amazed at how many people with reasonably strong resumes can't write _any_ code. Google for fizzbuzz; it's a dumb problem, but candidates often can't solve similar problems even in a _take-home_ interview.
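    For anyone who hasn't seen it, this is the whole of FizzBuzz; the point of the screen is that a surprising share of applicants cannot produce even this:

    ```python
    def fizzbuzz(n: int) -> str:
        """Multiples of 3 -> "Fizz", of 5 -> "Buzz", of both -> "FizzBuzz"."""
        word = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
        return word or str(n)

    print(", ".join(fizzbuzz(i) for i in range(1, 16)))
    # 1, 2, Fizz, 4, Buzz, Fizz, 7, 8, Fizz, Buzz, 11, Fizz, 13, 14, FizzBuzz
    ```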

Licensing. We do the Leetcode interview in a controlled testing center. When you apply for a position, I look up your license number, then I know you can leetcode without wasting any of my developer resources on determining that.

  • > Licensing. We do the Leetcode interview in a controlled testing center.

    Congratulations, you are now a "Certified Leetcoder (tm)". :-(

    Seriously: what a lot of people write down here is that a lot of programming jobs don't involve code puzzle skills, but are often rather about putting stuff/APIs together in the currently fashionable framework.

    This makes becoming a Certified Leetcoder (tm) just another useless hoop to jump through.

    (Just to be clear: for those few programming jobs that demand the employee to solve algorithmic puzzles regularly, doing them in a job interview makes sense. But these jobs are rare.)

    • > This makes becoming a Certified Leetcoder (tm) just another useless hoop to jump through.

      And this differs from the status quo how? Employers obviously find value in this signal for better or worse. We’re just making it so it only needs to be done once, by trained proctors, instead of for every position you apply for.

  • Isn’t this basically triplebyte? I did their process and every company still wanted to do their leetcode interview after.

    • Former head of product there here: no, we didn't do this kind of identity verification. That would have prohibitively damaged people's willingness to actually do the interview. We did use various means to try to identify people signing up multiple times, and we caught plenty of people trying to duplicate themselves that way, but you didn't have to physically go to a center to do our interview.

      I've considered something like that for my current company, which is doing basically the same thing, but:

      (a) this has not, in practice, been a problem for us in identifying good candidates

      and, far less importantly:

      (b) you need very high scale to have interviewers everywhere that candidates are OR you're paying extra for a third-party controlled environment

      (c) scheduling and cancellations become more difficult and costly respectively

When it comes to interviews, I generally stick to asking fairly easy questions but leave out key details. I care a lot more about candidates asking follow-ups and talking through the code they are writing than about what code they actually produce. If a candidate can ask questions when they don't understand something and talk through their thought process, then they are probably going to be a good person to work with. High-level design questions are often pretty valuable as well, I find; I usually don't require code for those, I just ask candidates to talk through how they would design an application.

Thank fuck. They are terrible: being interviewed by CTOs just out of university with no experience, for a "senior in everything" role. They ask you to do some lame assignment, a pet problem, without once looking at 20 years of GitHub repos and open source contributions.

In my uni days, I respected professors who designed exams in a way where students could utilize whatever they had to complete the assignment: the internet, their notes, calculators, etc.

I think the same applies to a good tech interview. Companies should adapt their hiring process to befriend AI, not fight it.

  • What would you suggest?

    • Ask questions that are tricky to cheat on, and evaluate candidates by their thought process in solving a problem and their ability to clarify, discuss, and justify their decisions.

Nah, AI killed stupid tech interviews. You can easily get an idea of someone's competence by literally just talking to them instead of making them do silly homework exercises and testing their rote memorisation abilities.

  • This is the real answer. However to gauge competence you must first have it. The fact that most people don't is why we are in this position in the first place.

So, when AI can pass the tech interview seamlessly, I guess we can just hire it?

Maybe the future will be human shills pretending to be job candidates for shady AI "employment agencies" that are actually just (literally) skins over GPT-6 APIs that sockpuppet minimum-wage developing-nation "hosts"?

It’s simple don’t have a tech interview that does not relate to the job.

Show code, ask questions about it that requires opinion.

  • This oft-repeated talking point lacks perspective on what companies want and how interviews work. It also doesn't address the AI problem.

    Interviews are screening for multiple things, not just the ability to do one specific technical job. More often than not, technical coding ability is not even at the top of the priority list. Interviews are looking for well-rounded candidates who can do more than 1 job. Companies want to know if you can change jobs easily, they want to know if you’re average, better than the average programmer, or exceptional. They want to know if you’ll make a good manager after a few years, how good you are with people, how well you prioritize and communicate.

    I had a professor in college that graded tests with the median skewed low, centered on a D. He complained that the usual practice of putting it on C or B made it so he could clearly see the difference between F-, F, and D- students, while the A students were all clumped together. He wanted to identify the hard workers and superstars in the class, see who was A vs A+ vs A++. It freaked everyone out when grades came out much lower than expected, but he renormalized at the end and people with test scores in the Cs and Ds got As and Bs in the class.

    Be careful what you wish for. It’s competitive right now and interviews that limit screening to ability to do basic job-level coding and don’t screen for knowledge and soft skills and exceptionalism will make it harder for people who are good to demonstrate they’re better than people who are mediocre or use AI. Is that what you want?

I don't think it did, if anyone cares. The way I've been advocating to my colleagues who are concerned about "cheating" is that there's probably a problem with the interview process. I prefer to focus on the think, rather than the solve.

Collaborate, as opposed to just do.

Things that really tell me if I can work with that person and if together, we can make good things.

Stop remote tech interviews

Unless the job you're interviewing for is remote-only, this makes perfect sense. If you expect your candidates to be able to work in your office, they should be interviewed there.

I think that a mythology about where the difficulty in working with computers lies has made the relationship between businesses and the people they hire to do this stuff miserable for quite some time.

"Coding", as in writing stuff in programming languages with correct syntax that does the thing asked for in isolation, has always been a very dumb skill to test for. Even before we had stackoverflow syntactic issues were something you could get through by consulting a reference book or doing some trial and error with a repl or a compiler. That this is faster now with internet search and LLMs is good for everyone involved, but the fact that it's not what matters remains

The important part of every job that gets a computer to do a thing is a combination of two capabilities: problem-solving, that is, understanding the intended outcome and having intuition about how to get there through whatever tools are available; and frustration tolerance, the ability to keep trying new* stuff until you get there.

Businesses can then optimize for things like efficiency or working well with others once those constraints are met, but without those capabilities you simply can't do the job, so they're paramount. The problem with most dinky little coding interviews wasn't that you could "cheat"; it's that they basically never tested for those constraints by design, though some clever hiring people manage to tweak them to do so on an ad hoc basis.

* important because a common frustration failure mode is repetitive behavior. Try something. Don't understand why it doesn't work. Get more frustrated. Try the same thing again. Repeat

Funny enough, the songs from the website Coding For Nothing about grinding LeetCode and endless take-home projects seem very relevant, and everything nowadays feels like a meme.

Tech interviewing has become a weird survival game, and now AI is flipping the rules again. If you need a laugh: https://codingfornothing.com

One option is to make the interviews harder and let candidates use AI, to see how they work with it and whether they can actually build a working product. They will be using AI on the job anyway, so let them use it instead of asking stupid algorithm questions about sorting an array.

So there’s AI that’s really good at doing the skills we’re hiring for. We want you to not use AI so we can hire you for a job that we’re saying we’re going to replace with AI. Sounds like a great plan.

  • No. Using AI requires a depth of knowledge to spot the mistakes in the generated code, and to know how to fit all the snippets of code into something that works.

    We need to know that the developer actually has skills and isn't just secretly copying the answer off of a hidden screen. We are interviewing now, and some candidates are obviously cheating. Our interview process is not leetcode-based, and reasonably chill, but we will probably have to completely rethink the process.

    Since we are hiring contractors, in theory we can let them go after a couple months if they suck, but we haven't tested out how this will work in practice.

  • The article actually takes a stance that if you incorporate AI you can learn how the person is at doing things the AI cannot.

Maybe we don't need employers. Maybe we need a bunch of 1-person companies. I don't think AI is yet the force multiplier that makes that feasible for the masses, but who knows what things will look like in a few years.

  • That is the end point of AI in the economy, until we are removed from the economy entirely.

    Everyone on the planet is a one person startup, using AI and robotics to do all the actual work.

I think code design can often cover just as much as actual code anyway. Just describe to me how you'd solve it, the interfaces you'd use, and how you'd show me you solved it.

As an interviewee it's insane to me how many jobs I have not gotten because of some arbitrary coding problem. I can confidently say after having worked in this field for over a decade and at a FAANG that I am a very capable programmer. I am considered one of the best on every team I've been on. So they are definitely selecting the wrong people IMO.

> What are our options?

* Take a candidate's track record into account. Talk with them about it.

* Show that you're experienced yourself, by being able to tell something about what someone would be like to work with, by talking with them.

* Get a reputation for your company not tolerating dishonesty. If someone cheats in an interview and gets caught, they're banned there, all the interviewers will know, and the cheater might also start to get a reputation beyond that company. (Bonus: Company reputation for valuing honesty is attractive to people who don't want dishonest coworkers.)

* Treat people like a colleague, trying to assess whether it's a good match. You're not going to be perfectly aligned (e.g., the candidate or the company/role might be a bit out of the other's league right now), but to some degree you both want it to be a good match for both parties. Work as far as you can with that.

(Don't do this: Leetcode hazing, to establish the dynamic of them being there to dance for your approval, so hopefully they'll be negged, and will seek your approval, won't think critically about how competent and viable your self/team/company are, and will also be less likely to get uppity when you make a lowball offer. Which incidentally places the burden of rehearsing for Leetcode ritual performances upon the entire field, at huge cost.)

We did an experiment at interviewing.io a few months ago where we asked interviewees to try to cheat with AI, unbeknownst to their interviewers.

In parallel, we asked interviewers to use one of 3 question types: verbatim LeetCode questions, slightly modified LeetCode questions, and completely custom questions.

The full writeup is here: https://interviewing.io/blog/how-hard-is-it-to-cheat-with-ch...

TL;DR:

- Interviewers couldn't tell when candidates were cheating at all

- Both verbatim and slightly modified LeetCode questions were really easy to game with AI

- Custom questions were not gamable, on the other hand[1]

So, at least for now, my advice is that companies put more effort into coming up with questions that are unique to them. It's better for candidates because they get better signal about the work, it reduces the value asymmetry (companies have to put effort into their process instead of just grabbing questions from LeetCode etc), and it's better for employers (higher signal from the interview).

[1] This may change with the advent of better models

  • A couple years ago, we were using a take home coding assignment as a hiring signal. It was a small API based off something I'd built for an internal tool. It was self-contained and relatively easy to explain. The .md file was about two pages.

    I recently fed it into ChatGPT and asked it to do the assignment. It did it perfectly -- I read the code in detail and couldn't find any issues.

    So custom questions are off the table now, too. We'll be using a code review instead for the next round.

The death of shitty interviews has been greatly exaggerated.

AI might make e.g. your leetcode interview less predictive than it previously would have been. But was it predictive in the first place? I don't think most interviews are written by people thinking in those terms at all. If your method of interviewing never depended on data suggesting it actually, you know, worked in the first place, why would it matter if it starts working even worse?

Insofar as it makes the shittiness of those interviews more visible, the effect of AI is a good thing. An interview focused on recall of some specific algorithm was never predictive; it's just that now it fails in a way that Generic Business Idiots can understand.

We frequently interview people who both (a) claim to have been in senior IC roles (not architect positions, roles where they are theoretically coding a lot) for many, many years and (b) cannot code their way out of a paper bag when presented with a problem that requires even a modicum of original reasoning. Some of that might be interview nerves, of course, but a lot of these people are not at all unconfident. They just...suck. And I wonder if what we're seeing is the downstream effects of Generic Business Idiots hiring primarily people who memorize stuff than people who build stuff.

  • > A lot of these people … just suck.

    Another possibility is that their job subtly drifted.

    I wrote a lot of code as a grad student, but my first interviews afterward were disasters. Why? Because I'd spent the last few months writing my thesis, and the few months before that writing very specific kinds of code (signal processing, visualization) that were miles away from generic interview questions like "Make the longest palindrome."

    • We don't ask "make the longest palindrome". We ask "convert this English into code that does what it says". If you want to make the discussion more concrete, we have a public practice problem [1] that we send out with our interview bookings so that people know what to expect. The real problems we ask are very similar to it.

      Do you feel like there's anything there that any reasonably skilled programmer shouldn't be able to figure out on the fly?

      [1] https://www.otherbranch.com/shared/practice-coding-problem

      2 replies →

The inconvenient truth is that everything circles back to in-person interviews.

The article addresses this:

>A lot of companies are doing RTO, but even companies that are 100% in-office still interview candidates from other cities. Spending money to fly every candidate out without an aggressive pre-screen is too wasteful.

No. Accidentally hiring someone who AI'd their way through the interview costs orders of magnitude more to undo. It's absolutely worth paying for a round-trip flight and a couple days of accommodations.

1point3acres is massacring tech interviews right now. Having to pay $80/month to some China based website where NDA-protected interview questions are posted regularly, then being asked the same questions in the interview, seems insane.

It also feels like interviewers know this and assume you studied the questions; they seem incapable of giving hints, etc., if you don't have the questions memorized.

AI is the least of it.

Very funny :) I too failed an interview at Google, also related to binary search on a whiteboard. I never write with pens; I'm on keyboards the whole time, and my handwriting is terrible.

I've built a search engine for two countries, and then I was failed by a guy who wears cowboy hats to work at Google in Ireland. Not a lot of cows there, I'm guessing. (No offence to any real cowboys who work at Google, of course.)

I did like the free flight to Ireland, though, and the nice lunch. Though I was disappointed I lost the "Do no evil" company booklet.

  • TBH, not the #1 source of animal meat, though plenty of cows in Ireland.

    • Good for google. Plenty of passable interview candidates then.

      Dang! I knew it was a mistake leaving my hat at home. Little things like that people tend to forget.

The best interview process I've ever had was going to work with former coworkers, aka no real process. A couple of quick calls with new people who deferred strongly to the person who knew me, my work, and my values. Nothing else has the signal value.

Of course the problem is this can't scale or be outsourced to HR, but is this a bug or a feature?

The best interview processes are chill, laid back, open ended.

That's the only way you're going to get relevant information.

I've been verified to the moon and back by Apple and others for roles that could never have worked.

The problem is that when it comes to the hiring process, everyone is suddenly an expert; no matter how dysfunctional, inhumane and destructive their ideas are.

Anyone who suggests a paired programming solution is right, and answering the wrong question. Unless/until we return to a covid-like market the process will never be optimized for the candidate, and this is just too expensive an approach for employers. In this market I think the answer is hire less.

One hiring manager told me they don't do code challenges. They said, "Why would someone take a job they couldn't do?"

Isn't it that simple?

I just ask to share a text editor and write down my questions. It's critical anyway, because more often than not it's not entirely clear for tech questions what exactly I asked (a Linux command, for example).

This blocks their screen too.

And yes, we do know very soon if you look somewhere else, stall for time, or rephrase the question to get more time.

If you are able to fake it at that point, you should just get the job anyway :P

Why don't we simply ask the AI how to conduct a tech interview nowadays?

Interestingly I find AI is actually better at that kind of CS whiteboard question (implementing a binary search tree) than that "connecting middlewares to API" type task. Or at least, it's more straightforward to apply the AI, when you want a set of functions with clearly defined interfaces written - rather than making a bunch of changes across existing files.

  • I’ve wondered about the kind of person who starts white boarding with the pros and cons of several AI offerings. As if, confronted with a problem domain, they are choosing or hiring AI before architecture. As an interviewer, how should I adapt my questions? Something like, “How would you prompt it to add fuzzing?” “How would you prompt it to describe how each change might affect our stack?”

If you are using the internet, Google, and Stack Overflow at work, why insist that interviewees need to solve problems on their own?

  • To test their inherent thinking skills and base knowledge.

    There's a huge difference between occasionally looking something up and practically leaning on it for everything. Ironically, the mass degradation of search engine result quality within the past ~decade has made it much harder for people to do the latter, and when they do, it shows much more clearly.

Don't forget to wear your cowboy hat when interviewing at google. Very important.

I've been considering using a second webcam stream focused on my screen, just to assure hiring managers that I don't have ChatGPT on my screen or anywhere else. Kind of like chess players sometimes do in online tournaments. I've been hearing people complain about cheating a lot.

If using AI is cheating, then one solution, as the author mentions, is to have the interview take place at an office. But I'm surprised another approach isn't more readily available: having the candidate take the test remotely at a trusted 3rd-party location.

I've been interviewing a bunch of developers the past year or so, and this:

> Architectural interviews are likely safe for a few years yet. From talking to people who have run these, it’s evident that someone is using AI. They often stop with long pauses, do not quite explain things succinctly, and do not understand the questions well enough to prompt the correct answer. As AI gets better (and faster), this will likely follow the same fate as the rest but I would give it some years yet.

Completely matches my experience. I don't do leet code BS, just "let's have a talk". I ask you questions about things you tell me you know about, and things I expect of someone at the level you're selling yourself at. The longest it's taken me to detect one of these scumbags was 15 minutes, and an extra 5 minutes to make sure.

Some of them make mistakes that are beyond stupid, like identity theft of someone who was born, raised and graduated in a country whose main language they cannot speak.

The smartest ones either do not know when to stop answering your questions with perfect answers (they just do not know what they're supposed to not know), or fumble their delivery and end up looking like unauthentic puppets. You just keep grinding them until you catch em.

I'm sure it's not infallible, but that's inherent to hiring. The only problem with this is cost, you're going to need a senior+ dev running the interview, and IME most are not happy to do so. But this might just be what the price of admission for running a hiring pipeline for software devs is nowadays. Heck, now feels like a good time to start a recruitment process outsourcing biz focused on the software industry.

  • I think this approach is not much favoured by Hacker News, but it's also what I prefer. It's so much easier to quickly gauge a minimum level of basic programming knowledge and other software knowledge by just asking some simple, directed questions.

    I once got a guy who claimed to have implemented multiple standard HTTP JSON REST APIs and somehow had never:

    - tested his API with JSON payloads (serialise/deserialise)

    - queried his APIs manually or semi-automatically (no knowledge of curl, Postman, or anything similar)
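    For scale, this is roughly the level being tested; manually exercising a JSON endpoint takes a handful of lines with nothing but the Python standard library (a sketch; the URL is a hypothetical local dev server):

    ```python
    import json
    import urllib.request

    payload = json.dumps({"name": "widget", "qty": 3}).encode()  # serialise
    req = urllib.request.Request(
        "http://localhost:8000/api/items",  # assumption: a local dev endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # deserialise and inspect the response
    ```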

I never understood why Big Tech never set up contracts with all the SAT and ACT test centers across the country. Even before Zoom with Codepads, it would have made sense for recruiters to send potential candidates to a test center for a pre-assessment rather than waste time with engineers sitting on prescreen calls all day.

  • What exactly are you suggesting here? A standardized test that applies to all your job applications? Or, a candidate having to drive to a test center for every company they apply to? Or something else?

    • The idea is sound: create a basic standardized test targeted at tech/engineering jobs. Not actually the SAT, but something operated by a vendor like the College Board. There are plenty of standardized test operators.

      22 replies →

    • Standardized, plus the ability for companies to run their own test after you pass the standard one. So you get prescreened at a test center, then use that result to apply for jobs. The company either flies you in for an in-person interview or sends you back to the test center for a live remote interview in a controlled environment.

      6 replies →

  • I have done interviews with Karat which just outsources technical interviews to engineers elsewhere.

    All of these technical interviews still suck though! I basically never code with someone watching me and find it very difficult to do in interviews. I also find it hard to find the time to actually practice this skill

  • I'm revisiting technical interview prep because it has been a while and it seems like a good time for a refresher, and it is striking how similar it all is to SAT and GMAT prep these days. A pretty cookie-cutter performance that is mostly about demonstrating that you have the time and means to properly prepare. Might as well just go the extra step at that point and have it be exactly like those standardized tests… take them once at a test center, get a score that is valid for a few years, and send it in with your application…

  • That may end up happening: send them to a test center to do their remote HackerRank interview. It doesn't even need to be standardized. I don't like it, but it's one of the lower-friction options.

  • In the US and Canada, to get into university there is, or at least used to be, a standard entrance exam or something close to it (the SAT in the US, OAC scores if you were from Ontario, Canada, etc.).

    Additionally, undergraduate programs in the US and Canada, despite their varied reputation, at least used to have a pretty standard curriculum.

    Maybe things have deteriorated so far at the high school and university level that new standardized exams are needed. But we also have a plethora of verifiable certifications whose exams are held in independent test facilities.

  • Another commenter on this called it “Licensing” but it’s more like credentialing to me.

I miss one option from the list of non-solutions the author presents there - ditch the idiotic whiteboard/"coding exercise" interview style. Voila, the AI (non)problem solved!

This sort of comp-sci-style exam with quizzes and whatnot maybe helps somewhat when hiring juniors with zero experience fresh out of school.

But why are people with 20+ years of easily verifiable experience (picking up a phone and asking for references is still a thing!) being asked to invert trees and implement stuff like quicksort or some contrived BS assignment the interviewer uses to boost their own ego but with zero relevance to the day to day job they will be doing?

Why are we still wasting time with this? Why is the default assumption that applicants are all crooked impostors lying on their resumes?

99% of jobs come with a probationary period anyway, during which the person can be fired on the spot without justification or any strings attached. That should be more than enough time to see whether the person knows their stuff, after they have passed one or two rounds of oral interviews.

It is good enough for literally every other job - except for software engineering. What makes us the special snowflakes that people are being asked to put up with this crap?

Never really liked leetbro interviews. Always reeked of "SO YOU THINK YOU CAN CODE, BRO? SHOW ME WHAT YOU GOT!" The majority of my work over 10+ years of experience has relied on general problem solving and soft skills like collaborating with others, not rote memorization of in-order traversal.

> Tech interviews are one of the worst parts of the process and are pretty much universally hated by the people taking them.

True.

> One of the things we can do, however, is change the nature of the interviews themselves. Coding interviews today are quite basic, anywhere from FizzBuzz, to building a calculator. With AI assistants, we could expand this 10x and have people build complete applications. I think a single, longer interview (2 hours) that mixes architecture and coding will probably be the way to go.

Oh.... yeah, that sounds just... great.

"One of the things we can do, however, is change the nature of the interviews themselves. Coding interviews today are quite basic, anywhere from FizzBuzz, to building a calculator. With AI assistants, we could expand this 10x and have people build complete applications. I think a single, longer interview (2 hours) that mixes architecture and coding will probably be the way to go."

If that's where the things are going, I'm retraining to become a line cook at McDonalds.

None of this makes any sense. Why should I complete a tech test interview if I have 15 years of experience at X top firm? I would have done it already anyway.

I had a ‘principal engineer’ at last place who grinded leetcode for 100 days and still failed a leetcode interview. It’s utter nonsense.

A conversation with technical questions and topics should suffice. Hire fast and fire fast.

> I think the image below pretty much sums it up

The image below does sum it up but not in the way the author thinks.

Google wants to hire people who complete their hiring process. They're OK with missing out on some people who would be excellent but who can't/won't make it through their hiring process.

The mistake may lie in copying Google's hiring process.

LLMs killed busywork. Now people have to actually talk to each other, and they're finding out that we've been imitating functionality instead of being functional.

It hasn't killed the interview, it's killed the career field. Most people just haven't realized this yet.

What a BS article. As they say, just do the interview in person. Problem solved. Not sure about the US but 99% of jobs here in Spain are hybrid or onsite ("presencial"), not fully remote.

They're acting like all jobs are remote and it's impossible to do an interview in person.

Also, does it really matter? If a person is good at using AI and manages to be good at creating code with that, is it really so much worse than a person that does it from the top of their head? I think we have to drop the idea that AI is going to go away. I know it's all overhyped right now but there is definitely something to it. I think it will be another tool in our toolboxes. Just like stackoverflow has been for ages (and that didn't kill interviews either).

  • Our US mega corp is having us hire all-remote contractors; in person is completely out of the question.

  • It costs money (travel) and time (the company's), plus a lot of this is outsourced to screening agencies.

    Yes, it is disgusting. Sadly also very common.

    • Ah, weird. Here in Europe, if you apply for a job in another city or country, they won't fly you out there. You can come on your own dime, and usually you wouldn't even tell them you don't live there yet (after all, if you don't live there yet, why would they bother with you? It's only extra hassle for them when you start). C-suite roles are probably an exception, but they're an exception to pretty much everything. Roles requiring particular foreign languages (e.g. support) are too, but that's an edge case.

      Moving personnel between countries when they are already working for the company does happen. They did it for me. But at that point they already know what they have.

Show the remote candidate an AI's deficient answer to a well-asked question, and ask the candidate if they understand what exactly is wrong with the AI's assessment, or what the follow-up/rewritten prompt to the AI should be. Compile a library of such deficient chats with the AI.
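
A hypothetical entry in such a library (the flaw here is Python's classic shared mutable default argument):

    # "AI answer" to: append an item to a list, creating the list if none is given.
    def append_item(item, items=[]):  # bug: the default list is created once and shared
        items.append(item)
        return items

    print(append_item(1))  # [1]
    print(append_item(2))  # [1, 2]  <- surprising; a fresh call should give [2]

A candidate who can name the bug, and suggest defaulting to None with a check inside the function, passes that probe.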

It's a tricky subject, because what if people who use AI are just better together? And what if in a year from now, AI by itself is better? What's the point of hiring anyone? Perhaps this is the issue behind the problems being described, which might be mere symptoms. There are tons of very smart teams working on software that will basically replace the people you're hiring.

Or you can just give them a way to bypass all of that and ask about a significant project the candidate actually built (relevant to the job description; open or closed source, as long as it's released), or open source contributions to widely used, significant projects. (Not hello-world or demo projects, or README changes.)

Both scenarios are easily verifiable (you can check that the project was released, or whether the candidate made that commit), and in the open source case the interviewer can look at how you code-review with others, and how you respond to and reason about others' review comments, all in public, to see whether you actually understand the patches you or someone else submitted.
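
Verifying a commit is easy to script; for instance, against GitHub's public commits API (a sketch, with a hypothetical repo and login):

    import requests

    OWNER, REPO, CANDIDATE = "example-org", "example-project", "candidate-login"

    # The commits endpoint accepts an `author` filter (GitHub login or email).
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/commits",
        params={"author": CANDIDATE, "per_page": 5},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    for c in resp.json():
        print(c["sha"][:8], c["commit"]["message"].splitlines()[0])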

A conversation can be started around it, and it eliminates 95% of frauds. If the candidate can't speak to any of this, then there's no choice but to give them a hard leetcode/HackerRank challenge and interview them again so they can explain their solution and why it works.

A net positive for everyone, and all it takes to qualify is building something you can point to, or contributing to a significant open source project. Unlike HackerRank, which has become a negative-sum race to the bottom with rampant cheating thanks to LLMs.

After that, a simple whiteboard challenge and that is it.

  • This would be a nice interview for candidates who have open source contributions, but many who have day jobs do not. Or their open source code is 5 years old and not representative of their current skill set.

    • There is no shame in taking time off after leaving a job to develop or contribute to an open source project or two. The world would be a better place for it.

      5 replies →

I cannot emphasize this enough. Coding is the EASY part of writing software. You can teach someone to code in a couple of months. Interviews that focus on someone's ability to code are just dumb.

What you need to see is how well they can design before writing software. What is their process for designing the software they make? Can they architect it correctly? How do they capture users' mental models? How do they deal with the many "tops" that software has?

No it didn't; you just need to stop asking questions an LLM can easily solve. Most of those were probably terrible questions to begin with.

I can create a simple project with 20 files, where you would need to check almost all of them to understand the problem you need to solve, good luck feeding that into an LLM.

Maybe you have some sneaky script or IDE integration that does this for you; fine, I'll just generate a class with 200 useless fields to exhaust your LLM's context length.
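
As a toy illustration of that field-flooding tactic (names are arbitrary):

    # Emit a module containing a dataclass with 200 useless fields.
    fields = "\n".join(f"    unused_field_{i}: int = 0" for i in range(200))
    src = (
        "from dataclasses import dataclass\n\n"
        "@dataclass\n"
        "class Padding:\n" + fields + "\n"
    )
    with open("padding.py", "w") as f:
        f.write(src)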

Or I can just share my screen and ask you to help me debug an issue.

I know nobody likes doing tech interviews, but how has AI killed them? Anyway, you do want to know the basics of computer science; it's a helpful thing to know if you ever want to progress beyond CRUD shitshovelling.

Also, wtf is inverting a binary tree? Like doing a "bottom-view"? That shit is easy.
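
For what it's worth, "inverting" a binary tree usually means mirroring it, swapping left and right children at every node, rather than computing a bottom view. A minimal sketch:

    class Node:
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

    def invert(node):
        # Swap children recursively; returns the same tree, mirrored.
        if node is not None:
            node.left, node.right = invert(node.right), invert(node.left)
        return node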