Comment by sulami

3 days ago

I've been arguing that "AI" has very little impact on meaningful technical interviews, that is, ones that don't test for memorization of programming trivia: https://blog.sulami.xyz/posts/llm-interviews/

A couple of weeks ago I interviewed at a place where I had to do a take-home exercise. It's fine, I don't mind. No Leetcode. Just my own IDE, my own shortcuts, and write a piece of code that solves a problem.

I was asked whether I had used an AI/LLM for the solution. I hadn't. I felt that using an LLM to solve the problem for me wasn't the right way to showcase knowledge. The role was essentially a 'come in with knowledge and help us' position.

The response to that was basically: everybody here uses AI.

I declined the follow-up interview, as I felt that if the only edge you have over your competitors is the speed of AI, you're not really building the kind of thing I want to be a part of. It basically implies that the role is up in the air as soon as the AI gets better.

  • When I started coding I did it in notepad. I thought it was hardcore and cool. I was young and stupid. Then I adopted an IDE and I became much better at writing code.

    To me, AI is just another tool that helps me solve problems with code. An autocomplete on steroids. A context-aware Stack Overflow search. Not wanting to adopt it, or to even work somewhere where colleagues use it, sounds to me like coding in notepad and, in the process, scoffing at those who use an IDE.

    Besides, if AI gets to the point where it can replace you, it will replace you. Better to start learning how to work with it so you can fill whatever gaps AI can't.

    • I still mainly use a text editor after several decades, and do a lot of thinking and initial design with pencil and paper. IDEs just get in the way.

      I've seen the type of code AI generates. It might work, but if you think that's good or that massaging it so it works will make you any better, I have some bad news for you...

We have been interviewing people who are obviously using covert AI helper tools. Ask them a question and they respond with a coherent answer, but they are just reading it off a window we can't see.

In some cases it is obvious they are blathering a stream of words they don't understand. But others are able to hold something resembling a coherent conversation. We also have to allow for the fact that most people we interview aren't native English speakers, and are talking over Teams. It can be very hard to tell if they are cheating.

Asking questions that probe their technical skills is essential; otherwise you are just selecting for people who are good at talking and self-promotion. We aren't just asking trivia questions.

We also give a simple code challenge, nothing difficult. If they have a working knowledge of the language, they should be able to work through the problem in 30 minutes, and we let them use an IDE and google for things like regex syntax.

Some of them are obviously using an AI, since they just start typing in a working solution. But in theory they could be a Scala expert who remembers how to use map plus a simple regex...
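
For context, a half-hour challenge of the sort described above might look something like the sketch below. The task, the object and method names (`LogIdSum`, `sumIds`), and the log format are all made up for illustration; it just shows the "map plus a simple regex" shape of solution the parent comment mentions, in Scala.

```scala
// Hypothetical example only: the actual challenge isn't described in the thread.
// Sketches a ~30 minute task solvable with map plus a simple regex:
// pull numeric ids out of log lines and sum them.
object LogIdSum {
  // Matches e.g. "id=42" and captures the digits.
  private val IdPattern = """id=(\d+)""".r

  def sumIds(lines: Seq[String]): Int =
    lines
      .flatMap(line => IdPattern.findFirstMatchIn(line)) // drop lines without an id
      .map(_.group(1).toInt)                             // captured digits -> Int
      .sum

  def main(args: Array[String]): Unit = {
    val logs = Seq("GET /user id=42 ok", "GET /health ok", "POST /order id=7 ok")
    println(sumIds(logs)) // 49
  }
}
```

Whether a candidate writes something like that from memory or types it straight out of an assistant's window is, as the comment says, hard to tell from the result alone.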