Literally every interview I've done recently has included the question: "What's your stance on AI coding tools?" And there's clearly a right and wrong answer.
In my case, the question was "how are you using AI tools?", and it was trying to see whether you were still in the metaphorical stone age of copy-pasting code into chatgpt.com or were making use of (at the time) modern agentic workflows. I'm not sure how good an idea this is, but at least it was a question that only came up after I'd passed the technical interviews. I want to believe its purpose was to gauge whether applicants were keeping up with dev tooling or stagnating.
To be fair, this topic is quite divisive, and it does seem like something that should be discussed during an interview. Who's right and who's wrong is one thing, but you likely don't want to be working for a company whose take on the topic is incompatible with yours.
What rock?
C'mon, let's be real here: it's either "testing AI skills" or "using AI agents like you would day to day".
The signal you get from leetcode is already a dubious way to assess proficiency; it's mostly used as a filter for "are you willing to cram useless knowledge and write code under pressure to get the job?", just like system design is. You won't be doing any system design for "scale" anywhere in big tech, because you have architects for that, nor do you need to "know" anything; it's mostly gatekeeping. But the truth is, LLMs have democratized both leetcode and system design anyway. Anyone with the right prompting skills can now get to an output that's good enough for 99% of cases, and the other 1% are reserved for architects/staff engineers to "design" for you.
The crux of the matter is that companies don't want to change how they approach interviews for the new era, because we have collectively decided the current process is good enough as-is. Again, I'd argue that's questionable given how these services sometimes break with every new product launch or "under load" (where YO SYSTEM DESIGN SKILLZ AT).
I wish I could edit that. Read: "...AI skills alone".
Some people think so. I interviewed someone who, on a screenshare, would type every question I asked, verbatim, into antigravity. Then he'd look at the output for a second, say "Hm, this looks good" (it was not), run the code, and paste the error back into the prompt. It was a surreal experience. I didn't end the interview early because it was so incredibly wild I couldn't believe it. I don't think he had a single thought the entire time that wasn't motivated by the LLM output.
If you can only code with AI, soon you won't have interviews at all because there's no reason to hire you, as the managers can just type the prompts themselves. Or at least that's what I've been led to believe by the marketing.
Unless you're building things that don't need to be maintained, there is still a need for a skilled human to maintain a proper software architecture.
It's the managers who are doomed. The future is small teams of devs answering directly to the CTO.
My guess is this is correct. To the extent coding with agents becomes dominant, the need for non-technical managers to coordinate large numbers of developers will decrease.