Comment by adithyassekhar
10 hours ago
Are employees from Anthropic botting this post now? This should be one of the top voted posts on this website, but it's nowhere in the first 3 pages.
Also remember: using Claude to code might make the company you're working for richer, but you are forgetting your skills (I've seen it first hand), and you're not learning anything new. Professionally, you are downgrading. Your next interview won't be testing your AI skills.
> Your next interview won't be testing your AI skills.
You are living under quite a big rock.
Literally every interview I've done recently has included the question: "What's your stance on AI coding tools?" And there's clearly a right and wrong answer.
In my case, the question was "how are you using AI tools?", trying to see whether you're still in the metaphorical stone age of copy-pasting code into chatgpt.com or making use of (at the time) modern agentic workflows. Not sure how good an idea this is, but at least it was a question that only came up after passing the technical interviews. I want to believe the purpose of this question was to gauge whether applicants were keeping up with dev tooling or potentially stagnating.
What rock?
C'mon, let's be real here: it's either "testing AI skills" or "using AI agents like you would on the daily".
The signal you get from leetcode is already dubious for asserting proficiency; it's mostly used as a filter for "Are you willing to cram useless knowledge and write code under pressure to get the job?", just like system design is. You won't be doing any system design for "scale" anywhere in big tech, because you have architects for that, nor do you need to "know" anything; it's mostly gatekeeping. But the truth is, LLMs democratized both leetcode and system design anyway. Anyone with the right prompting skills can now get to an output that's good for 99% of the cases, and the other 1% are reserved for architects/staff engineers to "design" for you.
The crux of the matter is, companies don't want to change how they approach interviews for the new era, because we have collectively decided that the current process is good enough as-is. Again, I'd argue this is questionable, given how these services sometimes break with every new product launch or "under load" (where YO SYSTEM DESIGN SKILLZ AT?).
I wish I could edit that. Read: "...AI skills alone."
If you can only code with AI, soon you won't have interviews at all because there's no reason to hire you, as the managers can just type the prompts themselves. Or at least that's what I've been led to believe by the marketing.
Unless you are doing stuff that doesn't need to be maintained, there is still a need for a skilled human to maintain proper software architecture.
It is the managers who are doomed. The future is a small team of devs answering directly to the CTO.
If you're not learning anything new, you're doing it wrong.
There's a massive gap between just using an LLM and using it optimally, e.g. with a proper harness, customised to your workflows, with sub-agents etc.
It's a different skill-set, and if you're going to go into another job that requires manual coding without any AI tools, then by all means, you need to focus on keeping those skills sharp.
Meanwhile, my last interview already did test my AI skills.
Do you have any descriptions or analysis of what's considered "proper" usage on the cutting edge? I'm very curious. Only part of my profession is coding, but it would be nice to get insight into how people who really try to learn with these tools work.
I would say the first starting point is to run your agent somewhere you're comfortable giving it mostly unconstrained permissions (e.g. --dangerously-skip-permissions for Claude Code), but more importantly, setting up sub-agents to hand off most of the work to.
A key factor to me in whether you're "doing it right" is whether you're sitting there watching the agent work because you need to intervene all the time, or whether you can go do other stuff and review the code when the agent thinks it's done.
To achieve that, you need a setup with skills and sub-agents that 1) lets the model work semi-autonomously from the planning stage until commit, and 2) keeps as much as possible out of the main context.
E.g. at one client, the Claude Code plugin I've written for them will pull an issue from Jira, ask for clarification if needed, then augment the ticket with implementation details and write a detailed TODO list. Once that's done, the TODO items are fed to a separate implementation agent to do the work, one by one. This keeps the top-level agent free to orchestrate, with little entering its context, so it can keep working for hours without stopping.
Once it's ready to commit, it invokes a code-review agent. Once the code-review agent is satisfied (possibly after re-work), it goes through a commit checklist and offers to push.
None of these agents are rocket science. They're short and simple, because the point isn't for them to have lots of separate context, but mostly to be told the task and to move that step out of the main agent's context.
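For illustration, a "short and simple" sub-agent like the code reviewer can be a small markdown file with YAML frontmatter in the project's `.claude/agents/` directory. This is only a sketch; the wording, tool list, and name here are made up, and the exact frontmatter fields depend on your Claude Code version:

```markdown
---
name: code-reviewer
description: Reviews the staged changes before commit. Use after implementation is complete.
tools: Read, Grep, Bash
---

You are a code reviewer. Check the diff against the ticket's
acceptance criteria, flag bugs and style issues, and either
approve or return a list of required rework items.
```

The body stays terse on purpose: the sub-agent gets a fresh context, so everything it needs must be in the task handed to it plus these few lines.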
I've worked on far more advanced setups too, but to me, table stakes beyond minimising permissions is to have key workflows laid out in a skill, and to delegate each step to a separate sub-agent.
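The orchestration flow described above can be sketched in plain Python. This is a toy model, not a real Claude Code API: every function here is a hypothetical stand-in for a sub-agent invocation, and the point is only to show the shape of the loop (plan, delegate item by item, review, then commit checks):

```python
# Toy sketch of the plan -> implement -> review orchestration loop.
# All names are hypothetical; in a real setup each function would be
# a sub-agent call running in its own fresh context.

from dataclasses import dataclass, field


@dataclass
class Ticket:
    key: str
    description: str
    todos: list[str] = field(default_factory=list)


def plan(ticket: Ticket) -> Ticket:
    """Planner sub-agent: augment the ticket with a detailed TODO list."""
    ticket.todos = [f"{ticket.key}: step {i}" for i in (1, 2, 3)]
    return ticket


def implement(todo: str) -> str:
    """Implementation sub-agent: handles one TODO item in isolation."""
    return f"patch for {todo}"


def review(patches: list[str]) -> bool:
    """Code-review sub-agent: approve, or request rework."""
    return all(p.startswith("patch") for p in patches)


def orchestrate(ticket: Ticket) -> list[str]:
    # The top-level agent only coordinates; the bulky output of each
    # step stays inside the sub-agents' contexts, which is what lets
    # the orchestrator run for a long time without filling up.
    ticket = plan(ticket)
    patches = [implement(todo) for todo in ticket.todos]
    assert review(patches), "review agent requested rework"
    return patches


patches = orchestrate(Ticket("PROJ-123", "add login endpoint"))
print(len(patches))  # prints 3: one patch per TODO item
```

The design choice this models is that the orchestrator never sees the patch contents, only their handles; in a real plugin, the review loop would iterate until the review agent approves rather than asserting once.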
> Meanwhile, my last interview already did test my AI skills.
Curious to hear more about this.
> But you are forgetting your skills
Depends on what you consider your "skills". You can always relearn syntax, but you're certainly not going to forget your experience building architectures and developing a maintainable codebase. LLMs only do the what for you, not the why (or you're using them wrong).
There are three sides to this depending on when you started working in this field.
For the people who started before the LLM craze: they won't lose their skills if they just keep focusing on their original roles. The truth is, people are being assigned more than their original roles at most companies. Backend developers are being tasked with frontend, devops, and QA roles, and the people who held those roles are let go. This is happening right now: https://www.reddit.com/r/developersIndia/comments/1rinv3z/ju... When this happens, they don't care, or don't have the mental capacity to care, about a codebase in a language they've never worked with before. People here talk about guiding the LLMs, but at most places they're too exhausted to carry that context and just let Claude review its own code.
For the people who are starting right now: they're discouraged from all sides from writing code themselves. They'll never understand why an architecture is designed a certain way. Sure, ask the LLM to explain, but it's like learning to swim by reading a book. They have to blindly trust the code and keep pulling the lever like a slot machine, burning tokens, which makes these companies more money.
For the people who are yet to begin: sorry you have to start in a world where a few companies hold everyone's skills hostage.
> For the people who are starting right now, they're discouraged from all sides for writing code themselves. They'll never understand why an architecture is designed a certain way. Sure ask the llm to explain but it's like learning to swim by reading a book.
This! There are several forces that act on how code is written, and getting the software to work is only one of them. Abstraction is another, which itself reflects two needs: not repeating code, and solving the metaproblem instead of the direct one. Simplicity is another factor (solving only the current problem). Then there’s making the design manifest in how the files are arranged, …
As a developer, you need to guarantee that the code you produce works. But how the computer works is not how we think. We invented a lot of abstractions between the two, knowing the cost in performance of each one. And we also invented a lot of techniques to help us further. But most of them are only learned once you’ve experienced the pain of not knowing them. And then you’ll also start saying things like “code smells”, “technical debt”, and “code is liability”, even when things do work.
The syntax argument is correct, but from what I am seeing, people _are_ using it wrong, i.e. they have started offloading most of their problem solving to be LLM-first: not just using the LLM to refine their ideas, but starting there.
That is a very real concern. I've had to chase engineers to make sure they're not blindly accepting everything the LLM says, encouraging them to first form some sense of what the solution could be and then use the LLM to refine it further.
As more and more thinking is offloaded to LLMs, people lose their gut instinct about how their systems are designed.
> not learning anything new
Huge disagree. Or more likely, "depends on how you use it". I've learned a lot since I started using AI to help me with my projects, as I prompt it in such a way that if I'm going about something the "wrong" way, it'll tell me and suggest a better approach. Or just generally help me fill out my knowledge whenever I'm vague in my planning.
> Your next interview won't be testing your AI skills
Not that I disagree with your overall point, but have you interviewed recently? 90% of the companies I interacted with required (!) AI skills, and wanted me to tell them exactly how I "leverage" them to increase my productivity.
I've done a couple flirty interviews and so far it hasn't come up. So take hope, it's not all bad.
Are they just looking for AI skills? If so that's terrifying.
I think most are looking for both.
AI/LLM knowledge without programming knowledge can make a mess.
Programming knowledge without AI/LLM knowledge can also make a mess.
Probably. I think hand coding is going the way of the dodo and the ox cart.
The usual LeetCode-ish tasks, often system design, but then deep AI usage. "I use Copilot" isn't going to fly at all, as far as I understand.
What a time to be alive. I once got roasted in an interview because I said I would use Google if I didn't know something (in this context, the answer to a question that would easily be found in language and compiler documentation).
You are not alone. The silver lining is they show their true colors early.
> But you are forgetting your skills (seen it first hand), and you're not learning anything new.
This is just false. I may forget how to write code by hand, but I'm playing with things I never imagined I'd have the time and ability to, and getting engineering experience that 15 years of hands-on engineering couldn't give me.
> Your next interview won't be testing your AI skills.
Which will be a very good signal to me that it's not a good match. If my next interview is leetcode-style, I will fail catastrophically, but then again, I no longer have any desire to be a code writer - AI does it better than me. I want to be a problem solver.
> getting engineering experience that 15 years of hands on engineering couldn't give me.
This is the equivalent of how watching someone climb Mount Everest on a TV show or YouTube makes you feel like you did it too. You never did; your brain just got the feeling that you did, and it'll never motivate you to do it yourself.
This is only true for fully unsupervised "vibe coding". But you'll find this will not work for anything beyond a basic todo list app.
You'll free up your time from actually writing code, but on the other hand, you'll have to do way more reading, planning, and making architectural decisions. This is what engineering should feel like.
> Professionally you are downgrading
It is the contrary!
You learn to use a very powerful tool. It's a tool, like a text editor or a compiler.
But you focus more on the logic and function instead of syntax details and the whims of the programming languages used in concert.
The analogy from construction is being elevated from bricklayer to engineer. Or trading variously shaped shovels and a wheelbarrow for mechanized tools like excavators and dumpers when doing earthworks.
... of course, some prefer to remain master bricklayers, which is noble; said with a straight face, no pun intended: bricklaying is a fine skill with beautiful outputs in its area of use. For them, AI is really unnecessary. An existential threat, but unnecessary.
I agree with you: syntax details are not important, but they haven't been important for a long time, thanks to better editors and linters.
> But you focus on the logic and function more instead of syntax details and whims of the computer languages used in concert.
This is exactly my point. I learned from logical mistakes when my first if/else broke. The only reason you or I can guide these tools toward good logic is that we dealt with bad logic before all this. I use Claude myself a lot because it saves me time. But we're building a culture where no one ever reads the code; instead we're building black boxes.
Again, you could see it as the next step in abstraction, but not when everyone is this dependent on a few companies prepared to strip the world of its skills so they can sell them back to it.