Comment by tracker1
19 hours ago
My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.
Because of this, I've just started rejecting leetcode/AI interview steps outright... I'll do homework, shared screen, 1:1, etc, but won't do the above. I tend to fail them about half the time. It feels even worse in instances where I wouldn't mind studying on leetcode-type sites if they actually had decent explainers for the questions and working answers to go through. I know this kind of defeats the challenge aspect, but learning is about 10x harder without it.
It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well. Without any chance of additional info/questions it's literally a setup to fail.
edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions.
The LC interviews are like testing people how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way.
But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.
I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)
https://github.com/ensisoft/detonator
but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.
Yes. If work was leetcode problem solving, I would actually enjoy it. Updating npm packages and writing tiny features that get canned a week later is all not that stimulating.
>The LC interviews are like testing people how fast they can run 100m after practice
Ah, but, the road to becoming good at Leetcode/100m sprint is:
>a slow arduous never ending jog with multiple detours and stops along the way
Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.
LC interviews were made popular by companies that were started by CS students because they like feeling that this stuff is important. They're also useful when you have massive numbers of applicants to sift through because they can be automated and are an objective-seeming way to discard loads of applicants.
Startups that wanted to emulate FAANGs then cargo-culted them, particularly if they were also founded by CS students or ex-FAANG (which describes a lot of them). Very, very few of these actually try any other way of hiring and compare them.
Being able to study hard and learn something well is certainly a great skill to have, but leetcode is a really poor one to choose. It's not a skill that you can acquire on the job, so it rules out anyone who doesn't have time to spend months studying something in their own time that's inherently not very useful. If they chose to test skills that are hard and take effort to learn, but are also relevant to the job, then they can also find people who are good at learning on the job, which is what they are actually looking for.
But why stop there? Why not test candidates with problems they have never seen before? Or problems similar to those of the organization hiring? Leetcode mostly relies on memorizing patterns with a shallow understanding; it mainly shows that the candidate can game the system. Does that imply quality in any way? Some people argue that being willing to study for leetcode shows some virtue. I very much disagree with that.
19 replies →
> If it didn't actually work, it would've been discarded by companies long ago.
The sentence I've singled out above is a very confident one, considering that inertia in large companies is a byword at this point. Further, "work" could conceivably mean many things in this context, from "per se narrows our massive applicant pool" to "selects for factor X," X being clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure job fit for almost any real-world job.
> If it didn't actually work, it would've been discarded by companies long ago
You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one?
1 reply →
Does it work though?
When I look at the messy Android code, Fuchsia's commercial failure, Dart being almost killed by politics, Go's marvellous design, WinUI/UWP's catastrophic failure, how C++/CX got replaced with C++/WinRT, the ongoing issues with macOS Tahoe...
I am glad that apparently I am not good enough for such projects.
2 replies →
It's also a filter for people who are ok with working hard on something completely pointless for many months in order to get a job.
> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
I see it differently. I wouldn't say it's reasonably good, I'd say it's a terrible metric that's very tenuously correlated with on the job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind the one eyed man is king.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
Eh. As someone who did tech and then medicine, a lot of great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, are going to increase your odds of success at nearly any task, but there are plenty of other skills that are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist.
> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
My company uses LeetCode. All I want is sane interfaces and good documentation. You are far more likely to get something clever, broken, and poorly documented than something "excellent", so something is missing from this correlation.
> If it didn't actually work, it would've been discarded by companies long ago
That makes the assumption that company hiring practices are evidence based.
How many companies continue to use pseudo-science Myers Briggs style tests?
5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer.
Did you mean to type 25? 5 years ago LC challenges were as prevalent as, if not more prevalent than, they are today. And a single interview for a job is not something I have ever seen after 15 years in the space (and a bunch of successful OSS projects I can showcase).
I actually have the feeling it’s not as hardcore as it used to be on average. E.g. OpenAI doesn’t have a straight-up LC interview even though they are probably the most sought-after company. Google and MS and others still do it, but it feels like it has less weight in the final feedback than it did before. Most en-vogue startups have also ditched it for real-world coding exercises.
Probably due to the fact that LC has been thoroughly gamed and is even less a useful signal than it was before.
Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
16 replies →
Not sure if that's a typo. 5 years ago was also pretty LC-heavy.
Ten years ago it was more based on Cracking the Coding Interview.
So I'd guess what you're referring to is even older than that.
3 replies →
I read this, and intentionally did not read the replies below. You are so wrong. You can write a library, even an entirely new language from scratch, and you will still be denied employment for that library/language.
> 5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer.
Based on my own experiences, that was true 25 years ago. 20 years ago, coding puzzles were now a standard part of interviewing, but it was pretty lightweight. 5 years ago (covid!) everything was leet-code to get to the interview stage.
I have been getting grilled on leet code style questions since the beginning of my career over 12 years ago.
The faangs jump and then the rest of the industry does some dogshit imitation of their process
3 replies →
>how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way
I've always explained it as demonstrating your ping pong skills to get on the basketball team.
Mistakenly read this as you wrote that 2D game engine (which looks awesome btw) for a job interview to get the job: "I can't compete with this!!! HOW CAN I COMPETE WITH THESE TYPES OF SUBMISSIONS!?!?! OH GAWD!!!"
> SMEGMA companies
Microsoft, Google, Meta, Amazon, I'm guessing... but, what are the other two?
"Startups" and "Enterprise"? I guess that basically covers everything
I prefer AGAMEMNON: Apple, Google, Amazon, Microsoft, Ebay, Meta, NVIDIA, OpenAI, Netflix
Lol :)
"SMEGMA companies." :D
And nowadays people are blatantly using AI to answer questions like this (https://www.finalroundai.com/coding-copilot). Even trying to stumble through design questions using AI
100%. I just went through an interview process where I absolutely killed the assignment (had the best one they'd seen), had positive signal/feedback from multiple engineers, CEO liked me a lot etc, only to get sunk by a CTO who thought it would be cool to give me a surprise live test because of "vibe coding paranoia". 11 weeks in the process, didn't get the role. Beyond fucking stupid.
This was the demo/take-home (for https://monumental.co): https://github.com/rublev/monumental
It's funny because this repo really does seem vibe-coded. Obviously I have no reason not to believe you, but man! All those emojis in the install shell script - I've never seen anyone other than an AI do that :) Maybe you're the coder that the AI companies trained their AI on.
Sorry about the job interview. That sucks.
There's even a rocket emoji in server console.logs... There are memes with ChatGPT and rocket emojis as a sign of AI use. The whole repo looks super vibe-coded, emojis, abundance of redundant comments, all in perfect English and grammar, and the readme also has that "chatty" feel to it.
I'm not saying that using AI for take-home assignments is bad/unethical overall, but you need to be honest about it. If he was lying to them about not using any AI assistance to write all those emojis and folder structure map in the repo, then the CTO had a good nose and rightfully caught him.
10 replies →
I used AI for the Docker setup which I've already done before. I'm not wasting time on that. Yeah you can vibe code basic backend and frontend and whatnot, but you're not going to vibe code your way to a full inverse kinematics solution.
I'm not a math/university educated guy so this was truly "from the ground up" for me despite the math being simple. I was quite proud of that.
3 replies →
Hah I feel you there. Around 2 years ago I did a take home assignment for a hiring manager (scientist) for Merck. The part B of the assignment was to decode binary data and there were 3 challenges: easy, medium and hard.
I spent around 40 hours of time and during my second interview, the manager didn't like my answer about how I would design the UI so he quickly wished me luck and ended the call. The first interview went really well.
For a couple of months, I kept asking the recruiter if anyone successfully solved the coding challenge and he said nobody did except me.
Out of respect, I posted the challenge and the solution on my github after waiting one year.
Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem: https://github.com/jonnycoder1/merck_coding_challenge
Enjoy the ultimate classic tour de force from world treasure Chung-chieh (Ken) Shan’s wikiblog "Proper Treatment"
discussion / punchline http://conway.rutgers.edu/~ccshan/wiki/blog/posts/WordNumber...
Start of main content: http://conway.rutgers.edu/~ccshan/wiki/blog/posts/WordNumber...
Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem
That doesn't look too challenging for anyone who has experience in low-level programming, embedded systems, and reverse engineering. In fact for me it'd be far easier than part 1, as I've done plenty of work similar to the latter, but not the former.
That sucks so hard man, very disrespectful. We should team up and start out own company. I tried checking out your repo but this stuff is several stops past my station lol.
A surprise live test is absolutely the wrong approach for validating whether someone's done the work. IMO the correct approach is to go through the existing code with the applicant and have them explain how it works. Someone who used AI to build it (or in the past had someone else build it for them) wouldn't be able to do a deep dive into the code.
We did go into the assignment after I gently bowed out of the goofy live test. The CTO seemed uninterested & unfamiliar with it after returning from a 3 week vacation during the whole process. I waited. Was happy to run him through it all. Talked about how to extend this to a real-world scenario and all that, which I did fantastically well at.
1 reply →
That is an insane amount of work for a job application. Were you compensated for it at all?
It isn't impressive to spend a lot of time on a hiring problem; you shouldn't do that. If you can't do it in a few hours, then just move on and apply for another job; you aren't the person they are looking for.
Doing it slowly over many days only takes your time and probably won't get you the job anyway, since the solution will be a hard-to-read mess compared to the work of someone who solves it quickly because they are familiar with the domain.
The other comments here note, and the author even stated directly, that it was vibe-coded.
No. Should I invoice them? I'm still livid about it. The kicker is the position pays a max of 60-120k euros, the maximum being what I made 5 years ago.
4 replies →
Damn... that's WAY more than I'll do for an interview process assignment... I usually time box myself to an hour or two max. I think the most I did was a tic-tac-toe engine but ran out of time before I could make a UI over it.
I put absolutely every egg into that basket. The prospect of working in Europe (where I planned to return to eventually) working on cool robot stuff was enticing.
The fucking CTO thought I vibe-coded it and dismissed me. Shout-out to the hiring manager though, he was real.
This repo has enough red flags to warrant some suspicion.
You have also not attempted to hide that, which is interesting.
Wait, what.. you did this as a take home for a position? Damn that looks excessive.
Yes. I put a ton of work into it. I had about 60 pages worth of notes on inverse kinematics, FABRIK, cyclic algorithms used in robotics, A*/RRT for real-world scenarios, etc. I was super prepared. Talked to the CEO for about two hours. Took notes on all the videos I could find of team members on YouTube and of their company.
Luckily the hiring manager called me back and levelled with me, nobody kept him in the loop and he felt terrible about it.
Some stupid contrived dumbed down version of this crane demo was used for the live test where I had to build some telemetry crap. Nerves took over, mind blanked.
Here's the take-home assignment requirements btw: https://i.imgur.com/HGL5g8t.png.
Here's the live assignment requirements: [1] https://i.imgur.com/aaiy7QR.png & [2] https://i.imgur.com/aaiy7QR.png.
At this rate I'm probably going to starve to death before I get a job. Should I write a blog post about my last 2 years of experiences? They are comically bad.
This was for monumental.co - found them in the HN who's hiring threads.
6 replies →
how much did this job pay?
60k-120k euros. The upper 20k probably being entirely inaccessible so in reality probably like 70-100k euros.
4 replies →
It's not really memorizing solutions. Yes, you can get quite far by doing so, but follow-ups will trip people up. However, if you have memorized it and can answer follow-ups, I don't see a problem with Leetcode-style problems. Problem solving is about pattern matching, and the more patterns you know and can match against, the better your ability to solve problems.
It's a learnable skill, and better to pick it up now. Personally I've solved Leetcode-style problems in interviews which I hadn't seen before, and some of them were dynamic programming problems.
These days it's a highly learnable skill, since GPT can solve many of the problems while also coming up with very good explanations of the solution. Better to pick it up than not.
It is and isn't. I'd argue it's not memorizing exact solutions (think copy-paste) but memorizing the fastest algos to accomplish X.
And some people might say, well, you should know that anyway. The problem for me is, and I'm not speaking for every company of course, you never really use a lot of this stuff in most run-of-the-mill jobs. So of course you forget it, then have to study again pre-interview.
Problem solving is the best way to think of it, but it's awkward for me (and probably others) to spend minutes thinking, feeling pressured as someone just stares at you. And that's where memorizing the hows of typical problems helps.
That said, I just stopped doing them altogether. I'd passed a few doing the 'memorizing' described above, only to start and realize it wasn't at all relevant to the work we were actually doing. In that way I guess it's a bit of a two-way filter now.
The only part of "memorizing the fastest algorithm" the vast majority needs is whatever name it goes by in your library. Generic reusable code works very well in almost any language for algorithms.
Even if you are an exception, either you are writing the library, meaning you write that algorithm once for the hundreds of other users, or the algorithm was written once (long ago) and you are just spending months with a profiler trying to squeeze out a few more CPU cycles of optimization.
There are more algorithms than anyone can memorize that are not in your library, but either it is good enough to use a similar one that is already in your library, or you will build it once and, once again, it works so you never go back to it.
Which is to say, memorizing how to implement an algorithm is a negative: it means you don't know how to write/use generic reusable code. That lack is costing your company hundreds of thousands of dollars.
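To make the point concrete, here is a Python sketch of my own (not from the thread): the "memorized fastest algo" is usually one generic library call away:

```python
# The point above in code: you rarely hand-roll the "fastest algo";
# you reach for the generic, reusable version in the standard library.
import bisect
import heapq

data = sorted([9, 1, 7, 3, 5])   # [1, 3, 5, 7, 9]

# Binary search: no memorized loop, just the library call.
idx = bisect.bisect_left(data, 7)

# k smallest elements: no hand-written heap, just heapq.
smallest = heapq.nsmallest(2, [9, 1, 7, 3, 5])

print(idx)       # 3
print(smallest)  # [1, 3]
```

Knowing that these exist, and what their complexity is, matters far more day to day than being able to rewrite them under interview pressure.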
I’d say it’s not even problem solving; it’s more pattern recognition.
I actually love LC and have been doing a problem a week for years. Basically I give myself 30 minutes and see what I can do. It’s my equivalent to the Sunday crossword. After a while, the signals and patterns became obvious, to me anyway.
I also love puzzlerush at chess.com. In chess puzzles there are patterns and themes. I can easily solve a 1600 rated problem in under 3 seconds for a chess position I’ve never seen before not because I solve the position by searching some move tree in my mind, I just recognize and apply the pattern. (It also makes it easier to trick the player when rushing but even the tricks have patterns :)
That said, in our group we will definitely have one person ask the candidate a LC style question. It will probably be me asking and I usually just make it up on the spot based on the resume. I think it’s more fun when neither one of us know the answer. Algorithm development, especially on graphs, is a critical part of the job so it’s important to demonstrate competency there.
Software engineering is a hugely diverse field now. Saying you’re a programmer is kinda like saying you’re an artist. It does give some information but you still don’t really know what skill set that person uses day to day.
> memorizing fastest algos
I don't think most LC problems require you to do that. Actually, most of them I've seen only require basic concepts taught in Introduction to Algorithms, like shortest path, dynamic programming, binary search, etc. I think the only reason LC problems stress people out is the time limit.
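As a hedged illustration of what "basic concepts" means here: the coin-change problem mentioned upthread is usually solved with exactly this kind of bottom-up DP (my own sketch and test values, not the article's):

```python
# Classic coin-change problem: fewest coins from the given
# denominations that sum to `amount`, or -1 if impossible.
# Plain bottom-up dynamic programming, no exotic data structures.

def coin_change(coins, amount):
    INF = float("inf")
    # dp[a] = fewest coins needed to make amount a
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(coin_change([1, 2, 5], 11))  # 3  (5 + 5 + 1)
print(coin_change([2], 3))         # -1, impossible
```

Nothing here is beyond a first algorithms course; the stress comes from producing it cold, on a clock, in front of someone.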
I've never seen a leetcode problem that requires you to know how to hand code an ever so slightly exotic algorithm / data structure like Fibonacci heap or Strassen matrix multiplication. The benefit of these "fastest algos" is too small to be measured by LC's automatic system anyway. Has that changed?
My personal issue with LC is that it has a very narrow view of what "fast" programs look like, like most competitive programming problem sets. In real world fast programs are fast usually because we distribute the workload across machines, across GPU and CPU, have cache-friendly memory alignment or sometimes just design clever UI tricks that make slow parts less noticeable.
> you never really use a lot of this stuff in most run-of-the-mill jobs. So of course you forget it, then have to study again pre-interview.
I'm wondering how software devs explain this to themselves. What they train for vs what they actually do at their jobs differ more and more with time. And this constant cycle of forgetting and re-learning sounds like a nightmare. Perhaps people burn out not because of their jobs but the system they ended up in.
"Fastest algos" very rarely solve actual business problems, which is what most of us are here to do. There's some specialized fields and industries where extreme optimization is required. Most of software engineer work is not that.
There's probably a general positive correlation between knowing a lot of specific algorithms/techniques (i.e. as tested by LC) and being a great developer. HOWEVER I think the scenario of a real world job is far more a subset of that.
Firstly, you get like 30 mins to do these questions, which is small compared to the time variance introduced by knowing or not knowing the required algorithm. If you know it, you'll be done in like 10 mins with a perfect answer. Whereas if you don't know it, you could easily spend 30 mins figuring it out and fail. So while on average the people passed by LC may be good engineers, in any one scenario it's likely you reject a good engineer because the variance is large. And then it's easy to see why people get upset, because yeah, it feels dodgy to be rejected when you happen to not know some obscure algorithm off the top of your head. The process could be fairer.
Secondly, as many say, the actual job is rarely this technical stuff under such time pressure. Knowing algorithms or not means basically nothing when the job is like debugging CI errors for half your day.
I'm fine with that in an interview... I'm not fine with that in a literally AI-graded assignment where you cannot ask clarifying questions. In those cases, if I don't have a memorized answer, a lot of the time I can't even grasp the question at hand.
I've been at this for 30+ years now, I've built systems that handle millions of users and have a pretty good grasp at a lot of problem domains. I spent about a decade in aerospace/elearning and had to pick up new stuff and reason with it all the time. My issue is specifically with automated leetcode pre-interview screening, as well as the gamified sites themselves.
I'd say that learning to solve tough LeetCode problems has very little (if not precisely zero) value in terms of learning to do something useful as a programmer. You will extremely rarely need to solve these types of tougher select-the-most-efficient-algorithm problems in most real-world S/W dev jobs, and nowadays if you do, then just ask AI.
Of course you may need to pass an interview LeetCode test, in which case you may want to hold your nose and put in the grind to get good at them, but IMO it's really not saying anything good about the kind of company that thinks this is a good candidate filter (especially for more experienced ones), since you'd have to be stupid not to use AI if actually tasked with needing to solve something like this on the job.
If a position needs performance-critical, low-level, from-scratch code, and needs it so quickly that the developer must recall all of this stuff from memory, any plausible candidate likely wouldn't be asked to sit a generic technical interview, let alone some gotcha test.
Ironic that you’re touting these puzzles as useful interviewing techniques while also admitting that ChatGPT can solve them just fine.
If you’re hiring software engineers by asking them questions that are best answered by AI, you’re living in the past.
That was because the parent complained about not having good write-ups. You can use GPT, which has already been trained on publicly available solutions, to generate a very good explanation. Like a coaching buddy. Keeping in mind there are paid services that charge 15k USD for this type of thing, being able to upskill for just 20 bucks a month is an absolute steal.
Few people are in both circles of "can memorize answers" and "don't understand what they are doing".
You would need "photographic" memory.
It's bizarre because I see the opposite.
Most people memorize and cargo cult practices with no deeper understanding of what they are doing.
Been in software development for 30 years. I have no idea what "Leetcode" is. As far as I know I've never been interviewed with "Leetcode", and it seems like I should be happy about that.
And when someone uses "leet" when talking about computing, I know that they aren't "elite" at all and it's generally a red flag for me.
Leetcode with no prep is a pretty decent coding skill test
The problem is that it is too amenable to prep
You can move your score by something like 2 standard deviations with practice, which makes the test almost useless in many cases
On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
I think LC is used mostly as a metric of how much tolerance you have for BS and unpaid work: if you are willing to put in unpaid time to prepare for something with realistically zero relevance to the day-to-day duties of the position, then you are ripe enough to be squeezed out.
3 replies →
> On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
This framing of the problem is deeply troubling to me. A good test is one that evaluates candidates on the tasks that they will do at the workplace and preferably connects those tasks to positive business outcomes.
If a candidate's performance improves with practice, then so what? The only thing we should care about is that the interview performance reflects well on how the candidate will do within the company.
Skill is not a univariate quantity that doesn't change with time. Also it's susceptible to other confounding variables which negatively impact performance. It doesn't matter if you hire the smartest devs. If the social environment and quality of management is poor, then the work performance will be poor as well.
leetcode just shows why interviews are broken. As a former senior dev (retired now, thanks to almost dying) I can tell you that the ability to write code is like 5% of the job. Every interview I've ever attended has wasted gazillions of dollars and has robbed the company of 10X that amount.
Until companies can focus on things like problem solving, brainstorming, working as a team, etc. the situation won't improve. If I am wrong, why is it that the vast majority of my senior dev and dev management career involved the things I just mentioned?
(I had to leave the field, sadly, due to disability)
Oh, and HR needs to stop using software to filter. Maybe ask for ID or something; as it stands, the filters are flagging everyone, and the software is sinking the ship, taking you all down with it.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
What is there to clarify? Leetcode-type questions are usually clear, much clearer than in real life projects. You know the exact format of the input, the output, the range for each value, and there are often examples in addition to the question. What is expected is clear: given the provided example inputs, give the provided example outputs, but generalized to cover all cases of the problem statement. The boilerplate is usually provided.
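For anyone who hasn't seen the format, a typical stub looks roughly like this (an invented example in LeetCode's usual style, not an actual problem from the site):

```python
# A hypothetical LeetCode-style problem, illustrating how much is
# already specified: types, ranges, and examples come with the
# problem, and you only fill in the method body.
#
# Problem (invented for illustration):
#   Given a list of integers nums (1 <= len(nums) <= 10^4) and an
#   integer target, return indices of two numbers summing to target.
#   Example: nums = [2, 7, 11, 15], target = 9 -> [0, 1]

class Solution:
    def two_sum(self, nums, target):
        seen = {}  # value -> index of values visited so far
        for i, n in enumerate(nums):
            if target - n in seen:
                return [seen[target - n], i]
            seen[n] = i
        return []

print(Solution().two_sum([2, 7, 11, 15], 9))  # [0, 1]
```

With the signature, constraints, and worked examples handed to you like this, there is genuinely little left to ask about.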
One may argue that this is one of the reasons why leetcode-style questions are unrealistic: they are too well specified compared to real-life problems, which are often incomplete or even wrong and require you to fill in the gaps. Also, in real life, you may not always get to ask for clarification: "here, implement this", "but what about this part?", "I don't know, and the guy who knows won't be back before the deadline, do your best".
The "coin" example is a simplification; the actual problem statement is likely more complete, but the author of the article probably felt these details were not relevant to the article, though they would be for someone taking the test.
These interviews seem designed to filter out applicants with active jobs. In fact, I'd say they seem specifically designed to select for new CS graduates and H1B hires.
But isn't that the main skill actually being tested? How the candidate goes about solving problems? I mean, if all we did was measure people's skill at making sweeping assumptions, we'd likely end up with people who oversimplify problems, and all of software would go to shit and get insanely complex... Is the hard part writing the lines of code, or solving the problem?
Skill? LC is testing rote memorization of artificial problems you most likely never encounter in actual work.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.
The issue is that leetcode is something you end up with after discovery + scientific method + time, but there's no space in the interview process for any of that.
Your mind slides off leetcode problems because it reverses the actual on-the-job process and loses any context that'd give you a handle on the issue.
Where I interviewed, you had effectively 1 or 2 LC questions, but the interviewer offered clarifying questions, making for a real-time discussion and coding exercise.
This solves one problem, but having to live-code does add performance anxiety to the mix.
IMO leetcode has multiple problems.
1. People can be hired to take the test for you - surprise surprise
2. It is akin to deciding if someone can write a novel from reading a single sentence.
Hiring people to take the test is only viable for online assessments. For an onsite, it's very obvious if the candidates have cheated on the OA. I've been on the other side and it's transparent.
> It is akin to deciding if someone can write a novel from reading a single sentence.
For most decent companies, the hiring process involves multiple rounds of these challenges along with system design. So it's like judging writing ability by having candidates actually write and come up with sample plots. Not a bad test.
If they are on site, why not interview them? If the purpose of these online assessments is to be the mouth of the funnel, that process is starting to fail.
https://www.reddit.com/r/leetcode/comments/1mu3qjt/breaking_...
There are funded companies set up just to help you get past this stuff.
https://www.reddit.com/r/leetcode/comments/1iz6xcy/cheating_...
Personally I feel software development has become more or less like assembly line work. If I was starting out today I would seriously consider other options.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Huh? Of course you can. If you're practicing on leetcode, there's a discussion thread for every question where you can ask questions till the cows come home. If you're in a job interview, ask the interviewer. It's supposed to be a conversation.
> I wouldn't even mind the studying on leetcode types sites if they actually had decent explainers
If you don't find the hundreds of free explanations for each question to be good enough, you can pay for Leetcode Pro and get access to editorial answers which explain everything. Or use ChatGPT for free.
> It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well.
I don't mean to be rude, but it is 100% a matter of skill. That's good news! It means if you put in the effort, you'll learn and improve, just like I did and just like thousands and thousands of other humans have.
> Without any chance of additional info/questions it's literally a setup to fail.
Well with that attitude you're guaranteed to fail! Put in the work and don't give up, and you'll succeed.
Last year, I saw a lot of places do effectively AI/automated pre-interview screenings with a leetcode web editor and video capture... This is what I'm talking about.
I'm fine with hard questions in an actual interview.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Yeah, this one confused me. Not asking clarifying questions is one of the surest ways to fail an interview. Kudos if the candidate asks something the interviewers haven't thought of, although it's rare, as most problems go through a vetting process (along with leak detection).
How does asking clarifying questions work when a non-programmer is tasked with performing the assessment, because their programmers are busy doing other things, or find it degrading and pointless?
Many interviews now involve automated exercises on websites that track your activity (don't think about triggering a focus change event on your browser, it gets reported).
Also, the reviewer gets an AI report telling them whether you copied the solution from somewhere (expressed as a % probability).
You have a few minutes and you're on your own.
If you pass that abomination, maybe you get in-person rounds.
It's ridiculous what software engineers impose on their peers when hiring; ffs, lawyers, surgeons, and civil engineers get NO practical nor theoretical test, none.
The major difference between software devs and lawyers, surgeons, and civil engineers is that the latter three have fairly rigorous standards to pass to become a professional (bar, boards, and PE).
That could exist for software too, but I'm not sure HN folks would like that alternative any better. Like if you thought memorizing leetcode questions for 2 weeks before an interview was bad, well I have some bad news.
Maybe in 50-100 years software will have that, but things will look very different.
At least in the US, lawyers, surgeons, & civil engineers all have accredited testing to even enter the profession, in the form of the bar exam, boards, and FE & PE tests respectively. So they do have such theoretical tests, but only when they want to gain their license to practice in a given state. Software doesn't have any such centralized testing accreditation, so we end up with a mess.
> don't think about triggering a focus change event on your browser, it gets reported
So... my approach would be to just open dev tools and deactivate that event listener.
Show of practical skill or cheating?
The ones I've gotten have all seemed more like tests of my puzzle-solving skills than coding.
The worst ones I've had, though, had extra problems:
One I was only told about when I joined the interview, and that they would be watching live.
One where they wanted me streaming my face the whole time (maybe some people are fine with that).
And one that would count it against me if I tabbed to another page. So no documentation, because they assume I'm just googling it.
Still, it's mostly on me to prepare for and expect this stuff now.
You can make up API calls which you can say you'd implement later. As long as these are not tricky blocks, you'll be fine.
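As a sketch of that approach (all names here are made up for illustration): stub the non-essential helper, state out loud that you'd implement it later, and keep the core logic moving.

```python
def top_customer(orders: list[dict]) -> str:
    """Return the customer with the highest total spend."""
    totals: dict[str, float] = {}
    for order in orders:
        # Invented helper: in an interview you'd say "assume this
        # converts to a common currency; I'll implement it later."
        amount = normalize_currency(order["amount"], order["currency"])
        totals[order["customer"]] = totals.get(order["customer"], 0) + amount
    return max(totals, key=totals.get)

def normalize_currency(amount: float, currency: str) -> float:
    # Stubbed for now; good enough to keep the interview moving.
    return amount
```

The point is that the interviewer cares about the algorithmic part, not the plumbing; as long as the stub isn't hiding the tricky block, this is usually fine.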
For Google, Facebook and Amazon, yes. At least as of my last interviews there a few years ago; they're more interested in the data structure/algorithm choice.
But I have also been to places that demand actual working code, which is compiled and tested against cases.
Usually the problem is simpler there, so there's that.