Comment by roncesvalles

11 hours ago

>The LC interviews are like testing people how fast they can run 100m after practice

Ah, but the road to becoming good at Leetcode/the 100m sprint is:

>a slow arduous never ending jog with multiple detours and stops along the way

Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.

Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.

That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.

LC interviews were made popular by companies that were started by CS students, who like the feeling that this stuff is important. They're also useful when you have massive numbers of applicants to sift through, because they can be automated and are an objective-seeming way to discard loads of applicants.

Startups that wanted to emulate FAANGs then cargo-culted them, particularly if they were also founded by CS students or ex-FAANG (which describes a lot of them). Very, very few of these actually try any other way of hiring and compare the results.

Being able to study hard and learn something well is certainly a great skill to have, but leetcode is a really poor skill to choose. It's not one you can acquire on the job, so it rules out anyone who doesn't have months of their own time to spend studying something that's inherently not very useful. If companies instead tested skills that are hard and take effort to learn but are also relevant to the job, they could also find people who are good at learning on the job, which is what they are actually looking for.

Does it work though?

When I look at the messy Android code, Fuchsia's commercial failure, Dart being almost killed by politics, Go's marvellous design, WinUI/UWP's catastrophic failure, how C++/CX got replaced with C++/WinRT, the ongoing issues with macOS Tahoe, ...

I am glad that apparently I am not good enough for such projects.

  • Zero of those failures are of a technical nature.

    The fact that they failed is not evidence that leetcode interviews fail to select for high-quality engineers.

    • On the contrary, they prove that having high-quality engineers, by whatever measure that happens to be, does not correlate with product quality.

It's also a filter for people who are ok with working hard on something completely pointless for many months in order to get a job.

But why stop there? Why not test candidates with problems they have never seen before, or problems similar to those the hiring organization actually faces? Leetcode mostly relies on memorizing patterns with a shallow understanding; at best it shows that candidates are good at gaming a system. Does that imply quality in any way? Some people argue that being willing to study for leetcode shows some virtue. I very much disagree with that.

  • I think you have a misunderstanding. Most companies that do LC-style interviews present problems the candidate hasn't seen before.

    Memorizing the Top 100 list from Leetcode only works for a few companies (notably and perplexingly, Meta) but not for the vast majority.

    Also, just solving the problem isn't enough to perform well in the interview. Getting the optimal solution is just table stakes. There's communication, tradeoffs between alternative solutions, coding style, follow-up questions, opportunities to show off language trivia, etc.

    Memorizing problems is not the point of Leetcode grinding at all.

    In terms of memorizing "patterns", in mathematics and computer science all new discovery is just a recombination of what was already known. There's virtually no information coming from outside the system like in, say, biology or physics. The whole field is just memorized patterns being recombined in different ways to solve different problems.

    • It’s not about memorizing individual problems per se, but rather recognizing overall patterns and turning the process into a gameable endeavor. This can give candidates an edge, but it doesn’t necessarily demonstrate higher-level ability beyond surface familiarity with common patterns and the expectations around them. I’d understand the value if the job actually involved work similar to what's reflected in LeetCode-style problems, but in most cases, that couldn’t be further from reality. LeetCode serves little purpose beyond measuring a candidate’s willingness to invest time and effort. That’s the only real virtue it rewards. But ultimately, I believe LeetCode-style interviews are measuring the wrong metric.

  • To play the devil's advocate: being able to memorize patterns and recognize which patterns apply to a given problem is extremely valuable. Tons of software dev is knowing the subset of algorithms, data structures, and architecture that apply to a similar problem and being able to adapt them.

    • It's funny you mention that.

      That's literally what CS teaches you too. Which is what "leetcode" questions are: fundamental CS problems that you'd learn about in a computer science curriculum.

      It's called "reducing" one problem to another. We had an entire mandatory semester-long class that spent a lot of time on reducing problems: figuring out how you can solve a new type of problem with an algorithm or two that you already know.

      Like showing that "this is just bin packing". And there are algorithms for that which "suck" in the CS sense, but there are real-world heuristics that are "good enough" to get shit done (see the first sketch below).

      Or showing that something "doesn't work, period" by showing that the halting problem reduces to it (see the second sketch below). That's assuming nobody has solved the halting problem yet - oh, and good luck btw. if you want to try ;)
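
      To make the "good enough" point concrete, here is a minimal sketch of first-fit decreasing, the textbook bin-packing heuristic. This is illustrative Python only; the function and variable names are mine, not anything from the thread:

        def first_fit_decreasing(items, capacity):
            # Sort largest-first, then drop each item into the first open
            # bin with room. Classic "good enough" heuristic: fast, and
            # provably within roughly 11/9 of the optimal bin count.
            bins = []    # remaining capacity of each open bin
            packed = []  # the items placed in each bin
            for item in sorted(items, reverse=True):
                for i, room in enumerate(bins):
                    if item <= room:
                        bins[i] -= item
                        packed[i].append(item)
                        break
                else:  # no open bin fits this item: open a new one
                    bins.append(capacity - item)
                    packed.append([item])
            return packed

        # 37 units into bins of 10: FFD finds 4 bins, which is optimal here
        print(first_fit_decreasing([5, 7, 5, 2, 4, 2, 5, 1, 6], capacity=10))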
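
      And a minimal sketch of the halting-problem style of reduction, again purely illustrative: if a perfect detector for "does this program ever print hello?" existed, it would decide the halting problem too, so no such detector can exist.

        def make_wrapper(prog, x):
            # wrapper() prints "hello" if and only if prog(x) halts
            # (ignoring exceptions for simplicity), so a perfect
            # hello-detector would decide halting, which is impossible.
            def wrapper():
                prog(x)           # runs forever iff prog does not halt on x
                print("hello")    # reached exactly when prog(x) has halted
            return wrapper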

  • > Leetcode mostly relies on memorizing patterns

    Math is like that as well, though. It's about learning all the prior axioms and laws, knowing the allowed simplifications, and so on.

    • In the same way that writing and performing a new song is "just memorizing prior patterns and laws",

      or that writing a new book is the same.

      I.e. it's not about that. Like sure it helps to have a base set of shared language, knowledge, and symbols, but math is so much more than just that.

> If it didn't actually work, it would've been discarded by companies long ago

You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one?

  • I never made any claims about optimality. It works (for whatever reason), hence companies continue to use it.

    If it didn't work, these companies wouldn't be able to function at all.

    It must be the case that it works better than running an RNG on everyone who applies.

    Does it mean some genius software engineer who wrote a fundamental part of the Linux kernel but never learned about Minimum Spanning Trees got filtered out? Probably. But it's okay. That guy would've been a pain in the ass anyway.

> If it didn't actually work, it would've been discarded by companies long ago.

The statement I've singled out above is a very confident one, considering that inertia in large companies is a byword at this point. Further, "work" could conceivably mean many things in this context, from "simply narrows our massive applicant pool" to "selects for factor X," X being clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure fit for almost any real-world job.

> If it didn't actually work, it would've been discarded by companies long ago

That makes the assumption that company hiring practices are evidence-based.

How many companies continue to use pseudo-scientific Myers-Briggs-style tests?

> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.

I see it differently. I wouldn't say it's reasonably good; I'd say it's a terrible metric that's very tenuously correlated with on-the-job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind, the one-eyed man is king.

> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.

Eh. As someone who did tech and then medicine, I can say that a lot of great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, are going to increase your odds of success at nearly any task, but there are plenty of other skills that are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist.

> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.

This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire.

> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.

My company uses LeetCode. All I want is sane interfaces and good documentation. We are far more likely to get something clever, broken, and poorly documented than something "excellent", so something is missing from this correlation.