
Comment by roncesvalles

5 hours ago

>The LC interviews are like testing people how fast they can run 100m after practice

Ah, but the road to becoming good at Leetcode / the 100m sprint is:

>a slow arduous never ending jog with multiple detours and stops along the way

Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.

Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.

That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.

> If it didn't actually work, it would've been discarded by companies long ago

You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one?

But why stop there? Why not test candidates with problems they have never seen before? Or with problems similar to the ones the hiring organization actually faces? Leetcode mostly relies on memorizing patterns with a shallow understanding; it mainly shows that candidates are good at gaming the test. Does that imply quality in any way? Some people argue that being willing to study for Leetcode shows some virtue. I very much disagree with that.

  • To play the devil's advocate: being able to memorize patterns and recognize which ones apply to a given problem is extremely valuable. Tons of software dev is knowing which algorithms, data structures, and architectures apply to a problem similar to one you've seen before, and being able to adapt them.

    • It's funny you mention that.

      That's literally what a CS curriculum teaches you too, and that's what "leetcode" questions are: fundamental CS problems you'd learn about in any computer science program.

      It's called "reducing" one problem to another. We had a mandatory semester-long class that spent a lot of time on reducing problems: figuring out how to solve a new type of question/problem with an algorithm or two you already know.

      Like showing that "this is just bin packing". There are algorithms for that which "suck" in the theoretical-CS sense, but there are real-world heuristics that are "good enough" to get shit done (see the sketch after this thread).

      Or showing that something "doesn't work, period" by reducing the halting problem to it (assuming nobody has solved that one yet - oh, and good luck btw. if you want to try ;) )

  • > Leetcode mostly relies on memorizing patterns

    Math is like that as well though. It's about learning all the prior axioms and laws, knowing which simplifications are allowed, and so on.

    • In the same way that writing and performing a new song is "just memorizing prior patterns and laws"

      or that writing a new book is the same.

      I.e. it's not about that. Like sure it helps to have a base set of shared language, knowledge, and symbols, but math is so much more than just that.
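
To make that bin-packing example concrete, here is a minimal sketch (mine, not the commenter's; the task, function name, and numbers are invented for illustration): once you recognize that "store these files on as few fixed-size disks as possible" is just bin packing, you can reach for a known "good enough" heuristic like first-fit decreasing, even though the exact problem is NP-hard.

    # Hypothetical example: reduce "pack files onto the fewest disks" to bin packing
    # and solve it with the first-fit-decreasing heuristic.
    def pack_files(file_sizes, disk_capacity):
        disks = []  # each entry: [remaining capacity, [file sizes placed on this disk]]
        for size in sorted(file_sizes, reverse=True):   # place the biggest files first
            if size > disk_capacity:
                raise ValueError("a file is larger than an entire disk")
            for disk in disks:
                if disk[0] >= size:      # first existing disk with enough room
                    disk[0] -= size
                    disk[1].append(size)
                    break
            else:                        # no existing disk fits: open a new one
                disks.append([disk_capacity - size, [size]])
        return [files for _, files in disks]

    print(pack_files([700, 400, 300, 200, 100], disk_capacity=1000))
    # -> [[700, 300], [400, 200, 100]]  (two disks, which is optimal here)

First-fit decreasing isn't optimal in general, but it is exactly the kind of "good enough" real-world algorithm the comment above is talking about.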

> If it didn't actually work, it would've been discarded by companies long ago.

The line I've singled out above is a very confident statement, considering that inertia in large companies is proverbial at this point. Further, "work" could conceivably mean many things in this context, from "it narrows our massive applicant pool at all" to "it selects for factor X," with X clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure a good fit for almost any real-world job.

> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.

This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire.

> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.

My company uses LeetCode. All I want is sane interfaces and good documentation. What I get is far more likely to be something clever, broken, and poorly documented than something "excellent", so something is missing from this correlation.

> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.

I see it differently. I wouldn't say it's reasonably good; I'd say it's a terrible metric that's very tenuously correlated with on-the-job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind, the one-eyed man is king.

> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.

Eh. As someone who did tech and then medicine, I can say that a lot of great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, will increase your odds of success at nearly any task, but plenty of other skills are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist.