Comment by samiv
2 days ago
The LC interviews are like testing how fast people can run 100m after practice, while the real job is a slow, arduous, never-ending jog with multiple detours and stops along the way.
But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.
I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)
https://github.com/ensisoft/detonator
but I would not be able to pass an LC-type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.
5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer.
Did you mean to type 25? 5 years ago, LC challenges were as prevalent as, if not more prevalent than, they are today. And a single interview for a job is not something I have ever seen in 15 years in the space (and with a bunch of successful OSS projects I can showcase).
I actually have the feeling it's not as hardcore as it used to be, on average. E.g. OpenAI doesn't have a straight-up LC interview even though they are probably the most sought-after company. Google and MS and others still do it, but it feels like it has less weight in the final feedback than it did before. Most en vogue startups have also ditched it for real-world coding exercises.
Probably due to the fact that LC has been thoroughly gamed and is even less of a useful signal than it was before.
Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
I don’t know if this has changed or perhaps was not representative but my entire loop at Anthropic involved people reviewing my code.
I literally got my first real job 26 years ago, at a fintech firm, by talking about my game engine.
There's an entire planet of jobs that have nothing to do with leetcode. I was talking about those, not FAANG stuff. Unfortunately I am not FAANG royalty.
>Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
Should be illegal honestly.
Not sure if that's a typo. 5 years ago was also pretty LC-heavy.
Ten years ago it was more based on Cracking the Coding Interview.
So I'd guess what you're referring to is even older than that.
Talking about general jobs, not FAANG-adjacent ones.
I read this, and intentionally did not read the replies below. You are so wrong. You can write a library, even an entirely new language from scratch, and you will still be denied employment for that library/language.
> 5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer.
Based on my own experiences, that was true 25 years ago. 20 years ago, coding puzzles had become a standard part of interviewing, but it was pretty lightweight. 5 years ago (covid!), everything was leetcode just to get to the interview stage.
I have been getting grilled on leetcode-style questions since the beginning of my career, over 12 years ago.
The FAANGs jump, and then the rest of the industry does some dogshit imitation of their process.
I guess I'm lucky I'm in the frontend webdev sphere instead of being a pure backend guy. I've had a couple of those live ones and just declined them. I did manage to implement a "snake" algorithm once but got rejected because I wasn't able to talk about time/space complexity.
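For what it's worth, here's a minimal sketch (purely illustrative, not my actual submission) of the kind of complexity discussion they were after: keep the snake body in a queue plus a Set of occupied cells, so each move only touches the head and the tail.

    // Hypothetical sketch of a snake step; cells are encoded as "x,y" strings.
    type Cell = string;

    class Snake {
      private body: Cell[] = ["0,0"];            // head is the last element
      private occupied = new Set<Cell>(["0,0"]); // O(1) collision checks

      step(dx: number, dy: number, grew: boolean): boolean {
        const [hx, hy] = this.body[this.body.length - 1].split(",").map(Number);
        const next: Cell = `${hx + dx},${hy + dy}`;

        if (!grew) {
          // Free the tail first so moving into the old tail cell stays legal.
          // Array.shift() is O(length); a ring buffer or linked list makes it
          // O(1), which is exactly the kind of trade-off to talk through.
          const tail = this.body.shift()!;
          this.occupied.delete(tail);
        }
        if (this.occupied.has(next)) return false; // ran into itself

        this.body.push(next);      // O(1)
        this.occupied.add(next);   // O(1)
        return true;
      }
    }

    // Space is O(length of the snake); each step is O(1) aside from the shift.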
>The LC interviews are like testing how fast people can run 100m after practice
Ah, but, the road to becoming good at Leetcode/100m sprint is:
>a slow, arduous, never-ending jog with multiple detours and stops along the way
Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.
LC interviews were made popular by companies that were started by CS students because they like feeling that this stuff is important. They're also useful when you have massive numbers of applicants to sift through because they can be automated and are an objective-seeming way to discard loads of applicants.
Startups that wanted to emulate FAANGs then cargo-culted them, particularly if they were also founded by CS students or ex-FAANG (which describes a lot of them). Very, very few of these actually try any other way of hiring and compare them.
Being able to study hard and learn something well is certainly a great skill to have, but leetcode is a really poor skill to choose. It's not one you can acquire on the job, so it rules out anyone who doesn't have months to spend, in their own time, studying something that's inherently not very useful. If they instead tested skills that are hard and take effort to learn but are also relevant to the job, they could also find people who are good at learning on the job, which is what they are actually looking for.
But why stop there? Why not test candidates with problems they have never seen before? Or problems similar to those the hiring organization actually faces? Leetcode mostly relies on memorizing patterns with a shallow understanding; it mainly shows that candidates are good at gaming the format. Does that imply quality in any way? Some people argue that being willing to study for leetcode shows some virtue. I very much disagree with that.
I think you have a misunderstanding. Most companies that do LC-style interviews usually show unknown problems.
Memorizing the Top 100 list from Leetcode only works for a few companies (notably and perplexingly, Meta) but doesn't work for the vast majority.
Also, just solving the problem isn't enough to perform well in the interview. Getting the optimal solution is just table stakes. There's communication, tradeoffs between alternative solutions, coding style, follow-up questions, opportunities to show off language trivia, etc.
Memorizing problems is wholly not the point of Leetcode grinding at all.
In terms of memorizing "patterns", in mathematics and computer science all new discovery is just a recombination of what was already known. There's virtually no information coming from outside the system like in, say, biology or physics. The whole field is just memorized patterns being recombined in different ways to solve different problems.
To play devil's advocate, being able to memorize patterns and recognize which patterns apply to a given problem is extremely valuable. Tons of software dev is knowing the subset of algorithms, data structures, and architectures that apply to a similar problem and being able to adapt them.
> Leetcode mostly relies on memorizing patterns
Math is like that as well, though. It's about learning all the prior axioms and laws, knowing the allowed simplifications, and so on.
> Or problems similar to the problems of the organization hiring?
People complain, rightly so in some cases, that their "interview" is really them doing some (unpaid) work for the company.
> If it didn't actually work, it would've been discarded by companies long ago.
The statement I've singled out above is a very confident one, considering that inertia in large companies is a byword at this point. Further, "work" could conceivably mean many things in this context, from "it narrows our massive applicant pool" to "it selects for factor X," with X being clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure fitness for almost any real-world job.
> If it didn't actually work, it would've been discarded by companies long ago
You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one?
I never made any claims about optimality. It works (for whatever reason), hence companies continue to use it.
If it didn't work, these companies wouldn't be able to function at all.
It must be the case that it works better than running an RNG on everyone who applied.
Does it mean some genius software engineer who wrote a fundamental part of the Linux kernel but never learned about Minimum Spanning Trees got filtered out? Probably. But it's okay. That guy would've been a pain in the ass anyway.
Does it work though?
When I look at the messy Android code, Fuchsia's commercial failure, Dart being almost killed by politics, Go's marvellous design, WinUI/UWP's catastrophic failure, how C++/CX got replaced with C++/WinRT, the ongoing issues with macOS Tahoe...
I am glad that apparently I am not good enough for such projects.
Zero of those failures are of a technical nature.
The fact that they failed is not evidence that leetcode interviews fail to select for high-quality engineers.
> the road to becoming good
In my experience, it's totally not true.
Many college students of my generation are pretty good with LC hards these days purely due to FOMO-induced obsessive practice, which doesn't translate to a practical understanding of the job (or any other part of CS like OS/networks/languages/automata, either).
I will give you an exercise: pick an LC hard problem, and it's very likely that an experienced engineer who has only done "real work" will not know the "trick" required to solve it (unless it's something common like BFS or backtracking).
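To make "common" concrete, here's a generic sketch of the BFS pattern behind a large share of grid problems (hypothetical example, assuming a 0/1 grid where 0 is walkable):

    // Shortest path on a grid via BFS; returns -1 if the goal is unreachable.
    function shortestPath(
      grid: number[][],
      start: [number, number],
      goal: [number, number]
    ): number {
      const rows = grid.length, cols = grid[0].length;
      const dist = Array.from({ length: rows }, () => new Array<number>(cols).fill(-1));
      const queue: [number, number][] = [start];
      dist[start[0]][start[1]] = 0;

      // Plain BFS: cells are visited in rings of increasing distance,
      // so the first time we reach the goal, the distance is optimal.
      for (let i = 0; i < queue.length; i++) {
        const [r, c] = queue[i];
        if (r === goal[0] && c === goal[1]) return dist[r][c];
        for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
          const nr = r + dr, nc = c + dc;
          if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
              grid[nr][nc] === 0 && dist[nr][nc] === -1) {
            dist[nr][nc] = dist[r][c] + 1;
            queue.push([nr, nc]);
          }
        }
      }
      return -1;
    }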
I say this as someone with a "Knight" badge on LeetCode, whatever that means, lest you think this is just sour grapes.
> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
I see it differently. I wouldn't say it's reasonably good; I'd say it's a terrible metric that's very tenuously correlated with on-the-job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind, the one-eyed man is king.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
Eh. As someone who did tech and then medicine, I can tell you a lot of great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, are going to increase your odds of success at nearly any task, but there are plenty of other skills that are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist.
It's also a filter for people who are ok with working hard on something completely pointless for many months in order to get a job.
> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
My company uses LeetCode. All I want is sane interfaces and good documentation. What I get is far more likely to be something clever, broken, and poorly documented than something "excellent", so something is missing from this correlation.
> If it didn't actually work, it would've been discarded by companies long ago
That makes the assumption that company hiring practices are evidence based.
How many companies continue to use pseudo-science Myers Briggs style tests?
>how fast people can run 100m after practice, while the real job is a slow, arduous, never-ending jog with multiple detours and stops along the way
I've always explained it as demonstrating your ping pong skills to get on the basketball team.
Yes. If work was leetcode problem solving, I would actually enjoy it. Updating npm packages and writing tiny features that get canned a week later is all not that stimulating.
Mistakenly read this as you having written that 2D game engine (which looks awesome, btw) for a job interview, just to get the job: "I can't compete with this!!! HOW CAN I COMPETE WITH THESE TYPES OF SUBMISSIONS!?!?! OH GAWD!!!"
> SMEGMA companies
Microsoft, Google, Meta, Amazon, I'm guessing... but, what are the other two?
I prefer AGAMEMNON: Apple, Google, Amazon, Microsoft, Ebay, Meta, NVIDIA, OpenAI, Netflix
"Startups" and "Enterprise"? I guess that basically covers everything
Lol :)
"SMEGMA companies." :D
And nowadays people are blatantly using AI to answer questions like this (https://www.finalroundai.com/coding-copilot). Some are even trying to stumble through design questions using AI.