Comment by Kamq

1 month ago

So on one hand, you're kinda right. HN is filled with exaggeration (imo often justified) from people venting because they have to deal with the bad parts of this system all day. That seems natural in a dev-filled space.

But I don't think your comment is fair.

> We’re told of the engineer who isn’t hired by Google because he can’t invert a binary tree. Everyone else piles on and decree that, yes indeed, you cannot measure developer efficiency with a Leetcode or whiteboard problem.

Because this is a bad way to judge engineers. Or, rather, it's a great way only if they don't already know how to invert a binary tree. Most of the job is figuring out something you don't know yet and doing it. Giving an engineer a random Wikipedia page on an obscure algorithm and having them implement it is a great interview tactic. Having them regurgitate something common is bad; there will be a function for it somewhere, and you just need to call it.
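For reference, the "invert a binary tree" task from the famous Homebrew interview story amounts to a few lines once you've seen it, which is the point about regurgitation. A minimal sketch (the `Node` class and recursive shape here are illustrative, not any particular library's API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def invert(root: Optional[Node]) -> Optional[Node]:
    """Mirror the tree by recursively swapping left and right children."""
    if root is None:
        return None
    root.left, root.right = invert(root.right), invert(root.left)
    return root

# Example: invert a three-node tree; the children swap places.
tree = Node(1, Node(2), Node(3))
inverted = invert(tree)
# inverted.left.value == 3, inverted.right.value == 2
```

Once you know the trick it's trivial; if you don't, it's a genuine (if narrow) problem-solving exercise.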

> Meanwhile in the real world, hordes of awful engineers deliver no story points, because they in fact, do nothing and only waste time and lowers morale.

I agree with you on this one. Those people need to be fired. That doesn't mean story points are a good metric; often 90% of a team's long-term value comes from people like Tim, and losing them can destroy projects. Just because something bad is happening doesn't justify killing 90% of a team's value.

The only thing I've seen that works is to give team managers more discretion and rigorously fire managers who regularly create poor-performing teams (you often have to bump manager pay for this, and that's fine; good managers are worth their weight in gold).

> Meanwhile in the real world, each job opportunity has thousands of applicants who can barely write a for loop. Leetcode and whiteboards filter these people out effectively every day.

You do need to filter for people that can code. That doesn't mean filtering for inverting binary trees is a good idea. Having people submit code samples that they're proud of is a much better approach for a first filter.

> Meanwhile in the real world, metrics on delivery, features and bugs drive company growth and success for those companies that employ them.

Bullshit. Basically all companies use metrics, and most companies are garbage at delivering useful software. A company being years behind and a million over budget on a software project, and eventually delivering something people don't want is so cliche that it's expected. And these companies regularly get out competed by small teams using 1% of the resources, as long as the small teams give half of a shit. In fact, if you want my metric for team success, what percentage of the team actually cares is a good one.

You're proposing a solution with a <20% success rate. Don't act like it's a gold standard that drives business value to new heights. With the system as it is today, most companies would be better off getting out of software and having a third party do it for them.

My wider point is not that the way companies are run is perfect and that we should stop the “innovators” (to quote the sibling comment). Each of these examples speaks of corporate dysfunction, but we never give any weight to the constraints that forced them into place. Leetcode is bad, but it’s bad in the sense that it errs too heavily on filtering out false negatives - the cheaper of the two errors. The alternative is worse.

Giving Tim the benefit of the doubt in this story, it still holds true that for every extraordinary and invisible superstar like Tim there are 99 under-performers who are indistinguishable from him.

We need to empathise with our managers and the processes in our organisations to understand their purpose and how they came to be.

We, software engineers, keep picking out singular data points to point at a flawed and unfair world that goes against our self-inflated egos.

The brew guy failing to invert the binary tree and Tim being great do not invalidate whiteboards and story points as general practices.

To your final point, the best organisations that I’ve worked with used metrics in a very effective way (mostly in start ups). The worst did too. Just because some do it poorly, does not mean that it’s bad across the board.

What is tiring is the unfairly low bar for quality of evidence demanded of anti-establishment notions in software development before they are taken as gospel by this community.

And, in my experience, the people who are the strongest proponents of sidestepping or dismantling these processes overlap strongly with those who also do not deliver value to their teams.

  • > Leetcode is bad, but it’s bad in the sense that it errs too heavily on filtering out false negatives

    But, it doesn't. It filters for something orthogonal to development, which is ability to obsess over clever algorithmic solutions. Ok, well my company does HackerRank instead of LeetCode, maybe LeetCode is magically better, but I'm not seeing anything that tells me someone who grinds LeetCode is actually going to be useful on my team.

    Look, you want an idiot check to make sure someone is actually able to code, fine. That's probably a good idea. But the number of stories on here about people being turned away because they hadn't run into a particular tricky algorithm problem is concerning.

    > Giving Tim the benefit of the doubt in this story, it still holds true that for every extraordinary and invisible superstar like Tim there are 99 under-performers who are indistinguishable from him.

    But they aren't indistinguishable. The author of the blog post was perfectly able to distinguish them. That's my point. There are plenty of ways to be able to distinguish them, this metric just isn't one of them. Because it's a bad metric.

    It may not be legible to the higher ups, but good lower-level managers have no problem distinguishing good unconventional people from under-performers.

    > We need to empathise with our managers and the processes in our organisations to understand their purpose and how they came to be.

    I do empathize with the managers, at least the lower level ones. That's why I advocated for putting them in complete control and giving them unilateral firing privileges and increasing their pay.

    > the best organisations that I’ve worked with used metrics in a very effective way (mostly in start ups). The worst did too.

    You're really making it sound like metrics (at least as traditionally practiced in software) are orthogonal to being a good organization. If that's true, we might want to re-think how much time we spend on them and how much money we spend capturing them.

    Now, if you want to use profit, adoption, or user satisfaction as metrics, I'd love to talk about that, but I've seen nothing in my experience in the corporate world that tells me that the way we're currently using them is even net positive value.

    • It only appears that HackerRank/Leetcode isn’t good at filtering because you’re viewing it from your perspective, and not the perspective of the entire population that is tested. To you, the predictive power at the top tail end of the distribution is low, because you’re thinking of two strong developers Alice and Bob. Alice happens to know algorithm X and would pass the test, whereas Bob does not. But that’s not the population we’re testing. Think more along the lines of Alice and Bob and your grandmother were the test population. It’s absolutely fantastic at filtering the lower 95% of applicants because they will _never_ be able to pass. Yes, inadvertently 2.5% of “good developers” are filtered too, but that doesn’t matter to the outcome of your company. They just want someone competent, and they don’t care if it’s Alice or Bob.

      The same logic sort of applies to Tim and his performance. The bias of an imperfect metric is probably much better than the bias of letting an army of middle managers go with their gut. Besides, it doesn’t have to be a hard filtering function at this stage, but a metric indicating that we need to look a little closer at Tim.
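The population arithmetic in the filtering argument above can be sketched directly. The numbers (95% of applicants who can't pass at all, a test that wrongly rejects half of the competent 5%) are the hypothetical figures from the comment, not measured data:

```python
# Hypothetical population from the comment above.
applicants = 10_000
competent = int(applicants * 0.05)    # 500 who can actually code
incompetent = applicants - competent  # 9_500 who cannot

# Assumed test behavior: non-coders never pass; the test also
# wrongly rejects half of the competent candidates.
pass_rate_competent = 0.5
pass_rate_incompetent = 0.0

passed_competent = int(competent * pass_rate_competent)        # 250
passed_incompetent = int(incompetent * pass_rate_incompetent)  # 0

false_negatives = competent - passed_competent  # 250 = 2.5% of the pool
precision = passed_competent / (passed_competent + passed_incompetent)
print(precision)  # 1.0 - everyone who passes is competent
```

Under these assumptions the company loses 2.5% of all applicants as false negatives, but every candidate who survives the filter is competent, which is the trade-off the comment is describing.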