Comment by baobabKoodaa
4 years ago
I understand that different people have different skill sets; that's totally fine. I don't think every single senior developer at Microsoft should have the skill set to fix performance issues like this. That said, I do think developers should be able to recognize when a task requires a skill they don't have. In those cases it's best to reach out to someone within the organization who has that skill. Or, you know, you can be a senior developer at Microsoft and condescendingly claim that it's impossible, or that it "takes a PhD to develop that". I guess this is what you get when you fill organizations with people who don't care about performance, computer science, or algorithms.
I'm going to use this case as another example to point to when people argue that algorithm skills are useless and shouldn't be part of interviews.
I don't think this is about algorithms.
Yes it is. When you have two programs that do the same thing, but one is orders of magnitude faster than the other, it's almost* always because the faster program was written with better algorithmic time complexity.
(*In some cases the big-O time complexity may be the same for two programs, yet one can still be much faster in practice due to micro-optimizations, heuristics, etc. Even then, people who are good at algorithms will be better at writing those heuristics and micro-optimizations than people who don't care about algorithms.)
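To make that concrete, here's a minimal, hypothetical sketch (plain Python, nothing to do with the terminal codebase): two functions that return exactly the same result, where the only difference is algorithmic complexity. On inputs of a few tens of thousands of items, the set-based version is typically orders of magnitude faster.

    # Hypothetical example, not from any real codebase: both functions compute
    # the elements of `a` that also appear in `b`, in the same order.

    import random
    import time

    def common_items_quadratic(a, b):
        # O(len(a) * len(b)): `x in b` is a linear scan over the list `b`,
        # repeated for every element of `a`.
        return [x for x in a if x in b]

    def common_items_linear(a, b):
        # O(len(a) + len(b)): build a hash set of `b` once, then each
        # membership test is (amortized) constant time.
        b_set = set(b)
        return [x for x in a if x in b_set]

    if __name__ == "__main__":
        a = [random.randrange(10**6) for _ in range(20_000)]
        b = [random.randrange(10**6) for _ in range(20_000)]

        t0 = time.perf_counter()
        slow = common_items_quadratic(a, b)
        t1 = time.perf_counter()
        fast = common_items_linear(a, b)
        t2 = time.perf_counter()

        assert slow == fast  # same output, very different runtime
        print(f"quadratic: {t1 - t0:.3f}s   linear: {t2 - t1:.3f}s")

Micro-optimizing the first function would only shave constant factors; the big win comes from changing the data structure and the complexity, which is the point being made here.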
In practice things are much more nuanced than what you learn in school and textbooks.
Sometimes you have a convenient library that sort of does what you want, but it does a lot more. Do you just use it, or do you re-implement only the subset you need, which can be optimized to run much faster?
That's not an algorithm question but a software engineering tradeoff between impact (how badly your users/business need it to be faster), resources and priority (how much time you can spend on optimization), and whether you want to maintain that code (as opposed to calling the library and making it somebody else's problem). Sometimes the correct thing to do really is to call the slower library instead of writing your own highly optimized routines.
In this case of terminal emulation, the folks at Microsoft apparently weren't aware of the faster solution, which you could say is an algorithm issue, but that's stretching things a bit -- you surely wouldn't see terminal emulation in an algorithms textbook, and the fact that someone has memorized the textbook algorithms for an interview doesn't automatically mean they would figure out a better way of emulating a terminal. Presumably Microsoft does some whiteboarding on algorithms as well, but that didn't prevent this fiasco from happening. Anyway, the concerns I mentioned above are probably still relevant here (the directdraw run thing was a convenient but slow library that apparently did much more than they needed).
Algorithm "skills" are probably overrated in the sense that people can memorize textbook algorithms and their analyses all they want, but real world problems are often more complicated than that, and those "skills" don't necessarily translate/apply. For one, there's no general way to prove lower bounds, so a less imaginative programmer might just assume their inefficient implementation is all that is possible, until somebody else points out a better method. People are getting technical interviews wrong if (as interviewers) they ask standardized algorithm questions -- the candidates expect them and prepare for them, memorizing them if need be. But as the interviewer, ideally they'd want to be able to find the candidate who can discover a good solution for a novel problem they never saw or thought about before.
I'd further claim that while some of these skills can be learned and improved through training, there's a different "ceiling" for everyone, since a lot of the intuition and imagination involved in problem solving can't really be taught or acquired. I did a couple of years of competitive programming when I was younger, and I can clearly observe that, when pressed with hard problems, there's a clear difference in how well people respond. The model in the original article assumes that these kinds of skills come with experience, but in my experience that's mostly not true when you're dealing with "hard-ish" problems like how to vastly optimize text layout in terminals.