Comment by jstimpfle

4 years ago

I don't think this is about algorithms.

Yes it is. When you have two programs that do the same thing, except one is orders of magnitude faster than the other, it's almost* always because the faster program uses an algorithm with better time complexity.

(*In some cases the big-O time complexity may be the same for two programs, yet one can still be much faster in practice due to micro-optimizations, heuristics, etc. Even in cases like this, people who are good at algorithms will be better at writing those heuristics and micro-optimizations than people who don't care about algorithms.)
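To make the distinction concrete, here's a toy example (invented for illustration, nothing to do with the terminal case): two functions that "do the same thing" -- report whether any pair in a list sums to a target -- where the only difference is algorithmic complexity.

```python
def has_pair_quadratic(nums, target):
    # O(n^2): compare every pair of elements.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_linear(nums, target):
    # O(n): one pass, remembering values seen so far in a hash set.
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```

Both return identical answers on every input, but on a million elements the quadratic version does on the order of 10^12 comparisons while the linear one does about 10^6 set lookups -- exactly the "orders of magnitude" gap described above.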

  • In practice things are much more nuanced than what you learn in school and textbooks.

    Sometimes you have a convenient library that sort of does what you want, but it does a lot more. Do you just use it, or do you re-implement only the subset that you need which can be optimized to run much faster?

    That's not an algorithm question but more of a software engineering tradeoff between impact (how badly your users/business need it faster), resources and priority (how much time you can spend on optimization), and whether you want to maintain that code (as opposed to calling the library and making it somebody else's problem). Sometimes the correct thing to do really is to call the slower library instead of writing your own highly optimized routines.

    In this case of terminal emulation, apparently the folks at Microsoft weren't aware of the faster solution, which you could say is an algorithm issue (but that's stretching things a bit -- you surely wouldn't see terminal emulation in an algorithms textbook, and the fact that someone has memorized the textbook algorithms for an interview doesn't automatically mean they would figure out a better way of emulating a terminal. Presumably Microsoft does some whiteboarding on algorithms as well, but that didn't prevent this fiasco from happening). Anyway, the concerns I mentioned above are probably still relevant here (the directdraw run thing was a convenient but slow library that apparently did much more than they needed).

    Algorithm "skills" are probably overrated in the sense that people can memorize textbook algorithms and their analyses all they want, but real-world problems are often more complicated than that, and those "skills" don't necessarily translate. For one, there's no general way to prove lower bounds, so a less imaginative programmer might just assume their inefficient implementation is all that is possible, until somebody else points out a better method. People are getting technical interviews wrong if (as interviewers) they ask standardized algorithm questions -- candidates expect them and prepare for them, memorizing answers if need be. Ideally, an interviewer wants to find the candidate who can discover a good solution to a novel problem they have never seen or thought about before.

    I'd further claim that while some of these skills can be learned and improved through training, there's a different "ceiling" for everyone, since a lot of the intuition and imaginative aspects of problem solving can't really be taught or acquired. I've done a couple of years of competitive programming in my younger years, and I can clearly observe that, when pressed with hard problems, there's a clear difference in how well people respond. The model in the original article assumes that these kinds of skills come with experience, but in my experience that's mostly not true if you're dealing with "hard-ish" problems like how to vastly optimize text layout in terminals.

    • > Algorithm "skills" are probably overrated in the sense that people can memorize textbook algorithms and their analyses all they want ...

      Memorizing algorithms has very little to do with algorithm skills. You can memorize any algorithm textbook you like, then go take part in an algorithm competition and see how far that will get you (spoiler alert: not far). If you are good at algorithms, you have the ability to construct efficient solutions to problems you have never seen before. And by "construct" I don't mean pattern-matching to previously-memorized algorithms, I mean writing something that didn't exist before you wrote it.

      > People are getting technical interviews wrong if (as interviewers) they ask standardized algorithm questions ... But as the interviewer, ideally they'd want to be able to find the candidate who can discover a good solution for a novel problem they never saw or thought about before.

      I agree with you 100% here. Good algorithm questions are questions that can't be solved by memorized answers.

      > I'd further claim that while some of these skills can be learned and improved through training, there's a different "ceiling" for everyone since a lot of the intuition and imaginative aspects of problem solving can't really be taught or acquired [...] The model in the original article assumes that these kind of skills come with experience, but in my experience that's mostly not true if you're dealing with "hard-ish" problems like how to vastly optimize text layout in terminals.

      Again I agree with you 100%. This is also why I went in a different direction with my comment compared to the expectations set in the article. As you noted, the writer of the article expected everyone to learn these skills; in contrast, I said that it's ok for different people to have different skillsets.

      > In practice things are much more nuanced than what you learn in school and textbooks. Sometimes [...]

      This argument gets made far more often than it actually holds. Yes, sometimes things are nuanced and complicated and whatnot, but we don't have to look at hypotheticals -- we can look at the actual case we have right here. In this case things weren't nuanced and complicated in some way that justified a 40x performance drop when rendering colored text. In this case things really were simple: it turns out this was just a horribly inefficient solution (like many products written by Microsoft are). Furthermore, it turns out Microsoft devs truly were incapable of recognizing that a few simple optimizations would deliver orders of magnitude better performance without breaking anything, and without introducing unnecessary complexity. We know this because Microsoft devs called the prospect of optimizing this a "doctoral research project", and then some guy did it in his free time over a few weekends.
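For a flavor of the kind of "simple optimization" at play here, consider batching: instead of issuing one draw call per character, group consecutive cells that share the same color into runs and issue one call per run. The cell/run model below is a toy sketch invented for illustration -- it is not the Windows Terminal code or any real rendering API.

```python
def coalesce_runs(cells):
    """cells: list of (char, color) pairs for one row.
    Returns a list of (text, color) runs, merging consecutive
    cells that share the same color."""
    runs = []
    for ch, color in cells:
        if runs and runs[-1][1] == color:
            # Same color as the previous run: extend it.
            runs[-1] = (runs[-1][0] + ch, color)
        else:
            # Color changed: start a new run.
            runs.append((ch, color))
    return runs

row = [("h", "red"), ("i", "red"), (" ", "red"), ("o", "blue"), ("k", "blue")]
# coalesce_runs(row) -> [("hi ", "red"), ("ok", "blue")]
# i.e. two draw calls for the row instead of five per-character calls.
```

For mostly-uniform rows (the common case in a terminal), this collapses thousands of per-glyph calls into a handful of per-run calls, which is how constant-factor rewrites like this can yield order-of-magnitude speedups without any clever data structures.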
