But how about we flip the original statement around:
Many problems thought to require giant software packages/libraries are very solvable locally if you know what you are doing
This is why LC is actually meaningful - imagine if you faced the coin challenge problem IRL in prod and decided to pull in a constraint solver: what would've been a 25-line function is now a giant Python library that carries 30MB of native dependencies around.
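For reference, here's roughly what that self-contained function could look like - a minimal bottom-up dynamic programming sketch for the coin change problem (the name and interface are just for illustration):

    def min_coins(denominations, target):
        # Minimum number of coins summing to target, or None if impossible.
        # best[v] = fewest coins needed to make value v (classic bottom-up DP).
        INF = float("inf")
        best = [0] + [INF] * target
        for v in range(1, target + 1):
            for coin in denominations:
                if coin <= v and best[v - coin] + 1 < best[v]:
                    best[v] = best[v - coin] + 1
        return best[target] if best[target] != INF else None

    # min_coins([25, 10, 5, 1], 37) == 4  (quarter, dime, two pennies)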
These solutions are often lacking in many ways, since mentally offloading some of the work means you don't have to understand the problem in depth or how the designated tool solves it, which can lead to nasty surprises in production.
We see this everywhere - people pulling in questionable npm packages to save themselves 30 mins of thinking or 20 mins of reading docs, or people convinced that they need that huge 3D framework that comes with an editor to make a WebGL widget on their landing page, etc.
You become more willing to accept the bloat of others, who pulled in Electron just because they were too afraid to learn how native UI works.
People need to be more curious, and strive to be more knowledgeable.
You are right to be sceptical of “dependency bloat” - many many systems could be made simpler to debug and maintain if that advice was followed.
But: I urge you to give constraint solvers a try. This is something very different from some hipster ORM library. The good ones will be a single dependency containing extremely carefully optimised matrix math code that lets you solve optimisation problems faster and with much more maintainable code.
Try some toy problem with the HiGHS solver, for instance: it has good Python bindings and even ships as a wasm package you can run in browsers. “Find the optimal times to run this heat pump given electricity prices”, “find the ideal train departure times that optimise for when people need to travel and minimise transfers”.
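To make the heat pump example concrete, here's a minimal sketch using HiGHS through SciPy's milp wrapper (scipy.optimize.milp calls HiGHS under the hood); the prices and the run-time requirement are made-up toy data:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy data: hourly electricity prices for one day (made-up numbers).
    prices = np.array([30, 28, 25, 24, 26, 35, 45, 50, 48, 40, 38, 36,
                       34, 33, 35, 42, 55, 60, 52, 44, 40, 36, 32, 31], float)
    hours_needed = 8  # the heat pump must run at least 8 of the 24 hours

    # One binary variable per hour: run (1) or don't (0); minimize total cost.
    res = milp(
        c=prices,  # objective: sum of prices over the chosen hours
        constraints=LinearConstraint(np.ones((1, 24)), lb=hours_needed),
        integrality=np.ones(24),  # all variables integer...
        bounds=Bounds(0, 1),      # ...and restricted to {0, 1}
    )
    print("run during hours:", np.flatnonzero(res.x > 0.5), "cost:", res.fun)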
Wait are you making the opposite claim? That one should eschew the "correct" formulation in favor of a bespoke one? Despite the stated (and hopefully obvious) difficulties that brings with maintenance, generalization, etc?
You probably haven't seen front-end projects that pull in tons of libraries for a simple sorting or grouping task - sometimes one solvable with built-in array functions alone. It's a true nightmare when you have to deal with that kind of project.
I'm sorry, but after reading your comment I can't decide if you favor writing the dynamic programming version or pulling in the constraint solver.
Both are correct in the sense that they give the right output, and I don't think pulling in a huge library (maintained by who knows, and for how long) is going to be beneficial for maintenance. And having a good understanding of both the precise requirements and the algorithms used to solve them also helps maintenance.
It's just that having two dozen lines of code tucked away in a function in the repo seems infinitely more maintainable to me than using some giant framework (of possibly unknown quality) to solve the issue.
This is a general argument I'm making, not one that applies just to this constraint solver.
I agree with essentially everything you said. While reinventing the wheel isn't always the most efficient solution, such actions have sometimes paid dividends in my past.
> People need to be more curious, and strive to be more knowledgeable.
Absolutely, though I am not sure a lack of curiosity or drive is always the issue. It's one thing to greenfield one's own solution in an attempt to better one's abilities, but it's also a bit idealistic in a working world of impending deadlines and death-march Agile sprints.
In this case, a 25 line function to solve a constraint would likely take me far less time than grokking an external library's documentation, but in many cases, I feel developers jump on external dependencies because time is of the essence.
Given ANY problem: first ask yourself, have others solved this before? That is a hard question to answer, since we don't know in which context a similar puzzle was solved before (structures in different domains can boil down to the same mathematical puzzle). A literature search would be very costly in time.
The most important point is the flexibility in being able to change the puzzle (a small change in the puzzle can result in a large change in the type of solution), as the author of the article points out. The bespoke algorithm is brittle. Description of the problem itself is less brittle (you can reuse most of the problem statement).
It may sound incredibly expensive to pull in a constraint solver, but if the application warrants constraint-solver-quality results, it should afford either the dependency, or the data structures and solvers used in the solver dependency, to optimize for the application.
It's just bizarre to ask people to beat the feats of those standing on the shoulders of giants, without allowing them to stand on the shoulders of giants too.
Think about why one is recruiting employees with data structure skills (so more than just information plumbers). Is it really so strange that the most qualified people understand the reality that the state of the art is constantly changing, but also understand at least the basics of how these solvers work internally?
Viewed through this lens, the ideal job candidates are those who design, implement and maintain... constraint solvers! Given their familiarity with the constraint-solver code base, they could profile the solver package while it's solving the puzzle. Do this on many instances of the puzzle, and keep track of the dead-ends and optimal solutions, to figure out which functionality can be ripped out of the solver and which must be kept.
So in order of preference:
1. Programmers or mathematicians (or equivalent, think physicists, etc.) familiar with 1 or more constraint solver source code bases.
2. End-users of a constraint solver package, with sufficient familiarity (as a user) with constraint solvers. WITH data structure and algorithms experience.
3. End-users of a constraint solver package as above, but without data structure and algorithms knowledge
4. People with data structure and algorithms knowledge.
If 1. is too expensive you'll have to combine skills over multiple hires.
If industry is really interested in good profiles for these criteria, it should sponsor universities / students to get familiar with constraint solver usage, and if possible constraint solver development and contributions. After a few years the candidates will pop into existence.
>>It may sound incredibly expensive to pull in a constraint solver
Like this is actually kinda the point of the article. On every constraint programming article half the comments insist that they could do the (example) problem in a dozen lines of C so bringing in SCIP or OR-Tools is "too much" for the (example) problem. Wayne's point (here and in other articles he's written) is that actually constraint problems are fucking everywhere.
This is one of the real lessons of learning Prolog: data queries, scheduling, SAT, LP, integer programming, optimization... etc; it's all constraint programming, hell parsing is a constraint problem if you do it right. No one sees it like that though so they see their entire business application logic as containing at most one knapsack.
I feel like if I'm being asked this in an interview, they're not asking me to use a constraint solver, they're asking me to _write_ a constraint solver. Just for a specific constraint problem, not a more general constraint solver.
You're right, but that just shows how fundamentally silly this interview approach is.
In any real engineering situation I can solve 100% of these problems. That's because I can get a cup of coffee, read some papers, look in a textbook, go for a walk somewhere green and think hard about it... and yes, use tooling like a constraint solver. Or an LLM, which knows all these algorithms off by heart!
In an interview, I could solve 0% of these problems, because my brain just doesn't work that way. Or at least, that's my expectation: I've never actually considered working somewhere that does leetcode interviews.
I was told to use ANY language in an interview. I asked them if they were sure, then solved it with J. They were not too pleased and asked me if I could use another language, so I did it in Prolog and we moved on to the next question. Then the idiot had the audacity to say I should not use "J and Prolog" but any commonly known language. I asked if assembly was fine, and they said no - perhaps Python or JavaScript. I did the rest in Python; needless to say I didn't get the job. :-)
I haven't been asked leetcode questions in a while and when I was asked, it was an easy level problem. I don't know where they ask hard leetcode problems, I also never solved a hard leetcode problem on my own.
More exactly, you can't invent on the spot algorithms that took others who knows how many years to invent. I.e. the question ends up being more about whether you know a specific algorithm, which amounts to "invent it if you don't know about it". It's absolutely silly to test for the ability to invent one on the spot, so it's a pretty pointless interview question really.
> Or at least, that's my expectation: I've never actually considered working somewhere that does leetcode interviews.
Hrm. So what you're saying is you've never actually taken or given this style of interview. Nor presumably ever worked at a company that did this interview. So if on the off-chance these interviews actually were a somewhat successful tool for filtering candidates you wouldn't actually know it?
Depends on your experience and what you’re interviewing for. At a high enough level, the questions are pulled from the easier side, and the interviewer doesn’t want you to fail.
If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.
Do you know how few people in this world even know what a constraint solver is, let alone how to correctly define the problem into one?
I used a constraint solver to solve a homework problem once in my CS degree 3rd year. My god just writing the damn constraints was a huge cognitive load!
I did this: wrote an Essence-prime program to generate Minion solver code for a simple instance of the knapsack problem, as part of a startup's "solve one of these and get an interview" challenges. Because I had used those tools recently for a contract job (and wrote/presented a paper at the invitation of the solver authors), I thought it would be fun and didn't really want the job. Got an interview, but every dev was like "why did you use a cannon to swat a fly?" and was clearly concerned that without strict supervision I would create baroque towers of garbage for them to clean up.
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.
I do hope you're exaggerating here, but in case you aren't: this is an extremely simplistic view of what (software) engineers have to do, and thus what hiring managers should optimize for. I'd put "ability to work in a team" above "raw academic/reasoning ability" for the vast majority of engineering roles, any day.
Not that the latter doesn't matter, of course, but it's by no means the one and only measure.
I've won a couple hackathons with just CP-SAT & Linear Programming which led to my first jobs. I'm surprised not more people know/use it. Very inefficient compared to the "correct" answer but the development speed is much faster.
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot
Sometimes you just don't want someone that takes these shortcuts. I think being able to solve the problem without a constraint solver is much more impressive
This - the only downside to a constraint solver is it's usually slower. If you want them to write a fast algorithm, then specify that. Have an actual metric for it, if they can pass it with the declarative language, then great. If not, they should have written a more complicated algorithm.
You might be interested in trying out JuMP.jl. It's a Julia package that abstracts over constraint solvers and can do some very complex reformulations automatically to take the declarative definition of your problem and turn it into the types of constraints that the solver you're using supports.
Yes and no: I've asked questions like this in interviews, and I'd count it as a plus if the candidate reached for a constraint solver. They're criminally underused in real-world software engineering and this would show the candidate probably knows how to get the right answer faster instead of wasting a bunch of time.
Now, if they did answer with a constraint solver, I'd probably ask some followup whiteboard questions to make sure they do actually know how to code. But just giving a constraint solver as an answer definitely wouldn't be bad.
Yes, especially if the interviewee said something like 'this may not be asymptotically optimal, but if it's not a known bottleneck, then I might start with a constraint solver to get something working quickly and then profile later.' Especially if it's a case where even the brute-force solution is tricky.
Otherwise penalizing interviewees for suggesting quick-and-dirty solutions reinforces bad habits. "Premature optimization is the root of all evil," after all.
A general constraint solver would be terribly inefficient for problems like these. It's a linear problem, and a constraint solver just can't handle O(10^6) variables without some beefy machine.
This will be true in some interviews, but not in all.
I'm generally against using leetcode in interviews, but wherever I've seen it used it's usually for one reason & one reason alone: known dysfunctional hiring processes. These are processes where the participants in the hiring process are aware of the dysfunction in their process but are either powerless or - more often - too disorganised to properly reform the process.
Sometimes this is semi-technical director level staff leveraging HR to "standardise" interview techniques by asking the same questions across a wide range of teams within a large corp. Other times this is a small underresourced team cobbling together interview questions from online resources in a hurry, not having the cycles to write a tailored process for themselves.
In these cases, you're very likely to be dealing with a technical interviewer who is not an advocate of leetcode interviewing & is attempting to "look around" the standardised interview scoring approach to identify innovative stand out candidates. In a lot of cases I'd hazard even displaying an interest in / some knowledge of solvers would count significantly in your favour.
This. Literally every problem in NP can be cast as a constraint problem. The question of whether a solver is the right solution varies a lot depending on the application, and in an interview, it's almost by definition not the right solution.
They can also be dreadfully slow (and typically are) compared to just a simple dynamic program.
Great insight. But this is sadly not applicable to interviews.
> It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem
This nails it. The point of these problems is to test your cleverness. That's it. Presenting a not-clever solution of using constraint solvers shows that you have experience and your breadth of knowledge is great. It doesn't show any cleverness.
>The point of these problems is to test your cleverness.
In my experience, interviewers love going to the Leetcode "Top Interview 150" list and using problems in the "Array String" category. I'm not a fan of these problems for the kind of jobs I've interviewed for (backend Python mostly), as they are almost always a "give me a O(n) runtime O(1) memory algorithm over this array" type challenge that really doesn't resemble my day to day work at all. I do not regularly do in-place array algorithms in Python because those problems are almost always handled by other languages (C, Rust, etc.) where performance is critical.
I wish interviewers would go to the "Hashmap" section for interviews in Python, JavaScript, etc., type of languages. They are much less about cleverness and more about whether you can demonstrate using the appropriate tools in your language to solve problems that actually do resemble ones I encounter regularly.
There's also the problem of difficulty tuning on some of these. Problem 169 (Majority Element) being rated "Easy" for getting a O(n) runtime O(1) memory solution is hilarious to me. The algorithm first described in 1981 that does it (Boyer–Moore majority vote algorithm) has a Wikipedia page. It's not a difficult to implement or understand algorithm, but its correctness is not obvious until you think about it a bit, at which point you're at sufficient "cleverness" to get a Wikipedia page about an algorithm named after you. Seems excessive for an "Easy" problem.
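For the curious, the voting algorithm itself is tiny once you've seen it - a sketch, assuming a majority element is guaranteed to exist (as the Leetcode problem states):

    def majority_element(nums):
        # Boyer-Moore majority vote: O(n) time, O(1) space.
        # Matching elements increment the counter, others decrement it;
        # an element with > n/2 occurrences survives all the cancellations.
        candidate, count = None, 0
        for x in nums:
            if count == 0:
                candidate = x
            count += 1 if x == candidate else -1
        return candidate  # only valid when a majority is guaranteed

    # majority_element([2, 2, 1, 1, 1, 2, 2]) == 2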
Interviews should not be about cleverness. They should test that you can code. I almost never write an algorithm because all the important algorithms are in my standard library already. Sure back in school I did implement a red-black tree - I don't remember if it worked, but I implemented it: I can do that again if you need me to, but it will take me several days to get all the details right (most of it looking up how it works again). I use red-black trees all the time, but they are in the language.
You need to make sure a candidate can program, so asking programming questions makes sense. However, the candidate should not be judged on whether they finish or get an optimal or even correct answer. You need to know if they write good code that you can understand, and are on a path such that, given a reasonable amount of time on a realistic story, they would finish it and get it correct. If someone has seen the problem before they may get the correct answer, but if they have not seen it they won't know it and shouldn't be expected to get the right answer in an hour.
Majority Element is rated easy because it can be trivially solved with a hashmap in O(N) space and that's enough to pass the question on Leetcode. The O(1) space answer is probably more like a medium.
Honestly in day to day programming I find data types & associated APIs are so so much more important than algorithms.
I would rather work with a flexible data type with suboptimal performance than a brittle data type that maybe squeezes out some extra performance.
Your example of in-place array mutation feels like a good example of such a thing. I feel like there should be a category of interviewing questions for "code-safety" not just performance.
> The point of these problems is to test your cleverness.
Last round I did at Meta it was clearly to test that you grinded their specific set of problems, over and over again, until you could reproduce them without thinking. It's clear because the interviewers are always a bit surprised when you answer with whatever is not the text-book approach on both leetcode and on the interview guide they studied.
Cleverness is definitely not high on the list of things they're looking for.
Interviewers are typically surprised when you do something new because they reuse questions with a lot of people and eventually you end up seeing most variations of a solution.
Cheekily using counting sort ended things the one and only time I agreed to interview with Meta. Definitely improved my inbox for a couple years though.
Bottom up dynamic programming algorithms require some cleverness.
All of the ones listed can be solved with a top down dynamic programing algorithm. Which just means "write recursive solution, add caching to memoize it".
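To make that recipe concrete, here's what it looks like for the coin change problem - a sketch using functools.cache for the memoization (fine for small targets; a deep recursion would need the bottom-up version):

    from functools import cache

    def min_coins(denominations, target):
        @cache
        def best(v):
            if v == 0:
                return 0             # exact change: no more coins needed
            if v < 0:
                return float("inf")  # overshot: dead end
            # Try every coin, keep the cheapest continuation.
            return 1 + min(best(v - c) for c in denominations)
        answer = best(target)
        return answer if answer != float("inf") else None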
For some of these, you can get cleverer. For example the coin change problem is better solved with an A* search.
Still, very few programmers will actually need these algorithms. The top thing we need is to recognize when we accidentally wrote a quadratic algorithm. A quick scan of https://accidentallyquadratic.tumblr.com/ shows that even good people on prominent projects make that mistake on a constant basis. So apparently being able to produce an algorithm on the test, doesn't translate to catching an algorithmic mistake in the wild.
For the life of me I still can't consistently solve dynamic programming problems. Because "write a clever brute force solution that can be cached" is so broad that there are tons of variations out there, and a slight twist can throw you off fast.
When I interview with problem solving problems, the point is to understand how the candidate thinks, communicates, and decomposes problems. Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Interviewers learn nothing from an instant epiphany, and they learn next to nothing from someone being stumped.
Unfortunately, this is why we can't have nice things. Problem solving questions in interviews can be immensely useful tools that, sadly, are rarely used well.
> the point is to understand how the candidate thinks, communicates, and decomposes problems.
100% and it's a shame that over time this has become completely lost knowledge, on both sides of the interview table, and "leetcode" is now seen as an arbitrary rote memorization hurdle/hazing ritual that software engineers have to pass to enter a lucrative FAANG career. Interviewees grind problems until they've memorized every question in the FAANG interview bank, and FAANG interviewers will watch a candidate spit out regurgitated code on a whiteboard in silence, shrug, and say "yep, they used the optimal dynamic programming solution, they pass."
Would a good answer be "I can do it as a constraint problem, but since I guess you are not asking for this, the solution is..." and then proceed as usual?
> the point is to understand how the candidate thinks, communicates, and decomposes problems
Interviewers always say this, but consider: would you endorse a candidate who ultimately is unable to solve the problem you've presented them, even if they think, communicate, and decompose problems well? No interview in this industry prizes those things over getting the answer right.
> Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Absolutely agree. When I interview, I start with a simple problem and add complexity as they go. Can they write X? Can they combine it with Y? Do they understand how Z is related?
>The point of these problems is to test your cleverness.
No, it's just memorization of 12 or so specific patterns. The stakes are too high for virtually anyone going in to stake passing on their own inherent problem-solving ability. LeetCode has been so thoroughly gamified that it has lost all utility as a differentiator beyond willingness to prepare.
Yeah, it tests if the candidate enjoys the programming-adjacent puzzle game of LeetCode, which is a perfectly decent game to play, but it is just a signal.
If somebody grinds LeetCode while hating it, it signals they are really desperate for a job and willing to jump through hoops for you.
If somebody actually enjoys this kind of stuff, that is probably a signal that they are a rare premium nerd and you should hire them. But they probably play Project Euler as well (is that still up?).
If somebody figures out a one-trick to minmax their LeetCode score… I dunno, I guess it means they are aware of the game and want to solve it efficiently. That seems clever to me…
Constraint solvers are also often not applicable to the real world either.
Many formulations scale in a way that is completely unusable in practice.
Knowing how to get tools like Z3 or Gurobi to solve your problems is its own skill, and one that some companies will hire for, but it's not a general-purpose technology you can throw at everything.
This post is the unironic version of "FizzBuzz in TensorFlow", where just because you have a big hammer doesn't mean everything is a nail. And I say that as an enjoyer of big hammers, including SMT solvers.
No, it's not a measure of cleverness. It's about whether you can break down problems and apply common methods. That's the entire job. It's a learnable skill, and honestly, resisting learning because of personal biases is a red flag in my book.
Most interviews are based on the premise that if a diabetic can't synthesize their own insulin in their basement, they are somehow cheating at the game of life.
If my wife's blood sugar is high, she takes insulin. If you need to solve a constraint problem, use a constraint solver.
If your company doesn't make and sell constraint solving software, why do you need me to presume that software doesn't exist and invent it from scratch?
It’s explicitly not testing if you can synthesize insulin in a crisis, it’s a general aptitude test for “if we tell you you need to cram this textbook on how to synthesize insulin by next week and then ask you how to do it on a call, can you coherently repeat that back to us?”
If you can figure out that a problem can be efficiently solved with a constraint solver then you can also write the two for loops and maybe some auxiliary recursive function to solve the given toy instance.
My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.
Because of this, I've just started rejecting leetcode/AI interview steps outright... I'll do homework, shared screen, 1:1, etc., but won't do the above. I tend to fail them about half the time. It only stings more because I wouldn't even mind the studying on leetcode-type sites if they actually had decent explainers for the questions and working answers when going through them. I know this kind of defeats the challenge aspect, but learning is about 10x harder without it.
It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well. Without any chance of additional info/questions it's literally a setup to fail.
edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions.
The LC interviews are like testing people how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way.
But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.
I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)
but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.
>The LC interviews are like testing people how fast they can run 100m after practice
Ah, but, the road to becoming good at Leetcode/100m sprint is:
>a slow arduous never ending jog with multiple detours and stops along the way
Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.
Mistakenly read this as you wrote that 2D game engine (which looks awesome btw) for a job interview to get the job: "I can't compete with this!!! HOW CAN I COMPETE WITH THESE TYPES OF SUBMISSIONS!?!?! OH GAWD!!!"
Yes. If work was leetcode problem solving, I would actually enjoy it. Updating npm packages and writing tiny features that get canned a week later is all not that stimulating.
And nowadays people are blatantly using AI to answer questions like this (https://www.finalroundai.com/coding-copilot). Even trying to stumble through design questions using AI
100%. I just went through an interview process where I absolutely killed the assignment (had the best one they'd seen), had positive signal/feedback from multiple engineers, CEO liked me a lot etc, only to get sunk by a CTO who thought it would be cool to give me a surprise live test because of "vibe coding paranoia". 11 weeks in the process, didn't get the role. Beyond fucking stupid.
It's funny because this repo really does seem vibe-coded. Obviously I have no reason not to believe you, but man! All those emojis in the install shell script - I've never seen anyone other than an AI do that :) Maybe you're the coder that the AI companies trained their AI on.
Hah I feel you there. Around 2 years ago I did a take home assignment for a hiring manager (scientist) for Merck. The part B of the assignment was to decode binary data and there were 3 challenges: easy, medium and hard.
I spent around 40 hours of time and during my second interview, the manager didn't like my answer about how I would design the UI so he quickly wished me luck and ended the call. The first interview went really well.
For a couple of months, I kept asking the recruiter if anyone successfully solved the coding challenge and he said nobody did except me.
Out of respect, I posted the challenge and the solution on my github after waiting one year.
A surprise live test is absolutely the wrong approach for validating whether someone's done the work. IMO the correct approach is to go through the existing code with the applicant and have them explain how it works. Someone who used AI to build it (or in the past had someone else build it for them) wouldn't be able to do a deep dive into the code.
Damn... that's WAY more than I'll do for an interview process assignment... I usually time box myself to an hour or two max. I think the most I did was a tic-tac-toe engine but ran out of time before I could make a UI over it.
It's not really memorizing solutions. Yes, you can get quite far by doing so, but follow-ups will trip people up. However, if you have memorized it and can answer follow-ups, I don't see a problem with Leetcode-style problems. Problem solving is about pattern matching, and the more patterns you know and can match against, the better your ability to solve problems.
It's a learnable skill, and better to pick it up now. Personally, I've solved Leetcode-style problems in interviews which I hadn't seen before, and some of them were dynamic programming problems.
These days it's a highly learnable skill, since GPT can solve many of the problems while also coming up with very good explanations of the solution. Better to pick it up than not.
It is and isn't. I'd argue it's not memorizing exact solutions (think copy-paste) but memorizing the fastest algos to accomplish X.
And some people might say well, you should know that anyways. The problem for me is, and I'm not speaking for every company of course, you never really use a lot of this stuff in most run of the mill jobs. So of course you forget it, then have to study again pre interview.
Problem solving is the best way to think of it, but it's awkward for me (and probably others) to spend minutes thinking, feeling pressured as someone just stares at you. And that's where memorizing the hows of typical problems helps.
That said, I just stopped doing them altogether. I'd passed a few using the 'memorizing' described above, only to start and realize it wasn't at all relevant to the work we were actually doing. In that way I guess it's a bit of a two-way filter now.
I'm fine with that in an interview... I'm not fine with that, in a literally AI graded assignment where you cannot ask clarifying questions. In those cases, if you don't have a memorized answer a lot of times I cannot always grasp the question at hand.
I've been at this for 30+ years now, I've built systems that handle millions of users and have a pretty good grasp at a lot of problem domains. I spent about a decade in aerospace/elearning and had to pick up new stuff and reason with it all the time. My issue is specifically with automated leetcode pre-interview screening, as well as the gamified sites themselves.
I'd say that learning to solve tough LeetCode problems has very little (if not precisely zero) value in terms of you as a programmer learning to do something useful. You will extremely rarely need to solve these types of tougher select-the-most-efficient-algorithm problems in most real-world S/W dev jobs, and nowadays if you do, then just ask AI.
Of course you may need to pass an interview LeetCode test, in which case you may want to hold your nose and put in the grind to get good at them, but IMO it's really not saying anything good about the kind of company that thinks this is a good candidate filter (especially for more experienced ones), since you'd have to be stupid not to use AI if actually tasked with needing to solve something like this on the job.
There's probably a general positive correlation between knowing a lot of specific algorithms/techniques (i.e. as tested by LC) and being a great developer. HOWEVER I think the scenario of a real world job is far more a subset of that.
Firstly these questions you get like 30 mins to do, which is small compared to the time variance introduced by knowing or not knowing the required algorithm. If you know it you'll be done in like 10 mins with a perfect answer. Whereas if you don't know you could easily spend 30 mins figuring it out and fail. So while on average people passed by LC may be good engineers, in any one scenario it's likely you reject a good engineer because the variance is large. And then it's easy to see why people get upset, because yeah it feels dodgy to be rejected when you happen to not know some obscure algorithm off the top of your head. The process could be fairer.
Secondly, as many say, the actual job is rarely this technical stuff under such time pressure. Knowing algorithms or not means basically nothing when the job is like debugging CI errors for half your day.
Been in software development for 30 years. I have no idea what "Leetcode" is. As far as I know I've never been interviewed with "Leetcode", and it seems like I should be happy about that.
And when someone uses "leet" when talking about computing, I know that they aren't "elite" at all and it's generally a red flag for me.
Leetcode with no prep is a pretty decent coding skill test
The problem is that it is too amenable to prep
You can move your score by like 2 stddev with practice, which makes the test almost useless in many cases
On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
leetcode just shows why interviews are broken. As a former senior dev (retired now, thanks to almost dying) I can tell you that the ability to write code is like 5% of the job. Every interview I've ever attended has wasted gazillions of dollars and has robbed the company of 10X that amount.
Until companies can focus on things like problem solving, brainstorming, working as a team, etc. the situation won't improve. If I am wrong, why is it that the vast majority of my senior dev and dev management career involved the things I just mentioned?
(I had to leave the field, sadly, due to disability)
Oh, and HR needs to stop using software to filter. Maybe ask for ID or something; however, the filters are flagging everyone and the software is sinking the ship, with all of you on it.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
What is there to clarify? Leetcode-type questions are usually clear, much clearer than in real life projects. You know the exact format of the input, the output, the range for each value, and there are often examples in addition to the question. What is expected is clear: given the provided example inputs, give the provided example outputs, but generalized to cover all cases of the problem statement. The boilerplate is usually provided.
One may argue that this is one of the reasons why leetcode-style questions are unrealistic: they are too well specified compared to real-life problems, which are often incomplete or even wrong and require you to fill in the gaps. Also, in real life, you may not always get to ask for clarification: "here, implement this", "but what about this part?", "I don't know, and the guy who knows won't be back before the deadline, do your best".
The "coin" example is a simplification, the actual problem statement is likely more complete, but the author of the article probably felt these these details were not relevant to the article, though it would be for someone taking the test.
These interviews seem designed to filter out applicants with active jobs. In fact, I'd say that they seem specifically for selecting new CS graduates and H1B hires.
But isn't that the main skill actually being tested? How the candidate goes about solving problems? I mean, if all we did was measure people's skills at making sweeping assumptions, we'd likely end up with people who oversimplify problems, and all of software would go to shit and get insanely complex... Is the hard part writing the lines of code or solving the problem?
> My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.
The issue is that leetcode is something you end up with after discovery + scientific method + time, but there's no space in the interview process for any of that.
Your mind slides off leetcode problems because it reverses the actual on-the-job process and loses any context that'd give you a handle on the issue.
1. People can be hired to take the test for you - surprise surprise
2. It is akin to deciding if someone can write a novel from reading a single sentence.
Hiring people to take the test for you is only viable for online assessments. For an onsite, it's very obvious if the candidates have cheated on the OA. I've been on the other side and it's transparent.
> It is akin to deciding if someone can write a novel from reading a single sentence.
For most decent companies, the hiring process involves multiple rounds of these challenges along with system design. So it's like judging writing ability by having candidates actually write and come up with sample plots. Not a bad test.
Where I interviewed you had effectively 1 or 2 LC question but the interviewer offered clarifying questions making for a real time discussion and coding exercise.
This solves one problem but it does add performance anxiety to the mix having to live code.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Huh? Of course you can. If you're practicing on leetcode, there's a discussion thread for every question where you can ask questions till the cows come home. If you're in a job interview, ask the interviewer. It's supposed to be a conversation.
> I wouldn't even mind the studying on leetcode types sites if they actually had decent explainers
If you don't find the hundreds of free explanations for each question to be good enough, you can pay for Leetcode Pro and get access to editorial answers which explain everything. Or use ChatGPT for free.
> It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well.
I don't mean to be rude, but it is 100% a matter of skill. That's good news! It means if you put in the effort, you'll learn and improve, just like I did and just like thousands and thousands of other humans have.
> Without any chance of additional info/questions it's literally a setup to fail.
Well with that attitude you're guaranteed to fail! Put in the work and don't give up, and you'll succeed.
Last year, I saw a lot of places do effectively AI/automated pre-interview screenings with a leetcode web editor and video capture... This is what I'm talking about.
I'm fine with hard questions in an actual interview.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Yeah, this one confused me. Not asking clarifying questions is one of the surest ways of failing an interview. Kudos if the candidates ask something that the interviewers haven't thought of, although it's rare as most problems go through a vetting process (along with leak detection).
Many interviews now involve automated exercises on websites that track your activity (don't think about triggering a focus change event on your browser, it gets reported).
Also, the reviewer gets an AI report telling them whether you copied the solution from somewhere (expressed as a % probability).
You have a few minutes and you're on your own.
If you pass that abomination, maybe, you have in person ones.
It's ridiculous what software engineers impose on their peers when hiring; ffs, lawyers, surgeons, and civil engineers get NO practical nor theoretical test. None.
How does asking clarifying questions work when a non-programmer is tasked with performing the assessment, because their programmers are busy doing other things, or find it degrading and pointless?
Whenever constraint programming languages come up, you can’t miss mentioning Håkan Kjellerstrand. He’s put together an amazing collection of problems and examples—including plenty for MiniZinc—on his site: https://www.hakank.org/minizinc/
> Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?"
This completely undermines the author's main point. Constraint solvers don't solve hard leetcode problems if they can't solve large instances quickly enough.
Many hard leetcode problems can be solved fairly simply with more lax runtime requirements -- coming up with an efficient solution is a large part of the challenge.
> coming up with an efficient solution is a large part of the challenge
More of my work tends to be "rapidly adapting solutions to additional and changing requirements" than "coming up with the most efficient solution", so why are we interviewing for something where in practice we just throw a couple extra instances at it? (Your specific job role may vary, of course, but I usually just increase the scaling factor.)
Author's point is that coming up with the most efficient solution might not actually be a good measure of your real-world performance.
And that's been a longrunning critique of leetcode, of course. However, this is a neat framing where you can still use the same problems but give solutions that perform better when measured by "how adaptable is this to new requirements?"
A loonnngggg time ago when I was green, and wasn't taught about constraint solving in my State University compsci program, I encountered the problem when trying to help a friend with his idea.
He wanted to make an app to help sports club owners schedule players for the day based on a couple simple rules. I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know.
I often look back on that as a lesson of my own hubris. And it's helped me a lot when discussing estimates and timelines and expectations.
This might be a dumb question (as I'm not familiar with constraint solvers) but would a linear optimization approach be better? I've used linear optimization for scheduling in the past. The nice thing is that linear optimization handles rule conflicts well, because you just set weights on all your rules and the optimizer will find the "least bad" solution to the conflicts.
Well, if you're using MiniZinc you're free to use a CP solver, MIP solver, SAT solver, or CP-SAT-LP solver. In general the model is roughly the same, even though some formulations work better for some solvers than others.
But CP (and CP-SAT) solvers tend to do very well on scheduling problems
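To the weighted-rules question above: the usual CP-SAT pattern is to put soft rules into the objective rather than the constraints, so conflicting rules give a "least bad" schedule instead of no schedule. A minimal sketch with invented toy rules (using OR-Tools' cp_model):

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    DAYS, PLAYERS = 5, 4
    plays = {(p, d): model.NewBoolVar(f"p{p}_d{d}")
             for p in range(PLAYERS) for d in range(DAYS)}

    # Hard rule: exactly two players on court each day.
    for d in range(DAYS):
        model.Add(sum(plays[p, d] for p in range(PLAYERS)) == 2)

    # Hard rule: nobody plays two days in a row.
    for p in range(PLAYERS):
        for d in range(DAYS - 1):
            model.Add(plays[p, d] + plays[p, d + 1] <= 1)

    # Soft rule, weight 10: player 0 would rather skip day 0.
    # Violations are penalized in the objective instead of forbidden outright.
    model.Minimize(10 * plays[0, 0])

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for d in range(DAYS):
            print(d, [p for p in range(PLAYERS) if solver.Value(plays[p, d])])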
> I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know
This reminds me of high school ~25 years ago when I just started learning TI-Basic on my calculator and was dabbling in VB6 on my PC, and I was flipping burgers at Steak n Shake as my part time job. The manager moaned about how hard it was to write the employee schedules out each week (taking into account requested days off, etc) and I thought “ooh, I know how to write software now, I’ll make a scheduling program!” I told the manager I bet I could do it.
… it took a very short time for 16 year old me to realize writing scheduling software to solve for various constraints is pretty damned hard. I never brought it up after that.
> This was a question in a different interview (which I thankfully passed):
> Given a list of stock prices through the day, find maximum profit you can get by buying one stock and selling one stock later.
It was funny to see this, because I give that question in our interviews. If someone suggested a constraint solver... I don't know what I'd have done before reading this post (since I had only vaguely even heard of a constraint solver), but after reading it...
Yeah, I would still expect them to be able to produce a basic algorithm, but even if their solution was O(n^2) I would take it as a strong sign we should hire them, since I know there are several different use cases for our product that require generalized constraint solving (though I know it by other names) and having a diverse toolset on hand is more important in our domain than writing always-optimal code.
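For anyone wondering, the "basic algorithm" here is a one-pass scan - O(n) time, O(1) space. A sketch:

    def max_profit(prices):
        # Track the cheapest price seen so far and the best
        # sell-today profit against it; 0 if prices only fall.
        best, lowest = 0, float("inf")
        for p in prices:
            lowest = min(lowest, p)
            best = max(best, p - lowest)
        return best

    # max_profit([7, 1, 5, 3, 6, 4]) == 5  (buy at 1, sell at 6)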
Something that works poorly is often better than something that doesn't work in an instant. This is what I have to tell myself every time I step into a massive, excessively complex mess of a codebase. Many business rules aren't clearly defined ahead of time in a way that always translates well to the code and starting over is a mistake more often than not imo.
Update... refactor... update... break off... etc. A lot of times, I'm looking at something where the tooling is 8+ years old, and the first order of business should be to get it working on a current and fully patched version of whatever is in place... replacing libraries that are no longer supported, etc. From there, refactor what you can, break off what makes sense into something new, refactor again. This process, in my experience, has been far more successful than ground up, new versions.
I say this while actively working on a "new" version of a piece of software. The new version is web-based; the "old" version is a WinForms VB.Net app from over a decade ago. The old version has bespoke auth; the new version will rely on Azure Entra... Sometimes starting over is the answer, but definitely not always.
- Constraint solvers? That's a nice concept, I heard about this once. However, for the purposes of the interview, let's just write some Python code, I wanna see your way of thinking...
(I think it's almost impossible to convince your interviewer into constraint solvers, while the concept itself is great)
A long time ago, just for fun, I wrote a constraint solver program that could figure out which high-yield banks recommended on Doctor of Credit (https://www.doctorofcredit.com/high-interest-savings-to-get/) to put money into: given <= `X` money and <= `Y` # of transactions on debit cards, maximize the yield subject to other constraints (boolean and real valued).
I used it for a while when interest rates were really low, for my own rainy day savings (I did get tired of changing accounts all the time).
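The shape of the model, in case anyone wants to try something similar - a toy sketch with invented account data, using SciPy's HiGHS-backed milp, with real balances x alongside open/don't-open booleans b:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy accounts: (annual yield, balance cap, debit txns required/month).
    rates = np.array([0.05, 0.04, 0.03])
    caps  = np.array([2000, 5000, 25000])
    txns  = np.array([10, 15, 0])
    X, Y  = 10000, 20   # total money available, total txns tolerated

    n = len(rates)
    # Variables: n real balances x, then n booleans b (account opened).
    c = np.concatenate([-rates, np.zeros(n)])  # maximize yield = minimize -yield
    A = np.zeros((n + 2, 2 * n))
    A[:n, :n] = np.eye(n); A[:n, n:] = -np.diag(caps)  # x_i <= cap_i * b_i
    A[n, :n] = 1                                       # sum(x) <= X
    A[n + 1, n:] = txns                                # sum(txns * b) <= Y
    ub = np.concatenate([np.zeros(n), [X, Y]])

    res = milp(c,
               constraints=LinearConstraint(A, ub=ub),
               integrality=np.concatenate([np.zeros(n), np.ones(n)]),
               bounds=Bounds(0, np.concatenate([caps, np.ones(n)])))
    print("balances:", res.x[:n].round(2), "annual yield:", -res.fun)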
I find this post interesting independent of the question of whether leetcode problems are a good tool for interviews. It's: here are some kinds of problems constraint solvers are useful for. I can imagine a similar post about non-linear least squares solvers like Ceres.
Yeah, especially for learning how to use a solver!
> Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration.
He's exactly right about what tutorials are out there for constraint programming (I've messed around with it before, and it was pretty much Sudoku). Having a large body of existing problems to practice against is great.
SAT, SMT, and constraint solvers are criminally underutilized in the software industry. We need more education about what they are, how they work, and what sorts of problems they can solve.
At least personally, I've been very underwhelmed by their performance when I've tried using them. Usually past a few dozen variables or so is when I start hitting unacceptable exponential runtimes, especially for problem instances that are unsatisfiable or barely-satisfiable. Maybe their optimizations are well-suited for knapsack problems and other classic OR stuff, but if your problem doesn't fit the mold, then it's very hit-or-miss.
I'm surprised to hear this. Modern SAT solvers can easily handle many problems with hundreds of thousands of variables and clauses. Of course, there are adversarial problems where CDCL solvers fail, but I would be fascinated if you can find industrial (e.g. human written for a specific purpose) formulas with "dozens of variables" that a solver can't solve fairly quickly.
I've worked on a model with thousands of variables and hundreds of thousands of parameters with a hundred constraints. There are pitfalls you need to avoid, like reification, but it's definitely doable.
Of course, NP-hard problems blow up at an exponential rate, but that doesn't change if you use another exact solving technique.
Using local search is very useful for scaling, but at the cost of proven optimality.
I think this hits the nail on the head: performance is the obstacle, and you can't get good performance without some modeling expertise, which most people don't have.
I wish I knew better how to use them for these coding problems, because I agree with GP they're underutilized.
But I think if you have a constraint problem that has an efficient algorithm but chokes a general constraint solver, that should be treated as a bug in the solver. It means that the solver uses bad heuristics somewhere.
Well, they aren’t magic. You have to use them correctly and apply them to problems that match how they work. Proving something is unsat is worst case NP. These solvers don’t change that.
SAT solvers are used daily to generate solutions for problems that have literally millions of variables. So, what you said is just wrong on its face. Yes, some talented people can write custom code that solves specific problems faster than a general purpose solver, particularly for easy special cases of the general problem, but most of the time that results in the programmer recreating the guts of a solver customized to a specific problem. There's sort of a corollary of Greenspun's Tenth Rule that every sufficiently complicated program also contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a SAT or SMT solver.
> The real advantage of solvers, though, is how well they handle new constraints.
Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.
Agreed - adaptability to changing requirements is a very strong advantage for real-world work.
The first attempt to formalise some practical problem into a mathematical optimisation problem is often not quite right. You discover new things that change the problem statement after reviewing example solutions with experts who exclaim "that solution couldn't possibly work, because <discovery of additional requirements>".
A neat dynamic programming solution is a glorious thing, but dynamic programming relies on a mathematical problem with some elegant recursive substructure that you can exploit. Sometimes such elegant recursive substructure is going to be broken by additional requirements or side constraints that you discover during the project -- so the real valuable problem you actually need to solve has no such substructure, and you need to throw out your dynamic programming approach and go back to a blank sheet of paper to design a viable attack on the real valuable messy problem.
For lower-risk delivery of useful outcomes, it probably makes the most sense to use general purpose flexible black box solvers like constraint or MIP solvers early in the project, and focus efforts on rapidly discovering and extracting all the requirements, so you zero in on the real problem quicker. You don't want to prematurely invest a lot of time and effort building a highly efficient bespoke elegant solution to the wrong problem, then discover a month before a delivery deadline that everything needs to be thrown out.
Here's an easy ad-hoc Prolog program for the first problem:
% Given a set of coin denominations,
% find the minimum number of coins
% required to make change.
% IE for USA coinage and 37 cents,
% the minimum number is four
% (quarter, dime, 2 pennies).
num(0). num(1). num(2).
num(3). num(4). num(5).
?- num(Q), num(D), num(P),
   37 is Q * 25 + D * 10 + P.
You can just paste it into [1] to execute in the browser. Using 60 as target sum is more interesting as you can enumerate over two solutions.
(Posting again what I already posted two days ago [2] here)
Of course, the challenge is that the next question after solving a leetcode problem is often to explain and optimize the performance characteristics, which in Prolog can get stupidly hairy.
I've actually used pseudo-prolog to explain how to solve leetcode problems to a friend. Write the facts, then write the constraints, and then state your problem. Close to the last part, they've already understood how to solve it, or at least how to write the program that can answer the question.
I agree with the other comments here that using a constraint solver defeats the purpose of the interview. But this seems like a good case for learning how to use a constraint solver! Instead of spending hours coding a custom solution to a tricky problem, you could use a constraint solver at first and only write a custom solution if it turns out to be a bottleneck.
Here’s my empirical evidence based on several recent “coding session” interviews with a variety of software companies. Background: I have been developing software for over 30 years, I hold a few patents, I’ve had a handful of modestly successful exits. I kind of know a little bit about what I am doing. At this stage in my career, I am no longer interested in the super early stage startup lifestyle, I’m looking at IC/staff engineer type roles.
The mature, state-of-the-art software companies do not give me leetcode problems to solve. They give me interesting & challenging problems that force me to both a) apply best practices of varying kinds and yet b) be creative in some aspects of the solution. And these problems are very amenable to “talking through” what I’m doing, how I’m approaching the solution, etc. Overall, I feel like they are effective and give the company a good sense of how I develop software as an engineer. I have yet to “fail” one of these.
It is the smaller, less mature companies that give me stupid leetcode problems. These companies usually bluntly tell me their monolithic codebase (always in a not-statically-typed language), is a total mess and they are “working on domain boundaries”.
I fail about 50% of these leetcode things because I don’t know the one “trick” to yield the right answer. As a seasoned developer, I often push back on the framing and tell them how I would do a better solution by changing one of the constraints, where the change would actually better match the real world problem they’re modeling.
And they don’t seem to care at all. I wonder if they realize that their bullshit interviewing process has both a false positive and a false negative problem.
The false negatives exclude folks like myself who could actually help to improve their codebase with proper, incremental refactorings.
The false positives are the people who have memorized all the leetcode problems. They are hired and write more shitty monolithic hairball code.
Their interviewing process reinforces the shittiness of their codebase. It’s a spiral they might never get out of.
The next time I get one of these, I think I’m going to YOLO it, pull the ripcord early and politely tell them why they’re fucked.
Yes, it is a death spiral; if you are to lead them, you have to know what to fix when, to avoid making things worse.
The solution is typically not just to fix their code. They got in over their heads by charging ahead and building something they'll regret, but their culture (and likely the interviewer's personal self-regard) depends on believing in their current tech leaders.
So yes, the interviewer is most comfortable if you chase and find the ball they're hiding.
But the leadership question is whether you can relieve them of their ignorance without also stripping their dignity and future prospects.
I've found (mostly with luck) that they often have a sneaking suspicion that something isn't right, but didn't have the tools or pull to isolate and address it. As a leader if you can elicit that, and then show some strategies for doing so, you'll improve them and the code in a way that encourages them that what was hard to them is solvable with you, which helps them rely on you for other knotty problems.
It's not really that you only live once; it's that this opportunity is here now and should have your full attention, and to be a leader you have to address it directly but from everyone's perspective.
Even if you find you'd never want to work with them, you'd still want to leave them feeling clearer about their code and situation.
Clarifying my "YOLO" usage: I was being a little flippant, in the sense that when ending an interview early with direct critical feedback, the most likely outcome is a "burned bridge" with that company (you're never coming back).
Which reminds me one of my favorite twisted idioms: We'll burn that bridge when we get to it!
I guess I've finally found an acceptable real-world use case for this phrase :)
There is something to be said for being senior in a way where the people interviewing you are junior enough that they don't have the experience to "click" with the nuance that comes with said problems.
That being said, from a stoicism point of view, the interview ends up becoming a meta-challenge on how you approach a problem that is not necessarily appropriately framed, and how you'd go about doing and/or gently correcting things as well.
And if they're not able to appreciate it, then success! You have found that it is not the right organization for you. No need to burn the door down on the way out, just feel relief in that you dodged a bullet (hopefully).
MiniZinc is a really great modeling language for constraint programming. Back in August I gave a talk at NordConstNet25 on how we used it to build a product configurator in what's (probably) the world's largest MiniZinc model.
As an interviewer, I gave one pretty simple task (people solved it in as little as 8 minutes) that didn't use any real CS, even though I'm good at it.
The reason was that about 70% of candidates couldn't write a simple loop -- the task was there to filter those out. The actual solution didn't matter much; it gave me a binary decision. The actual conversation matters more.
This. Main point of giving candidates CS problems was always to weed out those who couldn't program at all, but somehow were still in the industry. I worked with such people - it's unpleasant.
Somehow someone figured that giving harder problems should result in better candidates. Personally, despite having passed most of the tests I've been subjected to, I don't see the connection.
My beef with someone using a constraint solver here is that they almost certainly wouldn't be able to guarantee anything about their solution other than that, if it produces an output, it will be correct. They won't be able to guarantee running time, space usage, or (probably for most tools) even a useful progress indicator. The problem isn't merely that they used another tool - the problem is that they abstracted away critical details. Had they provided a handwritten solution from scratch with the same characteristics, it would've exhibited the same problems.
This doesn't mean they can't provide a constraint solver solution, but if they do, they'd better be prepared to address the obvious follow-ups. If they're prepared to give an efficient solution afterward in the time left, then more power to them.
Everyone misunderstands what LC focuses on. It focuses on - did you grind like everyone else that did to get into this company/region/tech? It allows for people who didn't go to the most specific schools (e.g. Cal, Stanford, etc.) to still get into silicon valley companies if they show they are willing to fit the mold. It's about showing you are a conformist and are willing to work very hard to do something that you won't realistically use much in your day to day job.
It's about signaling. That's all it is. At least it's not finance where it's all dictated by if you were born into the right family that got you into the elite boarding schools for high school, etc. I would've never made it into finance unless I did a math phd and became a quant.
> The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.
Really? This kind of interview needs to go away.
However, coding interviews are useful. It's just that "knowing the trick" shouldn't be the point. The point is whether the candidate knows how to code (without AI), can explain themselves and walk through the problem, explain their thought processes, etc. If they do a good enough reasoning job but fail to solve the problem (they run out of time, or they go on an interesting tangent that ultimately proves fruitless) it's still a "passed the test" situation for me.
Failure would mean: "cannot code anything at all, not even a suboptimal solution. Cannot reason about the problem at all. Cannot describe a single pitfall. When told about a pitfall, doesn't understand it nor its implications. Cannot communicate their thoughts."
I agree with this approach. With the exception of testing for specific domain knowledge relevant to the work role, the coding interview should just be about testing the applicant's problem-solving skills and grasp of their language of choice. I would even prefer a take-home style problem that we can review in-person over some high-pressure puzzle. The leetcode interview doesn't seem to correspond to anything a developer actually does day to day.
The bar is so high nowadays that simply being able to talk intelligently about the problem, ask clarifying questions, get an inefficient solution, and code it up well does not pass muster.
Even getting an efficient algorithm basically right is no guarantee.
In some cases there might be alternative solutions with different tradeoffs, and you might have to come up with those as well.
Miss a counterexample? Even if you get it after a single hint? Fuck you, you're out. I can find someone who doesn't need the hint.
No. They use sophisticated algorithms called propagators to prune invalid values from the domains of possible solutions, in conjunction with a search strategy like branch and bound.
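To make that concrete, here's a toy sketch (my own, not any real solver's internals) of the two ideas applied to the article's coin problem: "propagation" shrinks each count's domain to what can still fit, and branch and bound abandons any branch that can't beat the best solution found so far.

    def min_coins(target, denoms, used=0, best=float("inf")):
        # Toy propagate-and-branch search for minimum-coin change.
        if target == 0:
            return min(used, best)
        if not denoms:
            return best                              # infeasible branch
        d, rest = denoms[0], denoms[1:]
        # propagation: this coin's count can only be 0..target // d
        for count in range(target // d, -1, -1):
            if used + count >= best:
                continue                             # bound: cannot improve incumbent
            best = min_coins(target - count * d, rest, used + count, best)
        return best

    print(min_coins(37, [10, 9, 1]))                 # 4 (one 10, three 9s)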
Been working on a calendar scheduling app that uses a constraint solver to auto schedule events based on scheduling constraints (time of day preferences and requirements, recurrence rules), and track goal progress (are you slipping on your desired progress velocity? Get a notification). It’s also a meal planner: from a corpus of thousands of good, healthy recipes, schedule a meal plan that reuses ingredients nearing expiration, models your pantry, estimates grocery prices, meets your nutritional goals. Constraint solvers are black magic.
Constraint programming complements ML/AI/LLM nicely in systems solving real-world problems [1],[2],[3],[4]. One of the highest-stakes and hardest engineering problems, namely wireless spectrum frequency allocation, was solved by constraint programming and presented in [1]. Now anyone can try to solve the problem in a MiniZinc tutorial. Fun fact: there's also Knuth in the presentation audience trying to get some info on the subject matter, perhaps for his next book.
[1] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:
Terrible question for an interview, and further highlights how our interviews are broken.
Greedy algorithms tell you nearly nothing about the candidate's ability to code. What are you going to see? A single loop, some comparison and an equality. Nearly every single solution that can be solved with a greedy algorithm is largely a math problem disguised as programming. The entire question hinges on the candidate finding the right comparison to conduct.
The author himself finds that these are largely math problems:
> Lots of similar interview questions are this kind of mathematical optimization problem
So we're not optimizing to find good coders, we're optimizing to find mathematicians who have 5 minutes of coding experience.
At the risk of self-promotion, I'm fairly opinionated on this subject. I have a podcast episode where I discuss exactly this problem (including discuss greedy algorithms), and make some suggestions where we could go as an industry to avoid these kind of bad-signal interviews:
This is how I prefer to interview. I don’t understand the mindset of LeetCode interviewers. It’s a weak signal because it’s easily gamed (false positives), and it misses too many strong candidates who have better things to do in their spare time (false negatives, and a bias towards one type of candidate -> lack of diversity in experience).
I've always maintained that solving LeetCode is more about finding the hidden "trick" that makes the solution, if not easy, one that is already "solved" in the general sense. Look at the problem long enough to realize "oh, that's a sliding window problem" or some such known solution, and do that.
I've never heard of a "dynamic programming algorithm". I read Wikipedia and it seems to mean... use a recursive function? The coin problem is an easy recursive problem (I just wrote the code for it to make sure my old brain can still do it).
It's usually covered in a first or second year algorithms course. It's a recursive problem definition paired with tabling to eliminate redundant work. Critically, the recursive subproblems have to be overlapping (they'll do some of the same work as the other recursive steps) to see any benefit. You can implement it top-down and add a cache (memoization) or you can implement it bottom-up and fill out the table iteratively rather than through recursion.
If you just implement it recursively without tabling then you end up re-doing work and it's often an exponential runtime instead of polynomial.
To clarify on overlapping, consider Fibonacci:
F(n) = F(n-1) + F(n-2) # and the base cases
F(n-1) includes F(n-2) in its definition, and both F(n-2) and F(n-1) include F(n-3). If you implement this naively it produces an exponential runtime. Once you add the table, the single initial recursive call to F(n-1) will end up, through its chain of calls, storing the result of F(n-2) and now the implementation is linear instead of exponential.
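A minimal memoized sketch in Python, using the standard library's cache decorator, shows that collapse:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # each F(k) is computed once and cached, so the call tree
        # collapses from exponential to linear
        if n < 2:
            return n                     # base cases: F(0) = 0, F(1) = 1
        return fib(n - 1) + fib(n - 2)

    print(fib(90))                       # instant; the naive version would effectively never finish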
> Read wikipedia and it seems to mean....use a recursive function?
Yes, that's one (common) approach to dynamic programming. The recursive function calls are memoized so that previous calculations are remembered for future calls. Overlapping subproblems become trivial if you can reuse previously computed values. Recursion with memoization is top-down dynamic programming.
My Leetcode ability is unpredictable. I either ace the test or I can't finish it in time. The only way to make outcomes predictable is practice, but I have too much agency and not enough time for that.
Leetcode requires a very different set of skills from software engineering. Software engineering isn't so much about solving puzzles as it is about making good decisions. It's about knowing what's important and knowing where the boundaries are. It's about anticipating problems in their broadest form; creating just the right amount of flexibility and allowing the solution to solidify as your understanding of the problem deepens.
I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9).
That's a bad algorithm, then, not a greedy algorithm. Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
If a candidate's only options are to either use a constraint solver or to implement a naïver-than-usual greedy algorithm, well, sorry, but that's a no-hire.
The algorithm they're using must be "Until you hit the limit, take the highest denomination coin that fits beneath the limit. If you can't hit the limit, fall back one step."
That fits your definition of "use as many coins as possible of a given large denomination before dropping back to the next-lower denomination" but will find 10-10-10-1-1-1-1-1-1-1 and stop before it even tries 10-9-anything.
> Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
Yes, and it won't work on the problem described. The greedy algorithm only works on certain sets of coins (US coin denominations are one of those sets), and fails in at least some cases with other coin sets (as illustrated in the bit you quoted).
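For anyone who wants to see the failure concretely, here's a small sketch (function names are mine, purely for illustration) comparing greedy against a bottom-up table on the [10, 9, 1] example:

    def greedy(target, denoms):
        count = 0
        for d in sorted(denoms, reverse=True):
            count += target // d         # take as many of the big coin as fit
            target %= d
        return count if target == 0 else None

    def optimal(target, denoms):
        INF = float("inf")
        table = [0] + [INF] * target     # table[a] = fewest coins summing to a
        for a in range(1, target + 1):
            table[a] = 1 + min((table[a - d] for d in denoms if d <= a),
                               default=INF)
        return table[target] if table[target] < INF else None

    print(greedy(37, [10, 9, 1]))        # 10 (three 10s, seven 1s)
    print(optimal(37, [10, 9, 1]))       # 4  (one 10, three 9s)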
An interesting meta-problem is to determine an antagonistic set of denominations, like the [10, 9, 1] example given in the post, that maximizes the number of coins selected by the greedy method.
Most leetcode problems fall into the same ~15 patterns, and hard problems most of the time require you to use a combination of two patterns to solve them.
> Given an array of integers heights representing the histogram's bar height where the width of each bar is 1, return the area of the largest rectangle in the histogram.
Maybe it's my graphics programmer brain firing on all cylinders, but isn't this just a linear scan, maintaining a list of open rectangles?
Yes, you just need to maintain a stack of rectangles ordered from lowest to highest. You only ever have to push and pop the top of the stack, so the runtime is O(n).
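A sketch of that linear scan (with a sentinel bar of height 0 appended to flush the stack at the end):

    def largest_rectangle(heights):
        # stack holds (start_index, height) pairs, ordered lowest to highest
        stack, best = [], 0
        for i, h in enumerate(heights + [0]):        # sentinel flushes the stack
            start = i
            while stack and stack[-1][1] >= h:
                start, height = stack.pop()          # close the taller rectangle
                best = max(best, height * (i - start))
            stack.append((start, h))                 # reopen at the popped start
        return best

    print(largest_rectangle([2, 1, 5, 6, 2, 3]))     # 10 (the 5 and 6 bars)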
All problems cited are about testing if you can write if's, loops and recursion (or a stack/queue).
They aren't testing if you can write a solver. They are testing if you can use the bricks that solvers are built out of, because other software, when it gets interesting, is built out of the same stuff.
It's insane how many of these new "AI" companies don't let you use AI or even your own IDE for coding interviews. And most questions from such companies are LC-type problems, so they know any AI tool can one-shot them.
I discourage it, but I let them use it, and then give them a specific problem that I know your average Claude 4 or GPT 5 will just not get right.
Actually, people perform worse in an interview using AI, because they spend time trying to understand what the tool is proposing and then time figuring out why that doesn't work.
My experience has been quite different. With Cursor/Claude Code, I've ended up writing full-fledged solutions (running CLI/web servers with loggers and unit tests for each functionality). We're talking crawlers, a cab-booking service like Uber, search engines with seed data. All within the hour.
Definitely not insane. Ironic is the correct term. The field is evolving, a lot of these companies talk about replacing outdated practices using AI. Asking software engineers to not use their own tools to solve problems falls under the same bucket.
I tried, a couple of times long ago, to solve them with CP/integer programming.
The interviewers were clueless, so after 10 minutes of trying to explain it to them I quit and fell back to just writing the freaking algo they were expecting to see.
The first example doesn't make a lot of sense, because coins only ever exist in "well-behaved" denominations. There is no currency with a nine-unit coin.
It would have been worthwhile if this article had briefly touched upon how constraint solvers are implemented, rather than avoiding this altogether.
I think LeetCode tests two things. First, your willingness to grind to pass an exam, which is actually a good proxy for some qualities you need to thrive in a corporate environment: work is often grungy and you need to push through without getting distracted or discouraged.
Second, it's a covert test for culture fit. Are you young (and thus still OK with grinding for tests)? Are you following industry trends? Are you in tune with the Silicon Valley culture? For the most part, a terrible thing to test, but also something that a lot of "young and dynamic" companies want to select for without saying so publicly. An AI startup doesn't want people who have family life and want to take 6 weeks off in the summer. You can't put that in a job req, but you can come up with a test regime that drives such people away.
It has very little to do with testing the skills you need for the job, because quite frankly, probably fewer than 1% of the SWE workforce is solving theoretical CS problems for a living. Even if that's you, that task is more about knowing where to look for answers or what experiments to try, rather than being able to rattle off some obscure algorithm.
One level of nested for loop for each type of coin. (Run them until i*coin is larger than the input)
Populate a 2d lookup array. $7,50 becomes arr[750] = [7,1,0,0,0,0] which represents [7x100,1x50,0x25,0x10,0x5,0x1]
With each loop, check if the array entry exists; if so, check whether the new combination uses fewer coins. [7,1,0...] is better than [7,0,2...] because 8 coins is a better solution than 9!
Reminder that the research says the interview process should match the day to day expectations as closely as possible, even to a trial day/week/month. All these brain teasers are low on signal, not to mention bad for women and minorities.
You're right to point that out, thanks for keeping things honest. My intention was simply to offer a helpful TL;DR for a very long thread, as it's a feature of a project I'm working on (mentioned in my profile).
I'm trying to contribute value without being spammy. If this crosses a line for the community, I'm happy to listen and adjust.
It seems like a moot point, since there are already more people who can clear leetcode hard than there are available positions that require it.
It’s like scoring more than 2 standard deviations above the mean on an IQ test: great, but by definition millions of people can do that.
Edit: I’ve heard HFT firms are now moving to doing it on paper in person to prevent any kind of cheating for their interviews, which would make it a better signal.
Yes, that is better.
From my own experience, LeetCode does indeed enhance one's logical thinking and coding skills. I believe this is the reason why FAANG has always tested algorithm and data structure skills in interviews over the years.
In this case, a 25 line function to solve a constraint would likely take me far less time than grokking an external library's documentation, but in many cases, I feel developers jump on external dependencies because time is of the essence.
hard disagree:
Given ANY problem, first ask yourself: have others solved this before? That is a hard question to answer, since we don't know in which context a similar puzzle was solved before (structures in different domains can boil down to the same mathematical puzzle). A literature search would be very time-costly.
The most important point is the flexibility in being able to change the puzzle (a small change in the puzzle can result in a large change in the type of solution), as the author of the article points out. The bespoke algorithm is brittle. Description of the problem itself is less brittle (you can reuse most of the problem statement).
It may sound incredibly expensive to pull in a constraint solver, but if the application warrants constraint-solver-quality results, it should afford either the dependency, or the data structures and solvers used in the solver dependency, to optimize for the application.
It's just bizarre to ask people to beat the feats of those standing on the shoulders of giants, without allowing them to stand on the shoulders of giants too.
Think about why one is recruiting employees with data structure skills (so more than just information plumbers). Is it really so strange that the most qualified people understand the reality that the state of the art is constantly changing, but understand at least the basics of how these solvers work internally?
Viewed through this lens, the ideal job candidates are those who design, implement and maintain... constraint solvers! Assuming familiarity with the constraint-solver codebase, they could profile the solver package while it's solving the puzzle. Do this on many instances of the puzzle, and keep track of the dead ends and optimal solutions, to figure out which functionality can be ripped out of the solver, and which must be kept.
So in order of preference:
1. Programmers or mathematicians (or equivalent, think physicists, etc.) familiar with 1 or more constraint solver source code bases.
2. End-users of a constraint solver package, with sufficient familiarity (as a user) with constraint solvers. WITH data structure and algorithms experience.
3. End-users of a constraint solver package as above, but without data structure and algorithms knowledge.
4. People with data structure and algorithms knowledge.
If 1. is too expensive you'll have to combine skills over multiple hires.
If industry is really interested in good profiles for these criteria, it should sponsor universities / students to get familiar with constraint solver usage, and if possible constraint solver development and contributions. After a few years the candidates will pop into existence.
> It may sound incredibly expensive to pull in a constraint solver
Like this is actually kinda the point of the article. On every constraint programming article half the comments insist that they could do the (example) problem in a dozen lines of C so bringing in SCIP or OR-Tools is "too much" for the (example) problem. Wayne's point (here and in other articles he's written) is that actually constraint problems are fucking everywhere.
This is one of the real lessons of learning Prolog: data queries, scheduling, SAT, LP, integer programming, optimization... etc; it's all constraint programming, hell parsing is a constraint problem if you do it right. No one sees it like that though so they see their entire business application logic as containing at most one knapsack.
I feel like if I'm being asked this in an interview, they're not asking me to use a constraint solver, they're asking me to _write_ a constraint solver. Just for a specific constraint problem, not a more general constraint solver.
You're right, but that just shows how fundamentally silly this interview approach is.
In any real engineering situation I can solve 100% of these problems. That's because I can get a cup of coffee, read some papers, look in a textbook, go for a walk somewhere green and think hard about it... and yes, use tooling like a constraint solver. Or an LLM, which knows all these algorithms off by heart!
In an interview, I could solve 0% of these problems, because my brain just doesn't work that way. Or at least, that's my expectation: I've never actually considered working somewhere that does leetcode interviews.
I was told to use ANY language in an interview. I asked them if they were sure, so I solved it with J. They were not too pleased and asked me if I could use another language, so I did prolog and we moved on to the next question. Then the idiot had the audacity to say I should not use "J and Prolog" but any common known language. I asked if assembly was fine, and they said no. Perhaps python or javascript. I did the rest in python, needless to say I didn't get the job. :-)
I haven't been asked leetcode questions in a while and when I was asked, it was an easy level problem. I don't know where they ask hard leetcode problems, I also never solved a hard leetcode problem on my own.
More exactly, you can't invent on the spot algorithms that took others who knows how many years to invent. I.e. the question ends up being about whether you know a specific algorithm, which turns "solve this" into "invent it if you don't know it". It's absolutely silly to test for the ability to invent one on the spot, so it's a pretty pointless interview question really.
> Or at least, that's my expectation: I've never actually considered working somewhere that does leetcode interviews.
Hrm. So what you're saying is you've never actually taken or given this style of interview. Nor presumably ever worked at a company that did this interview. So if on the off-chance these interviews actually were a somewhat successful tool for filtering candidates you wouldn't actually know it?
That feels like a miss.
Depends on your experience and what you’re interviewing for. At a high enough level, the questions are pulled from the easier side, and the interviewer doesn’t want you to fail.
If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.
Do you know how few people in this world even know what a constraint solver is, let alone how to correctly define the problem into one?
I used a constraint solver to solve a homework problem once in my CS degree 3rd year. My god just writing the damn constraints was a huge cognitive load!
I did this: wrote an Essence-prime program to generate Minion solver code for a simple instance of the knapsack problem, as part of a startup's "solve one of these and get an interview" challenges. Because I had used those tools recently for a contract job (and wrote/presented a paper at the invitation of the solver authors), I thought it would be fun, and I didn't really want the job. Got an interview, but every dev was like "why did you use a cannon to swat a fly?" and was clearly concerned that without strict supervision I would create baroque towers of garbage for them to clean up.
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.
I do hope you're exaggerating here, but in case you aren't: this is an extremely simplistic view of what (software) engineers have to do, and thus what hiring managers should optimize for. I'd put "ability to work in a team" above "raw academic/reasoning ability" for the vast majority of engineering roles, any day.
Not that the latter doesn't matter, of course, but it's by no means the one and only measure.
I've won a couple of hackathons with just CP-SAT & linear programming, which led to my first jobs. I'm surprised more people don't know/use it. Very inefficient compared to the "correct" answer, but the development speed is much faster.
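I can't speak for what those hackathon models looked like, but for a sense of scale, the article's coin problem is a handful of lines in CP-SAT (a sketch assuming the ortools package is installed):

    from ortools.sat.python import cp_model

    # Minimize the number of coins from [10, 9, 1] summing to 37.
    model = cp_model.CpModel()
    denoms = [10, 9, 1]
    counts = [model.NewIntVar(0, 37 // d, f"c{d}") for d in denoms]
    model.Add(sum(c * d for c, d in zip(counts, denoms)) == 37)
    model.Minimize(sum(counts))

    solver = cp_model.CpSolver()
    solver.Solve(model)
    print([solver.Value(c) for c in counts])   # [1, 3, 0]: one 10, three 9s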
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot
Sometimes you just don't want someone that takes these shortcuts. I think being able to solve the problem without a constraint solver is much more impressive
This - the only downside to a constraint solver is it's usually slower. If you want them to write a fast algorithm, then specify that. Have an actual metric for it, if they can pass it with the declarative language, then great. If not, they should have written a more complicated algorithm.
You might be interested in trying out JuMP.jl. It's a Julia package that abstracts over constraint solvers and can do some very complex reformulations automatically, taking the declarative definition of your problem and turning it into the types of constraints that the solver you're using supports.
Yes and no: I've asked questions like this in interviews, and I'd count it as a plus if the candidate reached for a constraint solver. They're criminally underused in real-world software engineering and this would show the candidate probably knows how to get the right answer faster instead of wasting a bunch of time.
Now, if they did answer with a constraint solver, I'd probably ask some followup whiteboard questions to make sure they do actually know how to code. But just giving a constraint solver as an answer definitely wouldn't be bad.
Yes, especially if the interviewee said something like 'this may not be asymptotically optimal, but if it's not a known bottleneck, then I might start with a constraint solver to get something working quickly and then profile later.' Especially if it's a case where even the brute-force solution is tricky.
Otherwise penalizing interviewees for suggesting quick-and-dirty solutions reinforces bad habits. "Premature optimization is the root of all evil," after all.
It’d be a positive in my book if they used a constraint solver.
A general constraint solver would be terribly inefficient for problems like these. It's a linear problem, and a constraint solver just can't handle O(10^6) variables without a beefy machine.
This will be true in some interviews, but not in all.
I'm generally against using leetcode in interviews, but wherever I've seen it used it's usually for one reason & one reason alone: known dysfunctional hiring processes. These are processes where the participants in the hiring process are aware of the dysfunction in their process but are either powerless or - more often - too disorganised to properly reform the process.
Sometimes this is semi-technical director level staff leveraging HR to "standardise" interview techniques by asking the same questions across a wide range of teams within a large corp. Other times this is a small underresourced team cobbling together interview questions from online resources in a hurry, not having the cycles to write a tailored process for themselves.
In these cases, you're very likely to be dealing with a technical interviewer who is not an advocate of leetcode interviewing & is attempting to "look around" the standardised interview scoring approach to identify innovative stand out candidates. In a lot of cases I'd hazard even displaying an interest in / some knowledge of solvers would count significantly in your favour.
This. Literally every problem in NP can be cast as a constraint problem. Whether a solver is the right solution varies a lot depending on the application, and in an interview, it's almost by definition not the right solution.
They can also be dreadfully slow (and typically are) compared to just a simple dynamic program.
LeetCode problems typically are in P. The challenge is finding out why.
Great insight. But this is sadly not applicable to interviews.
> It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem
This nails it. The point of these problems is to test your cleverness. That's it. Presenting a not-clever solution of using constraint solvers shows that you have experience and your breadth of knowledge is great. It doesn't show any cleverness.
>The point of these problems is to test your cleverness.
In my experience, interviewers love going to the Leetcode "Top Interview 150" list and using problems in the "Array String" category. I'm not a fan of these problems for the kind of jobs I've interviewed for (backend Python mostly), as they are almost always a "give me a O(n) runtime O(1) memory algorithm over this array" type challenge that really doesn't resemble my day to day work at all. I do not regularly do in-place array algorithms in Python because those problems are almost always handled by other languages (C, Rust, etc.) where performance is critical.
I wish interviewers would go to the "Hashmap" section for interviews in Python, JavaScript, etc., type of languages. They are much less about cleverness and more about whether you can demonstrate using the appropriate tools in your language to solve problems that actually do resemble ones I encounter regularly.
There's also the problem of difficulty tuning on some of these. Problem 169 (Majority Element) being rated "Easy" for getting a O(n) runtime O(1) memory solution is hilarious to me. The algorithm first described in 1981 that does it (Boyer–Moore majority vote algorithm) has a Wikipedia page. It's not a difficult to implement or understand algorithm, but its correctness is not obvious until you think about it a bit, at which point you're at sufficient "cleverness" to get a Wikipedia page about an algorithm named after you. Seems excessive for an "Easy" problem.
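For reference, the vote itself is only a few lines; the cleverness is entirely in seeing why it's correct:

    def majority_element(nums):
        # Boyer-Moore majority vote: O(n) time, O(1) space.
        # A true majority survives pairwise cancellation against
        # all the other elements combined.
        candidate, count = None, 0
        for x in nums:
            if count == 0:
                candidate = x
            count += 1 if x == candidate else -1
        return candidate       # only guaranteed correct if a majority exists

    print(majority_element([2, 2, 1, 1, 1, 2, 2]))   # 2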
Interviews should not be about cleverness. They should test that you can code. I almost never write an algorithm because all the important algorithms are in my standard library already. Sure back in school I did implement a red-black tree - I don't remember if it worked, but I implemented it: I can do that again if you need me to, but it will take me several days to get all the details right (most of it looking up how it works again). I use red-black trees all the time, but they are in the language.
You need to make sure a candidate can program, so asking programming questions makes sense. However, the candidate should not be judged on whether they finish or get an optimal, or even correct, answer. You need to know whether they write good code that you can understand, and whether, given a reasonable amount of time on a realistic story, they would get it finished and correct. If someone has seen the problem before they may get the correct answer, but if they have not, they won't know it and shouldn't be expected to get the right answer in an hour.
Majority Element is rated easy because it can be trivially solved with a hashmap in O(N) space and that's enough to pass the question on Leetcode. The O(1) space answer is probably more like a medium.
Honestly, in day-to-day programming I find data types & associated APIs are so, so much more important than algorithms.
I would rather work with a flexible data type with suboptimal performance than a brittle data type that maybe squeezes out some extra performance.
Your example of in-place array mutation feels like a good example of such a thing. I feel like there should be a category of interviewing questions for "code-safety" not just performance.
> The point of these problems is to test your cleverness.
Last round I did at Meta it was clearly to test that you grinded their specific set of problems, over and over again, until you could reproduce them without thinking. It's clear because the interviewers are always a bit surprised when you answer with whatever is not the text-book approach on both leetcode and on the interview guide they studied.
Cleverness is definitely not high on the list of things they're looking for.
Interviewers are typically surprised when you do something new because they reuse questions with a lot of people and eventually you end up seeing most variations of a solution.
Cheekily using counting sort ended things the one and only time I agreed to interview with Meta. Definitely improved my inbox for a couple years though.
Bottom up dynamic programming algorithms require some cleverness.
All of the ones listed can be solved with a top-down dynamic programming algorithm. Which just means "write the recursive solution, add caching to memoize it".
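In Python that recipe is nearly mechanical. A sketch for the coin problem (denominations pinned as a constant so the cache key is just the amount):

    from functools import lru_cache

    DENOMS = (10, 9, 1)

    @lru_cache(maxsize=None)
    def fewest(amount):
        # recursive solution + cache = top-down dynamic programming
        if amount == 0:
            return 0
        return 1 + min(fewest(amount - d) for d in DENOMS if d <= amount)

    print(fewest(37))          # 4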
For some of these, you can get cleverer. For example the coin change problem is better solved with an A* search.
Still, very few programmers will actually need these algorithms. The top thing we need is to recognize when we accidentally wrote a quadratic algorithm. A quick scan of https://accidentallyquadratic.tumblr.com/ shows that even good people on prominent projects make that mistake on a constant basis. So apparently being able to produce an algorithm on the test, doesn't translate to catching an algorithmic mistake in the wild.
For the love of me, I still can't consistently solve dynamic programming problems, because "write a clever brute-force solution whose results can be cached" is so broad that there are tons of variations out there, and a slight twist can throw you off fast.
When I interview with problem solving problems, the point is to understand how the candidate thinks, communicates, and decomposes problems. Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Interviewers learn nothing from an instant epiphany, and they learn next to nothing from someone being stumped.
Unfortunately, this is why we can't have nice things. Problem solving questions in interviews can be immensely useful tools that, sadly, are rarely usefully used.
> the point is to understand how the candidate thinks, communicates, and decomposes problems.
100% and it's a shame that over time this has become completely lost knowledge, on both sides of the interview table, and "leetcode" is now seen as an arbitrary rote memorization hurdle/hazing ritual that software engineers have to pass to enter a lucrative FAANG career. Interviewees grind problems until they've memorized every question in the FAANG interview bank, and FAANG interviewers will watch a candidate spit out regurgitated code on a whiteboard in silence, shrug, and say "yep, they used the optimal dynamic programming solution, they pass."
Would a good answer be "I can do it as a constraint problem, but since I guess you are not asking for this, the solution is..." and then proceed as usual?
> the point is to understand how the candidate thinks, communicates, and decomposes problems
Interviewers always say this, but consider: would you endorse a candidate who ultimately is unable to solve the problem you've presented them, even if they think, communicate, and decompose problems well? No interview in this industry prizes those things over getting the answer right.
> Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Absolutely agree. When I interview, I start with a simple problem and add complexity as they go. Can they write X? Can they combine it with Y? Do they understand how Z is related?
>The point of these problems is to test your cleverness.
No, it's just memorization of 12 or so specific patterns. The stakes are too high, so virtually no one going in will stake passing on their own inherent problem-solving ability. LeetCode has been so thoroughly gamified that it has lost all power to differentiate beyond willingness to prepare.
Given this consider that LeetCode solving is rarely ever part of your work. So then, what are they selecting for with the habit?
Yeah, it tests if the candidate enjoys the programming-adjacent puzzle game of LeetCode, which is a perfectly decent game to play, but it is just a signal.
If somebody grinds LeetCode while hating it, it signals they are really desperate for a job and willing to jump through hoops for you.
If somebody actually enjoys this kind of stuff, that is probably a signal that they are a rare premium nerd and you should hire them. But they probably play Project Euler as well (is that still up?).
If somebody figures out a one-trick to minmax their LeetCode score… I dunno, I guess it means they are aware of the game and want to solve it efficiently. That seems clever to me…
In defense of questions like this, “willingness to prepare” is a significant differentiator.
Constraint solvers are also often not applicable to the real world either.
Many formulations scale in a way that is completely unusable in practice.
Knowing how to get tools like Z3 or Gurobi to solve your problems is it's own skill and one that some companies will hire for, but it's not a general purpose technology you can throw at everything.
This post is the unironic version of "FizzBuzz in TensorFlow": just because you have a big hammer doesn't mean everything is a nail. And I say that as an enjoyer of big hammers, including SMT solvers.
I always prefer the "The Soviets used a pencil" type engineers. Simplicity is quite close to greatness.
No, it's not a measure of cleverness. It's about whether you can break down problems and apply common methods. That's the entire job. It's a learnable skill, and honestly, resisting learning because of personal biases is a red flag in my book.
The point is to test whether or not you put in the time to sharpen common patterns, and also to test your communication ability.
Super common patterns like dynamic programming?
Most interviews are based on the premise that if a diabetic can't synthesize their own insulin in their basement, they are somehow cheating at the game of life.
If my wife's blood sugar is high, she takes insulin. If you need to solve a constraint problem, use a constraint solver.
If your company doesn't make and sell constraint solving software, why do you need me to presume that software doesn't exist and invent it from scratch?
It’s explicitly not testing if you can synthesize insulin in a crisis, it’s a general aptitude test for “if we tell you you need to cram this textbook on how to synthesize insulin by next week and then ask you how to do it on a call, can you coherently repeat that back to us?”
If you can figure out that a problem can be efficiently solved with a constraint solver then you can also write the two for loops and maybe some auxiliary recursive function to solve the given toy instance.
In defense of coding tests, most people who can't solve simple dynamic programming problems generally turn out to be pretty poor programmers IRL.
At least that's been my experience. I'm sure there are exceptions.
What?
My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.
Because of this, I've just started rejecting leetcode/AI interview steps outright... I'll do homework, shared screen, 1:1, etc., but won't do the above. I tend to fail them about half the time. It feels worse given that I wouldn't even mind studying on leetcode-type sites if they actually had decent explainers for the questions and working answers to review. I know this kind of defeats the challenge aspect, but learning is about 10x harder without it.
It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well. Without any chance of additional info/questions it's literally a setup to fail.
edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions.
The LC interviews are like testing people how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way.
But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.
I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)
https://github.com/ensisoft/detonator
but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.
5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer.
>The LC interviews are like testing people how fast they can run 100m after practice
Ah, but, the road to becoming good at Leetcode/100m sprint is:
>a slow arduous never ending jog with multiple detours and stops along the way
Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.
>how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way
I've always explained it as demonstrating your ping pong skills to get on the basketball team.
Mistakenly read this as you wrote that 2D game engine (which looks awesome btw) for a job interview to get the job: "I can't compete with this!!! HOW CAN I COMPETE WITH THESE TYPES OF SUBMISSIONS!?!?! OH GAWD!!!"
Yes. If work was leetcode problem solving, I would actually enjoy it. Updating npm packages and writing tiny features that get canned a week later is all not that stimulating.
> SMEGMA companies
Microsoft, Google, Meta, Amazon, I'm guessing... but, what are the other two?
"SMEGMA companies." :D
And nowadays people are blatantly using AI to answer questions like this (https://www.finalroundai.com/coding-copilot), even trying to stumble through design questions using AI.
100%. I just went through an interview process where I absolutely killed the assignment (had the best one they'd seen), had positive signal/feedback from multiple engineers, CEO liked me a lot etc, only to get sunk by a CTO who thought it would be cool to give me a surprise live test because of "vibe coding paranoia". 11 weeks in the process, didn't get the role. Beyond fucking stupid.
This was the demo/take-home (for https://monumental.co): https://github.com/rublev/monumental
It's funny because this repo really does seem vibe-coded. Obviously I have no reason not to believe you, but man! All those emojis in the install shell script - I've never seen anyone other than an AI do that :) Maybe you're the coder that the AI companies trained their AI on.
Sorry about the job interview. That sucks.
Hah I feel you there. Around 2 years ago I did a take home assignment for a hiring manager (scientist) for Merck. The part B of the assignment was to decode binary data and there were 3 challenges: easy, medium and hard.
I spent around 40 hours of time and during my second interview, the manager didn't like my answer about how I would design the UI so he quickly wished me luck and ended the call. The first interview went really well.
For a couple of months, I kept asking the recruiter if anyone successfully solved the coding challenge and he said nobody did except me.
Out of respect, I posted the challenge and the solution on my github after waiting one year.
Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem: https://github.com/jonnycoder1/merck_coding_challenge
A surprise live test is absolutely the wrong approach for validating whether someone's done the work. IMO the correct approach is to go through the existing code with the applicant and have them explain how it works. Someone who used AI to build it (or in the past had someone else build it for them) wouldn't be able to do a deep dive into the code.
Damn... that's WAY more than I'll do for an interview process assignment... I usually time box myself to an hour or two max. I think the most I did was a tic-tac-toe engine but ran out of time before I could make a UI over it.
That is an insane amount of work for a job application. Were you compensated for it at all?
This repo has enough red flags to warrant some suspicion.
You have also not attempted to hide that, which is interesting.
Wait, what.. you did this as a take home for a position? Damn that looks excessive.
how much did this job pay?
It's not really memorizing solutions. Yes, you can get quite far by doing so, but follow-ups will trip people up. However, if you have memorized it and can answer the follow-ups, I don't see a problem with Leetcode-style problems. Problem solving is about pattern matching, and the more patterns you know and can match against, the better your ability to solve problems.
It's a learnable skill, and better to pick it up now. Personally, I've solved Leetcode-style problems in interviews which I hadn't seen before, and some of them were dynamic programming problems.
These days it's a highly learnable skill, since GPT can solve many of the problems while also coming up with very good explanations of the solution. Better to pick it up than not.
It is and isn't. I'd argue it's not memorizing exact solutions (think copy-paste) but memorizing the fastest algos to accomplish X.
And some people might say, well, you should know that anyway. The problem for me is, and I'm not speaking for every company of course, that you never really use a lot of this stuff in most run-of-the-mill jobs. So of course you forget it, then have to study again pre-interview.
Problem solving is the best way to think of it, but it's awkward for me (and probably others) to spend minutes thinking, feeling pressured as someone just stares at you. And that's where memorizing the hows of typical problems helps.
That said, I just stopped doing them altogether. I'd passed a few doing the 'memorizing' described above, only to start and realize it wasn't at all relevant to the work we were actually doing. In that way I guess it's a bit of a two-way filter now.
I'm fine with that in an interview... I'm not fine with it in a literally AI-graded assignment where you cannot ask clarifying questions. In those cases, if I don't have a memorized answer, a lot of times I cannot grasp the question at hand.
I've been at this for 30+ years now, I've built systems that handle millions of users and have a pretty good grasp at a lot of problem domains. I spent about a decade in aerospace/elearning and had to pick up new stuff and reason with it all the time. My issue is specifically with automated leetcode pre-interview screening, as well as the gamified sites themselves.
I'd say that learning to solve tough LeetCode problems has very little (if not precisely zero) value in terms of learning to do something useful as a programmer. You will extremely rarely need to solve these types of tougher select-the-most-efficient-algorithm problems in most real-world S/W dev jobs, and nowadays if you do, then just ask AI.
Of course you may need to pass an interview LeetCode test, in which case you may want to hold your nose and put in the grind to get good at them, but IMO it's really not saying anything good about the kind of company that thinks this is a good candidate filter (especially for more experienced ones), since you'd have to be stupid not to use AI if actually tasked with needing to solve something like this on the job.
There's probably a general positive correlation between knowing a lot of specific algorithms/techniques (i.e. as tested by LC) and being a great developer. HOWEVER I think the scenario of a real world job is far more a subset of that.
Firstly these questions you get like 30 mins to do, which is small compared to the time variance introduced by knowing or not knowing the required algorithm. If you know it you'll be done in like 10 mins with a perfect answer. Whereas if you don't know you could easily spend 30 mins figuring it out and fail. So while on average people passed by LC may be good engineers, in any one scenario it's likely you reject a good engineer because the variance is large. And then it's easy to see why people get upset, because yeah it feels dodgy to be rejected when you happen to not know some obscure algorithm off the top of your head. The process could be fairer.
Secondly, as many say, the actual job is rarely this technical stuff under such time pressure. Knowing algorithms or not means basically nothing when the job is like debugging CI errors for half your day.
Ironic that you’re touting these puzzles as useful interviewing techniques while also admitting that ChatGPT can solve them just fine.
If you’re hiring software engineers by asking them questions that are best answered by AI, you’re living in the past.
Been in software development for 30 years. I have no idea what "Leetcode" is. As far as I know I've never been interviewed with "Leetcode", and it seems like I should be happy about that.
And when someone uses "leet" when talking about computing, I know that they aren't "elite" at all and it's generally a red flag for me.
Leetcode with no prep is a pretty decent coding skill test
The problem is that it is too amenable to prep
You can move your score like 2 standard deviations with practice, which makes the test almost useless in many cases
On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
Few people are in both circles of "can memorize answers" and "don't understand what they are doing".
You would need "photographic" memory
leetcode just shows why interviews are broken. As a former senior dev (retired now, thanks to almost dying) I can tell you that the ability to write code is like 5% of the job. Every interview I've ever attended has wasted gazillions of dollars and has robbed the company of 10X that amount.
Until companies can focus on things like problem solving, brainstorming, working as a team, etc. the situation won't improve. If I am wrong, why is it that the vast majority of my senior dev and dev management career involved the things I just mentioned?
(I had to leave the field, sadly, due to disability)
Oh, and HR needs to stop using software to filter. Maybe ask for ID or something; as it is, the filters are flagging everyone and the software is sinking the ship, and you all with it.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
What is there to clarify? Leetcode-type questions are usually clear, much clearer than in real life projects. You know the exact format of the input, the output, the range for each value, and there are often examples in addition to the question. What is expected is clear: given the provided example inputs, give the provided example outputs, but generalized to cover all cases of the problem statement. The boilerplate is usually provided.
One may argue that this is one of the reasons why leetcode-style questions are unrealistic: they are too well specified compared to real-life problems, which are often incomplete or even wrong and require you to fill in the gaps. Also, in real life, you may not always get to ask for clarification: "here, implement this", "but what about this part?", "I don't know, and the guy who knows won't be back before the deadline, do your best"
The "coin" example is a simplification; the actual problem statement is likely more complete, but the author of the article probably felt these details were not relevant to the article, though they would be for someone taking the test.
These interviews seem designed to filter out applicants with active jobs. In fact, I'd say they seem designed specifically to select for new CS graduates and H1B hires.
The ones I've gotten have all seemed more like tests of my puzzle-solving skills than coding.
The worst ones I've had came with extra problems, though:
One I was only told about when I joined the interview, along with the fact that they would be watching live.
One where they wanted me streaming my face the whole time (maybe some people are fine with that).
And one that would count it against me if I tabbed to another page. So no documentation, because they assume I'm just googling it.
Still it's mostly on me to prepare and expect this stuff now.
You can make up API calls which you can say you'd implement later. As long as these are not tricky blocks, you'll be fine.
But isn't that the main skill actually being tested? How the candidate goes about solving problems? I mean, if all we did was measure people's skill at making sweeping assumptions, we'd likely end up with people who oversimplify problems, and all of software would go to shit and get insanely complex... Is the hard part writing the lines of code or solving the problem?
Skill? LC is testing rote memorization of artificial problems you most likely never encounter in actual work.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.
The issue is that leetcode is something you end up with after discovery + scientific method + time, but there's no space in the interview process for any of that.
Your mind slides off leetcode problems because it reverses the actual on-the-job process and loses any context that'd give you a handle on the issue.
IMO leetcode has multiple problems.
1. People can be hired to take the test for you - surprise surprise.
2. It is akin to deciding if someone can write a novel from reading a single sentence.
Hiring people for the test is only valid for online assessments. For an onsite, it's very obvious if the candidates have cheated on the OA. I've been on the other side and it's transparent.
> It is akin to deciding if someone can write a novel from reading a single sentence.
For most decent companies, the hiring process involves multiple rounds of these challenges along with system designs. So it's like judging writing ability by having candidates actually write and come up with sample plots. Not a bad test.
Where I interviewed, you had effectively 1 or 2 LC questions, but the interviewer offered clarifying questions, making for a real-time discussion and coding exercise.
This solves one problem, but having to live-code does add performance anxiety to the mix.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Huh? Of course you can. If you're practicing on leetcode, there's a discussion thread for every question where you can ask questions till the cows come home. If you're in a job interview, ask the interviewer. It's supposed to be a conversation.
> I wouldn't even mind the studying on leetcode types sites if they actually had decent explainers
If you don't find the hundreds of free explanations for each question to be good enough, you can pay for Leetcode Pro and get access to editorial answers which explain everything. Or use ChatGPT for free.
> It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well.
I don't mean to be rude, but it is 100% a matter of skill. That's good news! It means if you put in the effort, you'll learn and improve, just like I did and just like thousands and thousands of other humans have.
> Without any chance of additional info/questions it's literally a setup to fail.
Well with that attitude you're guaranteed to fail! Put in the work and don't give up, and you'll succeed.
Last year, I saw a lot of places do effectively AI/automated pre-interview screenings with a leetcode web editor and video capture... This is what I'm talking about.
I'm fine with hard questions in an actual interview.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Yeah, this one confused me. Not asking clarifying questions is one of the surest ways of failing an interview. Kudos if the candidate asks something the interviewers haven't thought of, although it's rare, as most problems go through a vetting process (along with leak detection).
Many interviews now involve automated exercises on websites that track your activity (don't think about triggering a focus change event on your browser, it gets reported).
Also, the reviewer gets an AI report telling them whether you copied the solution from somewhere (expressed as a % probability).
You have a few minutes and you're on your own.
If you pass that abomination, maybe, you get in-person ones.
It's ridiculous what software engineers impose on their peers when hiring; ffs, lawyers, surgeons, and civil engineers get NO practical nor theoretical test, none.
How does asking clarifying questions work when a non-programmer is tasked with performing the assessment, because their programmers are busy doing other things, or find it degrading and pointless?
Whenever constraint programming languages come up, you can’t miss mentioning Håkan Kjellerstrand. He’s put together an amazing collection of problems and examples—including plenty for MiniZinc—on his site: https://www.hakank.org/minizinc/
Not only has he made a great website, he's also a super nice guy
> Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?"
This completely undermines the author's main point. Constraint solvers don't solve hard leetcode problems if they can't solve large instances quickly enough.
Many hard leetcode problems can be solved fairly simply with more lax runtime requirements -- coming up with an efficient solution is a large part of the challenge.
> coming up with an efficient solution is a large part of the challenge
More of my work tends to be "rapidly adapting solutions to additional and changing requirements" than "coming up with the most efficient solution", so why are we interviewing for something where in practice we just throw a couple of extra instances at it? (Your specific job role may vary, of course, but I usually just increase the scaling factor.)
Author's point is that coming up with the most efficient solution might not actually be a good measure of your real-world performance.
And that's been a long-running critique of leetcode, of course. However, this is a neat framing where you can still use the same problems but give solutions that perform better when measured by "how adaptable is this to new requirements?"
A loonnngggg time ago when I was green, and wasn't taught about constraint solving in my State University compsci program, I encountered the problem when trying to help a friend with his idea.
He wanted to make an app to help sports club owners schedule players for the day based on a couple simple rules. I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know.
I often look back on that as a lesson of my own hubris. And it's helped me a lot when discussing estimates and timelines and expectations.
This might be a dumb question (as I'm not familiar with constraint solvers) but would a linear optimization approach be better? I've used linear optimization for scheduling in the past. The nice thing is that linear optimization handles rule conflicts well, because you just set weights on all your rules and the optimizer will find the "least bad" solution to the conflicts.
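To make that concrete, here's a minimal sketch of weighted soft rules (the players, hours, and weights are all made up, and OR-Tools' LP wrapper is just what I'm using for illustration):

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("GLOP")

# Hypothetical: two players' scheduled hours; 12 hours of play must be covered.
alice = solver.NumVar(0, 10, "alice_hours")
bob = solver.NumVar(0, 10, "bob_hours")
solver.Add(alice + bob == 12)  # hard rule

# Soft rules: each player prefers at most 5 hours. Violations are allowed but penalized.
alice_over = solver.NumVar(0, 10, "alice_over")
bob_over = solver.NumVar(0, 10, "bob_over")
solver.Add(alice - alice_over <= 5)
solver.Add(bob - bob_over <= 5)

# The weights encode who minds more; the optimizer picks the "least bad" compromise.
solver.Minimize(3 * alice_over + 1 * bob_over)

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print(alice.solution_value(), bob.solution_value())  # e.g. 5.0 and 7.0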
This is what major sports leagues use for season scheduling (source: https://mathstodon.xyz/@j2kun/108975072813565989)
Well, if you're using MiniZinc, you're free to use a CP solver, MIP solver, SAT solver, or CP-SAT-LP solver. In general the model is roughly the same, even though some formulations work better for some solvers than others.
But CP (and CP-SAT) solvers tend to do very well on scheduling problems
> I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know
This reminds me of high school ~25 years ago when I just started learning TI-Basic on my calculator and was dabbling in VB6 on my PC, and I was flipping burgers at Steak n Shake as my part time job. The manager moaned about how hard it was to write the employee schedules out each week (taking into account requested days off, etc) and I thought “ooh, I know how to write software now, I’ll make a scheduling program!” I told the manager I bet I could do it.
… it took a very short time for 16 year old me to realize writing scheduling software to solve for various constraints is pretty damned hard. I never brought it up after that.
> This was a question in a different interview (which I thankfully passed):
> Given a list of stock prices through the day, find maximum profit you can get by buying one stock and selling one stock later.
It was funny to see this, because I give that question in our interviews. If someone suggested a constraint solver... I don't know what I'd have done before reading this post (since I had only vaguely even heard of a constraint solver), but after reading it...
Yeah, I would still expect them to be able to produce a basic algorithm, but even if their solution was O(n^2) I would take it as a strong sign we should hire them, since I know there are several different use cases for our product that require generalized constraint solving (though I know it by other names) and having a diverse toolset on hand is more important in our domain than writing always-optimal code.
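For reference, the basic algorithm I'd hope to eventually see is a single pass tracking the cheapest buy so far (a quick sketch; the sample prices are made up):

def max_profit(prices):
    # Track the lowest price seen so far and the best sell-minus-buy spread.
    best, lowest = 0, float("inf")
    for p in prices:
        lowest = min(lowest, p)
        best = max(best, p - lowest)
    return best

print(max_profit([7, 1, 5, 3, 6, 4]))  # 5: buy at 1, sell at 6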
Something that works poorly is often better than something that doesn't work in an instant. This is what I have to tell myself every time I step into a massive, excessively complex mess of a codebase. Many business rules aren't clearly defined ahead of time in a way that always translates well to the code and starting over is a mistake more often than not imo.
Update... refactor... update... break off... etc. A lot of times, I'm looking at something where the tooling is 8+ years old, and the first order of business should be to get it working on a current and fully patched version of whatever is in place... replacing libraries that are no longer supported, etc. From there, refactor what you can, break off what makes sense into something new, refactor again. This process, in my experience, has been far more successful than ground up, new versions.
I say this while actively working on a "new" version of a piece of software. The new version is web based; the "old" version is a winforms VB.Net app from over a decade ago. The old version has bespoke auth, the new version will rely on Azure Entra... Sometimes starting over is the answer, but definitely not always.
I would be blown away if a candidate solved it using DP and then said “but let me show you how to use a constraint solver”. Immediate hire.
This. Jump through their hoops first.
+1
- Constraint solvers? That's a nice concept, I heard about this once. However, for the purposes of the interview, let's just write some Python code, I wanna see your way of thinking...
(I think it's almost impossible to win your interviewer over to constraint solvers, even though the concept itself is great)
import z3
from ortools.sat.python import cp_model
Have you ever tried, or is this your assumption of what the interviewer would say?
A long time ago, just for fun, I wrote a constraint solver program that could figure out which high-yield banks to put money into, from those recommended on Doctor of Credit (https://www.doctorofcredit.com/high-interest-savings-to-get/), to maximize the yield given <= `X` money, <= `Y` # of transactions on debit cards, and other constraints (boolean and real-valued).
I played with it for a while when interest rates were really low and used the thing for my own rainy-day savings (I did get tired of changing accounts all the time).
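The model was roughly this shape (a from-scratch sketch: account names, rates, caps, and limits are all made up, and OR-Tools' MIP wrapper stands in for whatever I actually used):

from ortools.linear_solver import pywraplp

# Made-up accounts: (name, APY, debit transactions required per month, balance cap for that APY)
ACCOUNTS = [("BankA", 0.05, 10, 10_000), ("BankB", 0.04, 0, 25_000), ("BankC", 0.06, 15, 5_000)]
MAX_MONEY, MAX_TXNS = 30_000, 20

solver = pywraplp.Solver.CreateSolver("CBC")
deposits, opened = [], []
for name, rate, txns, cap in ACCOUNTS:
    d = solver.NumVar(0, cap, f"deposit_{name}")   # real-valued: how much to park there
    o = solver.BoolVar(f"open_{name}")             # boolean: do we open the account at all
    solver.Add(d <= cap * o)                       # no deposit without opening the account
    deposits.append(d)
    opened.append(o)

solver.Add(sum(deposits) <= MAX_MONEY)                                   # <= X money
solver.Add(sum(o * a[2] for o, a in zip(opened, ACCOUNTS)) <= MAX_TXNS)  # <= Y debit transactions
solver.Maximize(sum(d * a[1] for d, a in zip(deposits, ACCOUNTS)))       # maximize yield

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    for (name, *_), d in zip(ACCOUNTS, deposits):
        print(name, d.solution_value())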
Repo?
I find this post interesting independent of the question of whether leetcode problems are a good tool for interviews. It's: here are some kinds of problems constraint solvers are useful for. I can imagine a similar post about non-linear least-squares solvers like Ceres.
Yeah, especially for learning how to use a solver!
> Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration.
He's exactly right about what tutorials are out there for constraint programming (I've messed around with it before, and it was pretty much Sudoku). Having a large body of existing problems to practice against is great.
SAT, SMT, and constraint solvers are criminally underutilized in the software industry. We need more education about what they are, how they work, and what sorts of problems they can solve.
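Even a toy query shows how little code it takes (a made-up z3 example, purely for flavor):

from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
# Declaratively state the constraints; the solver figures out how to search.
s.add(x > 0, y > 0, x + 2 * y == 20, x % 3 == 0)
if s.check() == sat:
    print(s.model())  # e.g. [x = 6, y = 7]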
At least personally, I've been very underwhelmed by their performance when I've tried using them. Usually past a few dozen variables or so is when I start hitting unacceptable exponential runtimes, especially for problem instances that are unsatisfiable or barely-satisfiable. Maybe their optimizations are well-suited for knapsack problems and other classic OR stuff, but if your problem doesn't fit the mold, then it's very hit-or-miss.
I'm surprised to hear this. Modern SAT solvers can easily handle many problems with hundreds of thousands of variables and clauses. Of course, there are adversarial problems where CDCL solvers fail, but I would be fascinated if you can find industrial (e.g. human written for a specific purpose) formulas with "dozens of variables" that a solver can't solve fairly quickly.
I've worked on a model with thousands of variables and hundreds of thousands of parameters with a hundred constraints. There are pitfalls you need to avoid, like reification, but it's definitely doable.
Of course, NP-hard problems blow up exponentially, but that doesn't change if you use another exact solving technique.
Using local search is very useful for scaling, but at the cost of proven optimality.
I think this hits the nail on the head: performance is the obstacle, and you can't get good performance without some modeling expertise, which most people don't have.
I wish I knew better how to use them for these coding problems, because I agree with GP they're underutilized.
But I think if you have a constraint problem that has an efficient algorithm but chokes a general constraint solver, that should be treated as a bug in the solver. It means the solver uses bad heuristics somewhere.
Well, they aren't magic. You have to use them correctly and apply them to problems that match how they work. Proving something is unsat is co-NP-complete, worst-case intractable as far as anyone knows. These solvers don't change that.
In what way? They're useful for toy problems like this but they're very slow on larger problems.
SAT solvers are used daily to generate solutions for problems that have literally millions of variables. So what you said is just wrong on its face. Yes, some talented people can write custom code that solves specific problems faster than a general-purpose solver, particularly for easy special cases of the general problem, but most of the time that results in the programmer recreating the guts of a solver customized to a specific problem. There's sort of a corollary of Greenspun's Tenth Rule: every sufficiently complicated program also contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a SAT or SMT solver.
Define large. We've written a model that solves real business issues in 8K lines of MiniZinc, and it wasn't slow.
The conventional wisdom is that the larger you make an NP-hard problem, the slower it's going to get, regardless of algorithm.
What are some good books to get started on the subject?
> The real advantage of solvers, though, is how well they handle new constraints.
Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.
Agreed - adaptability to changing requirements is a very strong advantage for real-world work.
The first attempt to formalise some practical problem into a mathematical optimisation problem is often not quite right. You discover new things that change the problem statement after reviewing example solutions with experts who exclaim "that solution couldn't possibly work, because <discovery of additional requirements>".
A neat dynamic programming solution is a glorious thing, but dynamic programming relies on a mathematical problem with some elegant recursive substructure that you can exploit. Sometimes such elegant recursive substructure is going to be broken by additional requirements or side constraints that you discover during the project -- so the real valuable problem you actually need to solve has no such substructure, and you need to throw out your dynamic programming approach and go back to a blank sheet of paper to design a viable attack on the real valuable messy problem.
For lower-risk delivery of useful outcomes, it probably makes the most sense to use general purpose flexible black box solvers like constraint or MIP solvers early in the project, and focus efforts on rapidly discovering and extracting all the requirements, so you zero in on the real problem quicker. You don't want to prematurely invest a lot of time and effort building a highly efficient bespoke elegant solution to the wrong problem, then discover a month before a delivery deadline that everything needs to be thrown out.
Here's an easy ad-hoc Prolog program for the first problem:
You can just paste it into [1] to execute in the browser. Using 60 as target sum is more interesting as you can enumerate over two solutions.
(Posting again what I already posted two days ago [2] here)
[1]: https://news.ycombinator.com/item?id=45205030
Of course, the challenge is that the next question after solving a leetcode problem is often to explain and optimize the performance characteristics, which in Prolog can get stupidly hairy.
I've actually used pseudo-Prolog to explain how to solve leetcode problems to a friend. Write the facts, then write the constraints, and then state your problem. Usually by the last part, they've already understood how to solve it, or at least how to write the program that can answer the question.
The documentation for Google's OR-Tools comes with many interesting examples of constraint problems, e.g.
https://developers.google.com/optimization/lp/stigler_diet
I agree with the other comments here that using a constraint solver defeats the purpose of the interview. But this seems like a good case for learning how to use a constraint solver! Instead of spending hours coding a custom solution to a tricky problem, you could use a constraint solver at first and only write a custom solution if it turns out to be a bottleneck.
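For the article's coin problem, the whole solver model is about a dozen lines. A sketch using OR-Tools CP-SAT on the post's [10, 9, 1] / target-37 example:

from ortools.sat.python import cp_model

def min_coins(denominations, target):
    model = cp_model.CpModel()
    # One variable per denomination: how many of that coin to use.
    counts = [model.NewIntVar(0, target, f"n_{d}") for d in denominations]
    model.Add(sum(c * d for c, d in zip(counts, denominations)) == target)
    model.Minimize(sum(counts))  # fewest coins

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return {d: solver.Value(c) for d, c in zip(denominations, counts)}
    return None

print(min_coins([10, 9, 1], 37))  # {10: 1, 9: 3, 1: 0} -> 4 coins

And when a new requirement lands (say, "use at most two of any coin"), it's one more model.Add(...) per coin rather than a rewrite.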
Here’s my empirical evidence based on several recent “coding session” interviews with a variety of software companies. Background: I have been developing software for over 30 years, I hold a few patents, I’ve had a handful of modestly successful exits. I kind of know a little bit about what I am doing. At this stage in my career, I am no longer interested in the super early stage startup lifestyle, I’m looking at IC/staff engineer type roles.
The mature, state-of-the-art software companies do not give me leetcode problems to solve. They give me interesting & challenging problems that force me to both a) apply best practices of varying kinds and yet b) be creative in some aspects of the solution. And these problems are very amenable to “talking through” what I’m doing, how I’m approaching the solution, etc. Overall, I feel like they are effective and give the company a good sense of how I develop software as an engineer. I have yet to “fail” one of these.
It is the smaller, less mature companies that give me stupid leetcode problems. These companies usually bluntly tell me their monolithic codebase (always in a not-statically-typed language) is a total mess and they are “working on domain boundaries”.
I fail about 50% of these leetcode things because I don’t know the one “trick” to yield the right answer. As a seasoned developer, I often push back on the framing and tell them how I would do a better solution by changing one of the constraints, where the change would actually better match the real world problem they’re modeling.
And they don’t seem to care at all. I wonder if they realize that their bullshit interviewing process has both a false positive and a false negative problem.
The false negatives exclude folks like myself who could actually help to improve their codebase with proper, incremental refactorings.
The false positives are the people who have memorized all the leetcode problems. They are hired and write more shitty monolithic hairball code.
Their interviewing process reinforces the shittiness of their codebase. It’s a spiral they might never get out of.
The next time I get one of these, I think I’m going to YOLO it, pull the ripcord early and politely tell them why they’re fucked.
Yes, it is a death spiral; if you are to lead them, you have to know what to fix when, to avoid making things worse.
The solution is typically not just to fix their code. They got in over their heads by charging ahead and building something they'll regret, but their culture (and likely the interviewer's personal self-regard) depends on believing their (current) tech leaders.
So yes, the interviewer is most comfortable if you chase and find the ball they're hiding.
But the leadership question is whether you can relieve them of their ignorance without also stripping their dignity and future prospects.
I've found (mostly with luck) that they often have a sneaking suspicion that something isn't right, but didn't have the tools or pull to isolate and address it. As a leader if you can elicit that, and then show some strategies for doing so, you'll improve them and the code in a way that encourages them that what was hard to them is solvable with you, which helps them rely on you for other knotty problems.
It's not really that you only live once; it's that this opportunity is here now and should have your full attention, and to be a leader you have to address it directly but from everyone's perspective.
Even if you find you'd never want to work with them, you'd still want to leave them feeling clearer about their code and situation.
I agree with everything you've written.
Clarifying my "YOLO" usage: I was being a little flippant, in the sense that when ending an interview early with direct critical feedback, the most likely outcome is a "burned bridge" with that company (you're never coming back).
Which reminds me one of my favorite twisted idioms: We'll burn that bridge when we get to it!
I guess I've finally found an acceptable real-world use case for this phrase :)
There is something to be said for being senior in a way where the people interviewing you are junior enough that they don't have the experience to "click" with the nuance that comes with said problems.
That being said, from a stoicism point of view, the interview ends up becoming a meta-challenge on how you approach a problem that is not necessarily appropriately framed, and how you'd go about doing and/or gently correcting things as well.
And if they're not able to appreciate it, then success! You have found that it is not the right organization for you. No need to burn the door down on the way out, just feel relief in that you dodged a bullet (hopefully).
In a few cases, I really liked the company and what they were doing, got along wonderfully with the hiring manager. Then bombed their leetcode BS.
So when I say I’d politely tell them why they’re fucked, it’s actually out of a genuine desire to help the company.
But you’re right, I’m also thankful that they showed their red flag so visibly, early enough, and I’m happy to not move forward!
Maybe the process works as designed. It's just that "hiring the best developer" isn't necessarily the goal here
MiniZinc is a really great modeling language for constraint programming. Back in August I gave a talk at NordConsNet25 on how we used it to build a product configurator in what's (probably) the world's largest MiniZinc model.
https://pierre-flener.github.io/research/NordConsNet/NordCon...
As an interviewer, I gave one pretty simple task (people solved it in as little as 8 minutes) that wasn't using any real CS, even though I'm good at it.
The reason was that about 70% of candidates couldn't write a simple loop -- the task was there to filter those out. The actual solution didn't matter much; it was a binary decision. The actual conversation matters more.
This. Main point of giving candidates CS problems was always to weed out those who couldn't program at all, but somehow were still in the industry. I worked with such people - it's unpleasant.
Somehow someone figured that giving harder problems should result in better candidates. Personally, despite having passed most of the tests I've been subjected to, I don't see the connection.
My beef with someone using a constraint solver here is that they almost certainly wouldn't be able to guarantee anything about their solution other than that, if it produces an output, it will be correct. They won't be able to guarantee running time, space usage, or (probably for most tools) even a useful progress indicator. The problem isn't merely that they used another tool - the problem is that they abstracted away critical details. Had they provided a handwritten solution from scratch with the same characteristics, it would've exhibited the same problems.
This doesn't mean they can't provide a constraint solver solution, but if they do, they'd better be prepared to address the obvious follow-ups. If they're prepared to give an efficient solution afterward in the time left, then more power to them.
> First of all, Nice ChatGPT response
What the heck are you talking about? I didn't even visit ChatGPT today.
That's an en-dash, not an em-dash
Interview:
> We can solve this with a constraint solver
Ok, using your favorite constraint solver, please write a solution for this.
> [half an hour later]
Ok, now how would you solve it if there was more than 100 data points? E.g. 10^12?
Maybe some preprocessing, maybe column generation, depends on the problem.
Well, how would you solve it if there were 10^12 data points?
Everyone misunderstands what LC focuses on. It focuses on: did you grind like everyone else who did to get into this company/region/tech? It allows people who didn't go to the most prestigious schools (e.g. Cal, Stanford, etc.) to still get into Silicon Valley companies if they show they are willing to fit the mold. It's about showing you are a conformist and are willing to work very hard to do something that you won't realistically use much in your day-to-day job.
It's about signaling. That's all it is. At least it's not finance, where it's all dictated by whether you were born into the right family that got you into the elite boarding schools for high school, etc. I would've never made it into finance unless I'd done a math PhD and become a quant.
> The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.
Really? This kind of interview needs to go away.
However, coding interviews are useful. It's just that "knowing the trick" shouldn't be the point. The point is whether the candidate knows how to code (without AI), can explain themselves and walk through the problem, explain their thought processes, etc. If they do a good enough reasoning job but fail to solve the problem (they run out of time, or they go on an interesting tangent that ultimately proves fruitless) it's still a "passed the test" situation for me.
Failure would mean: "cannot code anything at all, not even a suboptimal solution. Cannot reason about the problem at all. Cannot describe a single pitfall. When told about a pitfall, doesn't understand it nor its implications. Cannot communicate their thoughts."
An interview shouldn't be a university exam.
I agree with this approach. With the exception of testing for specific domain knowledge relevant to the work role, the coding interview should just be about testing the applicant's problem-solving skills and grasp of their language of choice. I would even prefer a take-home style problem that we can review in-person over some high-pressure puzzle. The leetcode interview doesn't seem to correspond to anything a developer actually does day to day.
The bar is so high nowadays that simply being able to talk intelligently about the problem, ask clarifying questions, get to an inefficient solution, and code it up well does not pass muster.
Even getting an efficient algorithm basically right is no guarantee.
In some cases there might be alternative solutions with different tradeoffs, and you might have to come up with those as well.
Miss a counterexample? Even if you get it after a single hint? Fuck you, you're out. I can find someone who doesn't need the hint.
That might be true in the general case; I haven't interviewed for a job in some years, so I may be out of touch.
All I can say is that I do conduct interviews, and that I follow the above philosophy (at least for my round).
Internally, do constraint solvers just do brute force?
It's interesting how powerful constraint solvers are (I've never used one).
But actually all of these problems are fairly simple if we allow brute force solutions. They just become stacked loops.
No. They use sophisticated algorithms called propagators to prune the invalid solutions from the domains of possible solutions in conjunction with a search strategy, like branch and bound
I've never used constraint solvers; they seem like black magic. Need to fill that gap in my knowledge.
But how do they work? What is the complexity of the solution? For example, for the stock prices, is it O(n^2)?
General constraint satisfaction problem is NP-hard.
Been working on a calendar scheduling app that uses a constraint solver to auto-schedule events based on scheduling constraints (time-of-day preferences and requirements, recurrence rules) and to track goal progress (are you slipping on your desired progress velocity? Get a notification). It's also a meal planner: from a corpus of thousands of good, healthy recipes, it schedules a meal plan that reuses ingredients nearing expiration, models your pantry, estimates grocery prices, and meets your nutritional goals. Constraint solvers are black magic.
Which solver do you use?
Google ORTools’ CpSolver, with IntervalVars for the calendar portion.
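Roughly like this for the no-double-booking part (a stripped-down sketch with made-up events, not the app's actual code):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
HORIZON = 24  # hypothetical one-day horizon in hours

events, intervals = [], []
for name, duration in [("workout", 1), ("meal_prep", 2), ("deep_work", 4)]:
    start = model.NewIntVar(0, HORIZON - duration, f"{name}_start")
    end = model.NewIntVar(duration, HORIZON, f"{name}_end")
    intervals.append(model.NewIntervalVar(start, duration, end, name))
    events.append((name, start))

model.AddNoOverlap(intervals)  # no double-booked events

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name, start in events:
        print(name, solver.Value(start))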
Constraint programming complements ML/AI/LLM nicely in systems solving real-world problems [1], [2], [3], [4]. One of the highest-stakes and hardest engineering problems, namely wireless spectrum frequency allocation, was solved by constraint programming and presented in [1]. Now anyone can try to solve the problem in a MiniZinc tutorial. Fun fact: Knuth is also in the presentation audience, trying to get some info on the subject matter, perhaps for his next book.
[1] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:
https://www.youtube.com/live/TknN8fCQvRk
[2] "We Really Don't Know How to Compute!" - Gerald Sussman - MIT (2011) [video]:
https://youtube.com/watch?v=HB5TrK7A4pI
[3] Google OR-Tools:
https://developers.google.com/optimization
[4] MiniZinc:
https://www.minizinc.org/
I avoided all this just by becoming a contractor, i ship solution, no me tests me for leetcode ability
> faangguyindia
> contractor
Do FAANG hire contractors in India?
I mean, yeah, they do.
apex predator of grug is complexity
No me no nice
Does using a constraint solver actually solve the question under the time ... constraints?
If not, how can you claim you have solved the problem?
Terrible question for an interview, and further highlights how our interviews are broken.
Greedy algorithms tell you nearly nothing about the candidate's ability to code. What are you going to see? A single loop, some comparison and an equality. Nearly every single problem that can be solved with a greedy algorithm is largely a math problem disguised as programming. The entire question hinges on the candidate finding the right comparison to conduct.
The author himself finds that these are largely math problems:
> Lots of similar interview questions are this kind of mathematical optimization problem
So we're not optimizing to find good coders, we're optimizing to find mathematicians who have 5 minutes of coding experience.
At the risk of self-promotion, I'm fairly opinionated on this subject. I have a podcast episode where I discuss exactly this problem (including discuss greedy algorithms), and make some suggestions where we could go as an industry to avoid these kind of bad-signal interviews:
https://socialengineering.fm/episodes/the-problem-with-techn...
My best interview consisted of:
- what projects have you done
- what tech you worked with and some questions about decisions
- debugging an issue they encountered before
- talking about interests and cultural fit
Instant green flag for me. Too bad that after receiving my offer covid happened and they had a hiring freeze.
This is how I prefer to interview. I don’t understand the mindset of LeetCode interviewers. It’s a weak signal because it’s easily gamed (false positives), and it misses too many strong candidates who have better things to do in their spare time (false negatives, and a bias towards one type of candidate -> lack of diversity in experience).
I've seen some cases where someone bragged about projects they participated in but then struggled to write a simple loop.
You can try to suss out a smooth-talking faker, or just tell them to write a thing and then talk only after they demonstrate basic comprehension.
I've always maintained that solving LeetCode is more about finding the hidden "trick" that makes the solution, if not easy, one that is already "solved" in the general sense. Look at the problem long enough and realize "oh that's a sliding window problem" or somesuch known solution, and do that.
I've never heard of a "dynamic programming algorithm". Read Wikipedia and it seems to mean... use a recursive function? The coin problem is an easy recursive problem (I just wrote the code for it to make sure my old brain can still do it).
It's usually covered in a first or second year algorithms course. It's a recursive problem definition paired with tabling to eliminate redundant work. Critically, the recursive subproblems have to be overlapping (they'll do some of the same work as the other recursive steps) to see any benefit. You can implement it top-down and add a cache (memoization) or you can implement it bottom-up and fill out the table iteratively rather than through recursion.
If you just implement it recursively without tabling then you end up re-doing work and it's often an exponential runtime instead of polynomial.
To clarify on overlapping, consider Fibonacci:
F(n-1) includes F(n-2) in its definition, and both F(n-2) and F(n-1) include F(n-3). If you implement this naively it produces an exponential runtime. Once you add the table, the single initial recursive call to F(n-1) will end up, through its chain of calls, storing the result of F(n-2) and now the implementation is linear instead of exponential.
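In code, the difference between the naive recursion, the memoized (top-down) version, and the bottom-up table looks like this (minimal sketches):

from functools import lru_cache

# Naive: exponential, because F(n-1) and F(n-2) recompute shared subproblems.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Top-down DP: same recursion, but memoized so each F(k) is computed once -> linear.
@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up DP: fill the table iteratively, no recursion at all.
def fib_table(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a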
> Read wikipedia and it seems to mean....use a recursive function?
Yes, that's one (common) approach to dynamic programming. The recursive function calls are memoized so that previous calculations are remembered for future calls. Overlapping subproblems become trivial if you can reuse previously computed values. Recursion with memoization is top-down dynamic programming.
So, all in all, pretty basic stuff. Why would anyone worth their salt have a problem with that?
The coin problem is like the intro to it. Try some of the Codeforces ones :skull:
My Leetcode ability is unpredictable. I either ace the test or I can't finish it in time. The only way to make outcomes predictable is practice, but I have too much agency and not enough time for that.
Leetcode requires a very different set of skills from software engineering. Software engineering isn't so much about solving puzzles as it is about making good decisions. It's about knowing what's important and knowing where the boundaries are. It's about anticipating problems in their broadest form; creating just the right amount of flexibility and allowing the solution to solidify as your understanding of the problem deepens.
I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9).
That's a bad algorithm, then, not a greedy algorithm. Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
If a candidate's only options are to either use a constraint solver or to implement a naïver-than-usual greedy algorithm, well, sorry, but that's a no-hire.
The algorithm they're using must be "Until you hit the limit, take the highest denomination coin that fits beneath the limit. If you can't hit the limit, fall back one step."
That fits your definition of "use as many coins as possible of a given large denomination before dropping back to the next-lower denomination" but will find 10-10-10-1-1-1-1-1-1-1 and stop before it even tries 10-9-anything.
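In code, that plain greedy loop is something like this (a quick sketch); it locks in 10-10-10 and never backtracks:

def greedy_coins(denominations, target):
    # Repeatedly take the largest coin that still fits in the remaining amount.
    coins, remaining = [], target
    for d in sorted(denominations, reverse=True):
        while remaining >= d:
            coins.append(d)
            remaining -= d
    return coins

print(greedy_coins([10, 9, 1], 37))  # [10, 10, 10, 1, 1, 1, 1, 1, 1, 1]: 10 coins, never tries 10-9-9-9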
There is no greedy solution to the problem. A greedy algorithm would start by taking 3 10-cent coins to make 37, which is wrong.
> Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
Yes, and it won't work on the problem described. The greedy algorithm only works on certain sets of coins (US coin denominations are one of those sets), and fails in at least some cases with other coin sets (as illustrated in the bit you quoted).
D'oh, that makes sense, I didn't consider the case where it would keep returning 10.
By the way, ChatGPT was able to solve this problem and give the correct solution.
It's in numerous algorithms textbooks and probably a lot of code repositories, so that's not surprising.
Interesting that an informative comment, without any implications or insinuations, got downvoted.
HN sucks sometimes. "Intellectual discourse" and "hacker curiosity" my ass.
SAT & CSP are criminally underutilized in CS classes, because profs have no clue about them.
That's why in so many industries they prefer to hire engineers and OR grads and teach them Python, rather than hire SWEs and teach them modeling.
An interesting meta problem is to determine an antagonistic set of denominations, like the [10,9,1] example given in the post, to maximize the number of coins selected by the greedy method.
Isn't it trivially [1]?
Perhaps what is meant is "maximize the difference between the optimal result and the one calculated by the naive greedy algorithm".
https://codeforces.com/problemset/problem/1889/D
Most leetcode problems fall into the same ~15 patterns, and hard problems most of the time require you to use a combination of two patterns to solve them.
> Given an array of integers heights representing the histogram's bar height where the width of each bar is 1, return the area of the largest rectangle in the histogram.
Maybe it's my graphics programmer brain firing on all cylinders, but isn't this just a linear scan, maintaining a list of open rectangles?
Yes, you just need to maintain a stack of rectangles ordered from lowest to highest. You only ever have to push and pop the top of the stack, so the runtime is O(n).
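Something like this (a sketch, with made-up sample heights):

def largest_rectangle(heights):
    # Stack holds (start_index, height) of open rectangles, heights strictly increasing.
    stack, best = [], 0
    for i, h in enumerate(heights):
        start = i
        # Close every open rectangle taller than the current bar.
        while stack and stack[-1][1] > h:
            start, height = stack.pop()
            best = max(best, height * (i - start))
        stack.append((start, h))
    # Close whatever is still open at the end.
    n = len(heights)
    for start, height in stack:
        best = max(best, height * (n - start))
    return best

print(largest_rectangle([2, 1, 5, 6, 2, 3]))  # 10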
All problems cited are about testing if you can write if's, loops and recursion (or a stack/queue).
They aren't testing if you can write a solver. They are testing if you can use the bricks that solvers are built out of, because other software, when it gets interesting, is built out of the same stuff.
You: Oh I know, I can use a constraint solver for this problem!
Interviewer: You can't use a constraint solver
Use the right tool for the right job!
Would love to know how to actually assess the runtime complexity of constraint solvers like this.
It's insane how many of these new "AI" companies don't let you use AI or even your own IDE for coding interviews. And most questions from such companies are LC-type problems, so they know any AI tool can one-shot them.
I discourage it, but I let them use it, and then give them a specific problem that I know your average Claude 4 or GPT 5 will just not get right.
Actually people perform worse in an interview using AI because they spend time trying to understand what the tool is proposing and then time to figure out why that doesn’t work.
My experience has been quite different. With Cursor/Claude Code, I've ended up writing full-fledged solutions (running CLI/web servers with loggers and unit tests for each piece of functionality). We're talking crawlers, a cab-booking service like Uber, search engines with seed data. All within the hour.
Why is that insane? Seems logical to me.
Definitely not insane. Ironic is the correct term. The field is evolving, and a lot of these companies talk about replacing outdated practices using AI. Asking software engineers not to use their own tools to solve problems falls in the same bucket.
I tried a couple of times, a long time ago, to solve them with CP/integer programming.
The interviewers were clueless, so after 10 minutes of trying to explain, I quit and fell back to just writing the freaking algo they were expecting to see.
So LeetCode has fallen into the same trap as ProjectEuler (anyone remember that?)
The first example doesn't make a lot of sense, because coins only ever exist in "well-behaved" denominations. There is no currency with a nine-unit coin.
It would have been worthwhile if this article had briefly touched upon how the constraint solvers are implemented, rather than avoiding this altogether
I think LeetCode tests two things. First, your willingness to grind to pass an exam, which is actually a good proxy for some qualities you need to thrive in a corporate environment: work is often grungy and you need to push through without getting distracted or discouraged.
Second, it's a covert test for culture fit. Are you young (and thus still OK with grinding for tests)? Are you following industry trends? Are you in tune with the Silicon Valley culture? For the most part, a terrible thing to test, but also something that a lot of "young and dynamic" companies want to select for without saying so publicly. An AI startup doesn't want people who have a family life and want to take 6 weeks off in the summer. You can't put that in a job req, but you can come up with a test regime that drives such people away.
It has very little to do with testing the skills you need for the job, because quite frankly, probably fewer than 1% of the SWE workforce is solving theoretical CS problems for a living. Even if that's you, that task is more about knowing where to look for answers or what experiments to try, rather than being able to rattle off some obscure algorithm.
Nice post, I wasn't aware that there were so many dedicated constraint solving systems.
Any problem can be solved by a sufficient number of nested for loops.
(if you have enough time)
One level of nested for loop for each type of coin. (Run them until i*coin is larger than the input)
Populate a lookup array indexed by amount. $7.50 becomes arr[750] = [7,1,0,0,0,0], which represents [7x100, 1x50, 0x25, 0x10, 0x5, 0x1].
With each loop, check if the array entry exists; if so, check whether the new combination uses fewer coins. [7,1,0,... is better than [7,0,2,... because 8 coins is a better solution than 9!
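Roughly like so (a toy sketch using the post's [10, 9, 1] coins rather than $7.50 in US denominations, because six nested loops to 750 would take... a while):

from itertools import product

COINS, TARGET = [10, 9, 1], 37
best = None
# One "nested for loop" per coin type, generated with itertools.product.
ranges = [range(TARGET // c + 1) for c in COINS]
for counts in product(*ranges):
    if sum(c * n for c, n in zip(COINS, counts)) == TARGET:
        if best is None or sum(counts) < sum(best):
            best = counts
print(dict(zip(COINS, best)))  # {10: 1, 9: 3, 1: 0}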
And a stack.
Couldn't help myself, sorry
me neither
Or....
Coding on a Phone is hell
Whoever agrees to do LC problems during an interview has zero dignity.
A Gröbner basis is an interesting way to solve the 1st problem.
Reminder that the research says the interview process should match the day-to-day expectations as closely as possible, even up to a trial day/week/month. All these brain teasers are low on signal, not to mention bad for women and minorities.
"Follow-up question since you solved that so quickly: implement a constraint solver."
“Many hard problems are, to me, quite easy”
I understand that this is an advertisement for your product, but please do not do this here. Thank you.
You're right to point that out, thanks for keeping things honest. My intention was simply to offer a helpful TL;DR for a very long thread, as it's a feature of a project I'm working on (mentioned in my profile).
I'm trying to contribute value without being spammy. If this crosses a line for the community, I'm happy to listen and adjust.
Honestly, I didn’t realise it was an advert until you told me it was.
It seems like a moot point, since there are already more people who can clear leetcode hards than there are available positions that require it.
It’s like scoring over 2 standard deviations above the mean on an IQ test: great, but by definition millions of people can do that.
Edit: I’ve heard HFT firms are now moving to doing it on paper in person to prevent any kind of cheating for their interviews, which would make it a better signal.
Yes, that is better. From my own experience, LeetCode does indeed enhance one's logical thinking and coding skills. I believe this is the reason why FAANG has always tested algorithm and data structure skills in interviews over the years.
A little off topic, but I don't know much about greedy algorithms or dynamic programming. I got curious. This conversation was very insightful and now it's very clear in my mind: https://chatgpt.com/share/68c46d0b-8858-8004-aa03-f7ce321988...