Many hard LeetCode problems are easy constraint problems

10 hours ago (buttondown.com)

My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.

Because of this, I've just started rejecting leetcode/AI interview steps outright... I'll do homework, shared screen, 1:1, etc., but won't do the above. I tend to fail them about half the time. What makes it worse is that I wouldn't even mind studying on leetcode-type sites if they actually had decent explainers for the questions and working answers to go through. I know this kind of defeats the challenge aspect, but learning is about 10x harder without them.

It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well. Without any chance of additional info/questions it's literally a setup to fail.

edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions.

  • The LC interviews are like testing people how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way.

    But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.

    I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)

    https://github.com/ensisoft/detonator

    but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.

    • > SMEGMA companies

      Microsoft, Google, Meta, Amazon, I'm guessing... but, what are the other two?

    • >how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way

      I've always explained it as demonstrating your ping pong skills to get on the basketball team.

    • Mistakenly read this as you wrote that 2D game engine (which looks awesome btw) for a job interview to get the job: "I can't compete with this!!! HOW CAN I COMPETE WITH THESE TYPES OF SUBMISSIONS!?!?! OH GAWD!!!"

    • >The LC interviews are like testing people how fast they can run 100m after practice

      Ah, but, the road to becoming good at Leetcode/100m sprint is:

      >a slow arduous never ending jog with multiple detours and stops along the way

      Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.

      Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.

      That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.


  • It's not really memorizing solutions. Yes, you can get quite far by doing so, but follow-ups will trip people up. However, if you have memorized it and can answer follow-ups, I don't see a problem with Leetcode-style problems. Problem solving is about pattern matching, and the more patterns you know and can match against, the better your ability to solve problems.

    It's a learnable skill and better to pick it up now. Personally, I've solved Leetcode-style problems in interviews which I hadn't seen before, and some of them were dynamic programming problems.

    These days it's a highly learnable skill, since GPT can solve many of the problems while also coming up with very good explanations of the solution. Better to pick it up than not.

    • It is and isn't. I'd argue it's not memorizing exact solutions (think copy-paste) but memorizing the fastest algos to accomplish X.

      And some people might say, well, you should know that anyway. The problem for me is (and I'm not speaking for every company, of course) that you never really use a lot of this stuff in most run-of-the-mill jobs. So of course you forget it, then have to study again pre-interview.

      Problem solving is the best way to think of it, but it's awkward for me (and probably others) to spend minutes thinking, feeling pressured as someone just stares at you. And that's where memorizing the hows of typical problems helps.

      That said, I just stopped doing them altogether. I'd passed a few doing the "memorizing" described above, only to start and realize it wasn't at all relevant to the work we were actually doing. In that way I guess it's a bit of a two-way filter now.


    • I'm fine with that in an interview... I'm not fine with that in a literally AI-graded assignment where you cannot ask clarifying questions. In those cases, if you don't have a memorized answer, a lot of the time I can't even grasp the question at hand.

      I've been at this for 30+ years now. I've built systems that handle millions of users and have a pretty good grasp of a lot of problem domains. I spent about a decade in aerospace/elearning and had to pick up new stuff and reason with it all the time. My issue is specifically with automated leetcode pre-interview screening, as well as the gamified sites themselves.

    • I'd say that learning to solve tough LeetCode problems has very little (if not precisely zero) value in terms of you as a programmer learning to do something useful. You will extremely rarely need to solve these types of tougher select-the-most-efficient-algorithm problems in most real-world S/W dev jobs, and nowadays if you do, then just ask AI.

      Of course you may need to pass an interview LeetCode test, in which case you may want to hold your nose and put in the grind to get good at them, but IMO it's really not saying anything good about the kind of company that thinks this is a good candidate filter (especially for more experienced ones), since you'd have to be stupid not to use AI if actually tasked with needing to solve something like this on the job.


    • Ironic that you’re touting these puzzles as useful interviewing techniques while also admitting that ChatGPT can solve them just fine.

      If you’re hiring software engineers by asking them questions that are best answered by AI, you’re living in the past.

    • Few people are in both circles of "can memorize answers" and "don't understand what they are doing".

      You would need "photographic" memory


    • Been in software development for 30 years. I have no idea what "Leetcode" is. As far as I know I've never been interviewed with "Leetcode", and it seems like I should be happy about that.

      And when someone uses "leet" when talking about computing, I know that they aren't "elite" at all and it's generally a red flag for me.

    • Leetcode with no prep is a pretty decent coding skill test

      The problem is that it is too amenable to prep

      You can move your score by something like 2 standard deviations with practice, which makes the test almost useless in many cases

      On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it


  • 100%. I just went through an interview process where I absolutely killed the assignment (had the best one they'd seen), had positive signal/feedback from multiple engineers, CEO liked me a lot etc, only to get sunk by a CTO who thought it would be cool to give me a surprise live test because of "vibe coding paranoia". 11 weeks in the process, didn't get the role. Beyond fucking stupid.

    This was the demo/take-home (for https://monumental.co): https://github.com/rublev/monumental

    • It's funny because this repo really does seem vibe-coded. Obviously I have no reason not to believe you, but man! All those emojis in the install shell script - I've never seen anyone other than an AI do that :) Maybe you're the coder that the AI companies trained their AI on.

      Sorry about the job interview. That sucks.


    • A surprise live test is absolutely the wrong approach for validating whether someone's done the work. IMO the correct approach is to go through the existing code with the applicant and have them explain how it works. Someone who used AI to build it (or in the past had someone else build it for them) wouldn't be able to do a deep dive into the code.


    • Hah, I feel you there. Around 2 years ago I did a take-home assignment for a hiring manager (a scientist) at Merck. Part B of the assignment was to decode binary data, and there were 3 challenges: easy, medium, and hard.

      I spent around 40 hours on it, and during my second interview the manager didn't like my answer about how I would design the UI, so he quickly wished me luck and ended the call. The first interview went really well.

      For a couple of months, I kept asking the recruiter if anyone had successfully solved the coding challenge, and he said nobody did except me.

      Out of respect, I posted the challenge and the solution on my github after waiting one year.

      Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem: https://github.com/jonnycoder1/merck_coding_challenge


    • Damn... that's WAY more than I'll do for an interview process assignment... I usually time box myself to an hour or two max. I think the most I did was a tic-tac-toe engine but ran out of time before I could make a UI over it.


  •   > you can't ask clarifying questions
    

    But isn't that the main skill actually being tested? How the candidate goes about solving problems? I mean, if all we did was measure people's skill at making sweeping assumptions, we'd likely end up with people who oversimplify problems, and all of software would go to shit and get insanely complex... Is the hard part writing the lines of code, or solving the problem?

    • Skill? LC is testing rote memorization of artificial problems you most likely never encounter in actual work.

  • Where I interviewed, you effectively had 1 or 2 LC questions, but the interviewer welcomed clarifying questions, making for a real-time discussion and coding exercise.

    This solves one problem, but having to live-code does add performance anxiety to the mix.

  • > My biggest problem with leetcode type questions is that you can't ask clarifying questions.

    What is there to clarify? Leetcode-type questions are usually clear, much clearer than in real life projects. You know the exact format of the input, the output, the range for each value, and there are often examples in addition to the question. What is expected is clear: given the provided example inputs, give the provided example outputs, but generalized to cover all cases of the problem statement. The boilerplate is usually provided.

    One may argue that this is one of the reasons why leetcode-style questions are unrealistic: they are too well specified compared to real-life problems, which are often incomplete or even wrong and require you to fill in the gaps. Also, in real life, you may not always get to ask for clarification: "here, implement this", "but what about this part?", "I don't know, and the guy who knows won't be back before the deadline, do your best"

    The "coin" example is a simplification; the actual problem statement is likely more complete, but the author of the article probably felt these details were not relevant to the article, though they would be for someone taking the test.

  • > My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.

    The issue is that leetcode is something you end up with after discovery + scientific method + time, but there's no space in the interview process for any of that.

    Your mind slides off leetcode problems because it reverses the actual on-the-job process and loses any context that'd give you a handle on the issue.

  • These interviews seem designed to filter out applicants with active jobs. In fact, I'd say that they seem specifically for selecting new CS graduates and H1B hires.

  • IMO leetcode has multiple problems.

    1. People can be hired to take the test for you - surprise surprise.

    2. It is akin to deciding if someone can write a novel from reading a single sentence.

    • Hiring people for the test is only valid for online assessments. For an onsite, it's very obvious if the candidates have cheated on the OA. I've been on the other side and it's transparent.

      > It is akin to deciding if someone can write a novel from reading a single sentence.

      For most decent companies, the hiring process involves multiple rounds of these challenges along with system design. So it's like judging writing ability by having candidates actually write and come up with sample plots. Not a bad test.


  • > My biggest problem with leetcode type questions is that you can't ask clarifying questions.

    Huh? Of course you can. If you're practicing on leetcode, there's a discussion thread for every question where you can ask questions till the cows come home. If you're in a job interview, ask the interviewer. It's supposed to be a conversation.

    > I wouldn't even mind studying on leetcode-type sites if they actually had decent explainers

    If you don't find the hundreds of free explanations for each question to be good enough, you can pay for Leetcode Pro and get access to editorial answers which explain everything. Or use ChatGPT for free.

    > It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well.

    I don't mean to be rude, but it is 100% a matter of skill. That's good news! It means if you put in the effort, you'll learn and improve, just like I did and just like thousands and thousands of other humans have.

    > Without any chance of additional info/questions it's literally a setup to fail.

    Well with that attitude you're guaranteed to fail! Put in the work and don't give up, and you'll succeed.

    • > My biggest problem with leetcode type questions is that you can't ask clarifying questions.

      Yeah, this one confused me. Not asking clarifying questions is one of the surest ways of failing an interview. Kudos if the candidates ask something that the interviewers haven't thought of, although it's rare, as most problems go through a vetting process (along with leak detection).


    • Last year, I saw a lot of places doing effectively AI/automated pre-interview screenings with a leetcode web editor and video capture... This is what I'm talking about.

      I'm fine with hard questions in an actual interview.

    • Many interviews now involve automated exercises on websites that track your activity (don't even think about triggering a focus-change event in your browser; it gets reported).

      Also, the reviewer gets an AI report telling them whether you copied the solution from somewhere (expressed as a % probability).

      You have a few minutes and you're on your own.

      If you pass that abomination, maybe you get the in-person ones.

      It's ridiculous what software engineers impose on their peers when hiring; ffs, lawyers, surgeons, and civil engineers get NO practical or theoretical test, none.


    • How does asking clarifying questions work when a non-programmer is tasked with performing the assessment, because their programmers are busy doing other things, or find it degrading and pointless?

  • The ones I've gotten have all seemed more like tests of my puzzle-solving skills than coding.

    The worst ones I've had came with extra problems:

    One I was only told about when I joined the interview, and they would be watching live.

    One where they wanted me streaming my face the whole time (maybe some people are fine with that).

    And one that would count it against me if I tabbed to another page. So no documentation, because they assume I'm just googling it.

    Still, it's mostly on me to prepare and expect this stuff now.

I feel like if I'm being asked this in an interview, they're not asking me to use a constraint solver, they're asking me to _write_ a constraint solver. Just for a specific constraint problem, not a more general constraint solver.
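A toy sketch of what that means in practice: a generic backtracking constraint solver fits in a few dozen lines, and an interview answer is essentially this search specialized (and pruned) for one problem. All names and the toy instance here are illustrative, not from the article:

```python
def solve(variables, constraints, assignment=None):
    """Minimal backtracking constraint solver (illustrative, not optimized).

    variables: dict mapping name -> list of candidate values (its domain).
    constraints: list of (names, predicate) pairs; a predicate is checked
    once every variable it mentions has been assigned.
    """
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        return assignment  # every variable assigned, all constraints satisfied
    # pick the next unassigned variable
    var = next(v for v in variables if v not in assignment)
    for value in variables[var]:
        assignment[var] = value
        ok = all(
            pred(*(assignment[v] for v in names))
            for names, pred in constraints
            if all(v in assignment for v in names)
        )
        if ok:
            result = solve(variables, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None  # no value fits: backtrack

# Toy instance: find digits a, b with a + b == 10 and a < b.
solution = solve(
    {"a": list(range(10)), "b": list(range(10))},
    [(("a", "b"), lambda a, b: a + b == 10),
     (("a", "b"), lambda a, b: a < b)],
)
```

With domains this small, brute-force backtracking is plenty; a real solver earns its keep with constraint propagation and smarter variable ordering.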

  • You're right, but that just shows how fundamentally silly this interview approach is.

    In any real engineering situation I can solve 100% of these problems. That's because I can get a cup of coffee, read some papers, look in a textbook, go for a walk somewhere green and think hard about it... and yes, use tooling like a constraint solver. Or an LLM, which knows all these algorithms off by heart!

    In an interview, I could solve 0% of these problems, because my brain just doesn't work that way. Or at least, that's my expectation: I've never actually considered working somewhere that does leetcode interviews.

    • I was told to use ANY language in an interview. I asked them if they were sure, then solved it with J. They were not too pleased and asked if I could use another language, so I did Prolog and we moved on to the next question. Then the idiot had the audacity to say I should not use "J and Prolog" but any commonly known language. I asked if assembly was fine, and they said no. Perhaps Python or JavaScript. I did the rest in Python; needless to say, I didn't get the job. :-)


    • I haven't been asked leetcode questions in a while and when I was asked, it was an easy level problem. I don't know where they ask hard leetcode problems, I also never solved a hard leetcode problem on my own.


    • More exactly, you can't invent on the spot algorithms that took others who knows how many years to invent. I.e. the question ends up being more about whether you know a specific algorithm, which turns into "invent it if you don't know it". It's absolutely silly to test for the ability to invent one on the spot, so it's a pretty pointless interview question really.


  • This. Literally every problem in NP can be cast as a constraint problem. The question of whether a solver is the right solution varies a lot depending on the application, and in an interview, it's almost by definition not the right solution.

    They can also be dreadfully slow (and typically are) compared to just a simple dynamic program.

  • This will be true in some interviews, but not in all.

    I'm generally against using leetcode in interviews, but wherever I've seen it used it's usually for one reason & one reason alone: known dysfunctional hiring processes. These are processes where the participants in the hiring process are aware of the dysfunction in their process but are either powerless or - more often - too disorganised to properly reform the process.

    Sometimes this is semi-technical director level staff leveraging HR to "standardise" interview techniques by asking the same questions across a wide range of teams within a large corp. Other times this is a small underresourced team cobbling together interview questions from online resources in a hurry, not having the cycles to write a tailored process for themselves.

    In these cases, you're very likely to be dealing with a technical interviewer who is not an advocate of leetcode interviewing & is attempting to "look around" the standardised interview scoring approach to identify innovative stand out candidates. In a lot of cases I'd hazard even displaying an interest in / some knowledge of solvers would count significantly in your favour.

  • If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.

    Do you know how few people in this world even know what a constraint solver is, let alone how to correctly define the problem into one?

    I used a constraint solver to solve a homework problem once in my CS degree 3rd year. My god just writing the damn constraints was a huge cognitive load!

    • I did this: wrote an Essence-prime program to generate Minion solver code for a simple instance of the knapsack problem, as part of a startup's "solve one of these and get an interview" challenges. Because I had used those tools recently for a contract job (and wrote/presented a paper at the invitation of the solver authors), I thought it would be fun, and I didn't really want the job. I got an interview, but every dev was like "why did you use a cannon to swat a fly?" and was clearly concerned that without strict supervision I would create baroque towers of garbage for them to clean up.


    • > If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.

      I do hope you're exaggerating here, but in case you aren't: this is an extremely simplistic view of what (software) engineers have to do, and thus of what hiring managers should optimize for. I'd put "ability to work in a team" above "raw academic/reasoning ability" for the vast majority of engineering roles, any day.

      Not that the latter doesn't matter, of course, but it's by no means the one and only measure.


    • I've won a couple of hackathons with just CP-SAT & Linear Programming, which led to my first jobs. I'm surprised more people don't know/use it. Very inefficient compared to the "correct" answer, but the development speed is much faster.

      > If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot

      Sometimes you just don't want someone that takes these shortcuts. I think being able to solve the problem without a constraint solver is much more impressive

    • This - the only downside to a constraint solver is that it's usually slower. If you want them to write a fast algorithm, then specify that, and have an actual metric for it. If they can pass it with the declarative approach, then great. If not, they should have written a more complicated algorithm.

  • Yes and no: I've asked questions like this in interviews, and I'd count it as a plus if the candidate reached for a constraint solver. They're criminally underused in real-world software engineering and this would show the candidate probably knows how to get the right answer faster instead of wasting a bunch of time.

    Now, if they did answer with a constraint solver, I'd probably ask some followup whiteboard questions to make sure they do actually know how to code. But just giving a constraint solver as an answer definitely wouldn't be bad.

    • Yes, especially if the interviewee said something like 'this may not be asymptotically optimal, but if it's not a known bottleneck, then I might start with a constraint solver to get something working quickly and profile later.' Especially if it's a case where even the brute-force solution is tricky.

      Otherwise penalizing interviewees for suggesting quick-and-dirty solutions reinforces bad habits. "Premature optimization is the root of all evil," after all.


    • A general constraint solver would be terribly inefficient for problems like these. It's a linear problem, and a constraint solver just can't handle O(10^6) variables without some beefy machine.


Great insight. But this is sadly not applicable to interviews.

> It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem

This nails it. The point of these problems is to test your cleverness. That's it. Presenting a not-clever solution of using constraint solvers shows that you have experience and your breadth of knowledge is great. It doesn't show any cleverness.

  • >The point of these problems is to test your cleverness.

    In my experience, interviewers love going to the Leetcode "Top Interview 150" list and using problems in the "Array String" category. I'm not a fan of these problems for the kind of jobs I've interviewed for (backend Python mostly), as they are almost always an "give me an O(n) runtime, O(1) memory algorithm over this array" type challenge that really doesn't resemble my day-to-day work at all. I do not regularly do in-place array algorithms in Python, because those problems are almost always handled by other languages (C, Rust, etc.) where performance is critical.

    I wish interviewers would go to the "Hashmap" section for interviews in Python, JavaScript, etc., type of languages. They are much less about cleverness and more about whether you can demonstrate using the appropriate tools in your language to solve problems that actually do resemble ones I encounter regularly.

    There's also the problem of difficulty tuning on some of these. Problem 169 (Majority Element) being rated "Easy" for getting a O(n) runtime O(1) memory solution is hilarious to me. The algorithm first described in 1981 that does it (Boyer–Moore majority vote algorithm) has a Wikipedia page. It's not a difficult to implement or understand algorithm, but its correctness is not obvious until you think about it a bit, at which point you're at sufficient "cleverness" to get a Wikipedia page about an algorithm named after you. Seems excessive for an "Easy" problem.
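    For the record, the Boyer–Moore majority vote algorithm mentioned above is short once you know the trick; a minimal Python sketch:

```python
def majority_element(nums):
    # Boyer-Moore majority vote: O(n) time, O(1) extra space.
    # Assumes a majority element (> n // 2 occurrences) exists,
    # as the LeetCode problem statement guarantees.
    candidate, count = None, 0
    for x in nums:
        if count == 0:
            candidate = x  # adopt a new candidate
        count += 1 if x == candidate else -1
    return candidate
```

    The code is trivial; the non-obvious part is convincing yourself that the majority element always survives the pairwise cancellation, which is exactly the "cleverness" the parent is pointing at.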

    • Interviews should not be about cleverness. They should test that you can code. I almost never write an algorithm, because all the important algorithms are in my standard library already. Sure, back in school I did implement a red-black tree - I don't remember if it worked, but I implemented it: I can do that again if you need me to, but it will take me several days to get all the details right (most of it looking up how it works again). I use red-black trees all the time, but they are in the language.

      You need to make sure a candidate can program, so asking programming questions makes sense. However, the candidate should not be judged on whether they finish or get an optimal, or even correct, answer. You need to know if they write good code that you can understand, and are on a path where, given a reasonable amount of time on a realistic story, they would finish it and get it correct. If someone has seen the problem before, they may get the correct answer, but if they haven't, they won't, and they shouldn't be expected to get the right answer in an hour.

    • Majority Element is rated easy because it can be trivially solved with a hashmap in O(N) space and that's enough to pass the question on Leetcode. The O(1) space answer is probably more like a medium.
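      A sketch of that hashmap version (O(n) time but O(n) extra space), using Python's `collections.Counter`:

```python
from collections import Counter

def majority_element(nums):
    # Tally every value; the majority element (> n // 2 occurrences,
    # guaranteed by the problem) is necessarily the most common one.
    counts = Counter(nums)
    return counts.most_common(1)[0][0]
```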


    • Honestly in day to day programming I find data types & associated APIs are so so much more important than algorithms.

      I would rather work with a flexible data type with suboptimal performance than a brittle data type that maybe squeezes out some extra performance.

      Your example of in-place array mutation feels like a good example of such a thing. I feel like there should be a category of interviewing questions for "code-safety" not just performance.

  • > The point of these problems is to test your cleverness.

    Last round I did at Meta, it was clearly to test that you grinded their specific set of problems, over and over again, until you could reproduce them without thinking. It's clear because the interviewers are always a bit surprised when you answer with anything that is not the textbook approach on leetcode and in the interview guide they studied.

    Cleverness is definitely not high on the list of things they're looking for.

    • Cheekily using counting sort ended things the one and only time I agreed to interview with Meta. Definitely improved my inbox for a couple years though.

  • When I interview with problem solving problems, the point is to understand how the candidate thinks, communicates, and decomposes problems. Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".

    Interviewers learn nothing from an instant epiphany, and they learn next to nothing from someone being stumped.

    Unfortunately, this is why we can't have nice things. Problem-solving questions in interviews can be immensely useful tools that, sadly, are rarely used well.

    • > the point is to understand how the candidate thinks, communicates, and decomposes problems.

      100% and it's a shame that over time this has become completely lost knowledge, on both sides of the interview table, and "leetcode" is now seen as an arbitrary rote memorization hurdle/hazing ritual that software engineers have to pass to enter a lucrative FAANG career. Interviewees grind problems until they've memorized every question in the FAANG interview bank, and FAANG interviewers will watch a candidate spit out regurgitated code on a whiteboard in silence, shrug, and say "yep, they used the optimal dynamic programming solution, they pass."


    • > Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".

      Absolutely agree. When I interview, I start with a simple problem and add complexity as they go. Can they write X? Can they combine it with Y? Do they understand how Z is related?


    • > the point is to understand how the candidate thinks, communicates, and decomposes problems

      Interviewers always say this, but consider: would you endorse a candidate who ultimately is unable to solve the problem you've presented them, even if they think, communicate, and decompose problems well? No interview in this industry prizes those things over getting the answer right.


  • Bottom-up dynamic programming algorithms require some cleverness.

    All of the ones listed can be solved with a top-down dynamic programming algorithm. Which just means "write a recursive solution, add caching to memoize it".
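    As a concrete sketch of that recipe, here is the classic coin change problem (fewest coins summing to a target) done top-down in Python, with `functools.lru_cache` as the memoization:

```python
from functools import lru_cache

def min_coins(coins, target):
    # Top-down DP: write the obvious recursion, then cache it.
    # Returns the fewest coins summing to target, or None if impossible.
    @lru_cache(maxsize=None)
    def best(remaining):
        if remaining == 0:
            return 0
        options = [best(remaining - c) for c in coins if c <= remaining]
        options = [o for o in options if o is not None]
        return min(options) + 1 if options else None

    return best(target)

min_coins((1, 5, 11), 15)  # returns 3 (5 + 5 + 5), where greedy 11 + 1 + 1 + 1 + 1 uses 5
```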

    For some of these, you can get cleverer. For example the coin change problem is better solved with an A* search.

    Still, very few programmers will actually need these algorithms. The top thing we need is to recognize when we accidentally wrote a quadratic algorithm. A quick scan of https://accidentallyquadratic.tumblr.com/ shows that even good people on prominent projects make that mistake on a constant basis. So apparently being able to produce an algorithm on the test, doesn't translate to catching an algorithmic mistake in the wild.
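    A canonical instance of the accidentally-quadratic mistake: deduplicating with a membership test against a list, where every `in` is itself a linear scan.

```python
def dedupe_quadratic(items):
    # Looks innocent, but `x not in seen` scans the whole list each time:
    # n iterations x O(n) membership tests = O(n^2) overall.
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def dedupe_linear(items):
    # Same result and order, but set membership is O(1) on average.
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

    Both return the same list; only the second survives a million-element input.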

  • >The point of these problems is to test your cleverness.

    No, it's just memorization of 12 or so specific patterns. The stakes are too high for virtually anyone going in to stake passing on their own inherent problem-solving ability. LeetCode has been so thoroughly gamified that it has lost all utility as a differentiator beyond willingness to prepare.

    • Yeah, it tests if the candidate enjoys the programming-adjacent puzzle game of LeetCode, which is a perfectly decent game to play, but it is just a signal.

      If somebody grinds LeetCode while hating it, it signals they are really desperate for a job and willing to jump through hoops for you.

      If somebody actually enjoys this kind of stuff, that is probably a signal that they are a rare premium nerd and you should hire them. But they probably play Project Euler as well (is that still up?).

      If somebody figures out a one-trick to minmax their LeetCode score… I dunno, I guess it means they are aware of the game and want to solve it efficiently. That seems clever to me…

  • No, it's not a measure of cleverness. It's about whether you can break down problems and apply common methods. That's the entire job. It's a learnable skill, and honestly, resisting learning because of personal biases is a red flag in my book.

Most interviews are based on the premise that if a diabetic can't synthesize their own insulin in their basement, they are somehow cheating at the game of life.

If my wife's blood sugar is high, she takes insulin. If you need to solve a constraint problem, use a constraint solver.

If your company doesn't make and sell constraint solving software, why do you need me to presume that software doesn't exist and invent it from scratch?

  • It’s explicitly not testing if you can synthesize insulin in a crisis, it’s a general aptitude test for “if we tell you you need to cram this textbook on how to synthesize insulin by next week and then ask you how to do it on a call, can you coherently repeat that back to us?”

  • If you can figure out that a problem can be efficiently solved with a constraint solver then you can also write the two for loops and maybe some auxiliary recursive function to solve the given toy instance.

  • In defense of coding tests, most people who can't solve simple dynamic programming problems generally turn out to be pretty poor programmers IRL.

    At least that's been my experience. I'm sure there are exceptions.

A loonnngggg time ago when I was green, and wasn't taught about constraint solving in my State University compsci program, I encountered the problem when trying to help a friend with his idea.

He wanted to make an app to help sports club owners schedule players for the day based on a couple simple rules. I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know.

I often look back on that as a lesson of my own hubris. And it's helped me a lot when discussing estimates and timelines and expectations.

  • > I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know

    This reminds me of high school ~25 years ago when I just started learning TI-Basic on my calculator and was dabbling in VB6 on my PC, and I was flipping burgers at Steak n Shake as my part time job. The manager moaned about how hard it was to write the employee schedules out each week (taking into account requested days off, etc) and I thought “ooh, I know how to write software now, I’ll make a scheduling program!” I told the manager I bet I could do it.

    … it took a very short time for 16 year old me to realize writing scheduling software to solve for various constraints is pretty damned hard. I never brought it up after that.

  • This might be a dumb question (as I'm not familiar with constraint solvers) but would a linear optimization approach be better? I've used linear optimization for scheduling in the past. The nice thing is that linear optimization handles rule conflicts well, because you just set weights on all your rules and the optimizer will find the "least bad" solution to the conflicts.

    • Well, if you're using MiniZinc you're free to use a CP solver, a MIP solver, a SAT solver, or a CP-SAT-LP solver. In general the model is roughly the same, even though some formulations work better for some solvers than others.

      But CP (and CP-SAT) solvers tend to do very well on scheduling problems.

> Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?"

This completely undermines the author's main point. Constraint solvers don't solve hard leetcode problems if they can't solve large instances quickly enough.

Many hard leetcode problems can be solved fairly simply with more lax runtime requirements -- coming up with an efficient solution is a large part of the challenge.

  • > coming up with an efficient solution is a large part of the challenge

    More of my work tends to be "rapidly adapting solutions to additional and changing requirements" than "coming up with the most efficient solution", so why are we interviewing for something where in practice we just throw a couple extra instances at it? (Your specific job role may vary, of course, but I usually just increase the scaling factor)

    Author's point is that coming up with the most efficient solution might not actually be a good measure of your real-world performance.

    And that's been a long-running critique of leetcode, of course. However, this is a neat framing where you can still use the same problems but give solutions that perform better when measured by "how adaptable is this to new requirements?"

- Constraint solvers? That's a nice concept, I heard about this once. However, for the purposes of the interview, let's just write some Python code, I wanna see your way of thinking...

(I think it's almost impossible to convince your interviewer into constraint solvers, while the concept itself is great)

> The real advantage of solvers, though, is how well they handle new constraints.

Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.

Here's an easy ad-hoc Prolog program for the first problem:

    % Given a set of coin denominations,
    % find the minimum number of coins
    % required to make change.
    % IE for USA coinage and 37 cents,
    % the minimum number is four
    % (quarter, dime, 2 pennies).
    num(0). num(1). num(2).
    num(3). num(4). num(5).
    ?- num(Q), num(D), num(P),
       37 is Q * 25 + D * 10 + P.

You can just paste it into [1] to execute in the browser. Using 60 as target sum is more interesting as you can enumerate over two solutions.
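For anyone without a Prolog environment handy, here is a plain-Python rendering of the same bounded enumeration (a sketch, not the original author's code):

```python
from itertools import product

# Same search space as the Prolog program: quarter, dime, and
# penny counts each drawn from 0..5; keep combos summing to 37.
solutions = [(q, d, p)
             for q, d, p in product(range(6), repeat=3)
             if q * 25 + d * 10 + p == 37]
print(solutions)  # [(1, 1, 2)]: one quarter, one dime, two pennies
```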

(Posting again what I already posted two days ago [2] here)

[1]: https://news.ycombinator.com/item?id=45205030

  • I've actually used pseudo-Prolog to explain how to solve leetcode problems to a friend. Write the facts, then write the constraints, and then state your problem. By the last part, they've already understood how to solve it, or at least how to write the program that can answer the question.

  • Of course, the challenge is that the next question after solving a leetcode problem is often to explain and optimize the performance characteristics, which in prolog can get stupidly hairy.

SAT, SMT, and constraint solvers are criminally underutilized in the software industry. We need more education about what they are, how they work, and what sorts of problems they can solve.

  • At least personally, I've been very underwhelmed by their performance when I've tried using them. Usually past a few dozen variables or so is when I start hitting unacceptable exponential runtimes, especially for problem instances that are unsatisfiable or barely-satisfiable. Maybe their optimizations are well-suited for knapsack problems and other classic OR stuff, but if your problem doesn't fit the mold, then it's very hit-or-miss.

    • I've worked on a model with thousands of variables and hundreds of thousands of parameters with a hundred constraints. There are pitfalls you need to avoid, like reification, but it's definitely doable.

      Of course, NP hard problems become complex at an exponential rate but that doesn't change if you use another exact solving technique.

      Local search is very useful for scaling, but at the cost of proven optimality.

    • I think this hits the nail on the head: performance is the obstacle, and you can't get good performance without some modeling expertise, which most people don't have.

      1 reply →

    • I wish I knew better how to use them for these coding problems, because I agree with GP they're underutilized.

      But I think if you have a constraint problem that has an efficient algorithm but chokes a general constraint solver, that should be treated as a bug in the solver. It means that the solver uses bad heuristics somewhere.

      5 replies →

    • Well, they aren’t magic. You have to use them correctly and apply them to problems that match how they work. Proving something is unsat is worst-case exponential. These solvers don’t change that.

      2 replies →

  • In what way? They're useful for toy problems like this but they're very slow on larger problems.

    • Define large. We've written a model which solves real business issues in 8K lines of MiniZinc and it wasn't slow.

      The conventional wisdom is that the larger you make an NP-hard problem, the slower it is going to get, regardless of algorithm.

    • SAT solvers are used daily to generate solutions for problems that have literally millions of variables, so what you said is just wrong on its face. Yes, some talented people can write custom code that solves specific problems faster than a general-purpose solver, particularly for easy special cases of the general problem, but most of the time that results in the programmer recreating the guts of a solver customized to a specific problem. There’s sort of a corollary of Greenspun’s Tenth Rule: every sufficiently complicated program also contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a SAT or SMT solver.

      1 reply →

An interesting meta problem is to determine antagonistic set of denominations, like the [10,9,1] example given in the post, to maximize the number of coins selected by the gradient method.

Long time ago, just for fun, I wrote a constraint-solver program that could figure out which high-yield banks to put money into, from those recommended on Doctor of Credit (https://www.doctorofcredit.com/high-interest-savings-to-get/): given <= `X` money and <= `Y` # of transactions on debit cards, maximize the yield, subject to other constraints (boolean and real-valued).

I played it for a while when interest rates were really low and used the thing for my own rainy-day savings (I did get tired of changing accounts all the time).

I agree with the other comments here that using a constraint solver defeats the purpose of the interview. But this seems like a good case for learning how to use a constraint solver! Instead of spending hours coding a custom solution to a tricky problem, you could use a constraint solver at first and only write a custom solution if it turns out to be a bottleneck.

I find this post interesting independent of the question of whether leetcode problems are a good tool for interviews. It's: here are some kinds of problems constraint solvers are useful for. I can imagine a similar post about non-linear least-squares solvers like Ceres.

  • Yeah, especially for learning how to use a solver!

    > Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration.

    He's exactly right about what tutorials are out there for constraint programming (I've messed around with it before, and it was pretty much Sudoku). Having a large body of existing problems to practice against is great.

Here’s my empirical evidence based on several recent “coding session” interviews with a variety of software companies. Background: I have been developing software for over 30 years, I hold a few patents, I’ve had a handful of modestly successful exits. I kind of know a little bit about what I am doing. At this stage in my career, I am no longer interested in the super early stage startup lifestyle, I’m looking at IC/staff engineer type roles.

The mature, state-of-the-art software companies do not give me leetcode problems to solve. They give me interesting & challenging problems that force me to both a) apply best practices of varying kinds and yet b) be creative in some aspects of the solution. And these problems are very amenable to “talking through” what I’m doing, how I’m approaching the solution, etc. Overall, I feel like they are effective and give the company a good sense of how I develop software as an engineer. I have yet to “fail” one of these.

It is the smaller, less mature companies that give me stupid leetcode problems. These companies usually bluntly tell me their monolithic codebase (always in a not-statically-typed language) is a total mess and they are “working on domain boundaries”.

I fail about 50% of these leetcode things because I don’t know the one “trick” to yield the right answer. As a seasoned developer, I often push back on the framing and tell them how I would do a better solution by changing one of the constraints, where the change would actually better match the real world problem they’re modeling.

And they don’t seem to care at all. I wonder if they realize that their bullshit interviewing process has both a false positive and a false negative problem.

The false negatives exclude folks like myself who could actually help to improve their codebase with proper, incremental refactorings.

The false positives are the people who have memorized all the leetcode problems. They are hired and write more shitty monolithic hairball code.

Their interviewing process reinforces the shittiness of their codebase. It’s a spiral they might never get out of.

The next time I get one of these, I think I’m going to YOLO it, pull the ripcord early and politely tell them why they’re fucked.

  • There is something to be said for being senior in a way where the people interviewing you are junior enough that they don't necessarily have the experience to "click" with the nuance that comes with said problems.

    That being said, from a stoicism point of view, the interview ends up becoming a meta-challenge on how you approach a problem that is not necessarily appropriately framed, and how you'd go about doing and/or gently correcting things as well.

    And if they're not able to appreciate it, then success! You have found that it is not the right organization for you. No need to burn the door down on the way out, just feel relief in that you dodged a bullet (hopefully).

    • In a few cases, I really liked the company and what they were doing, got along wonderfully with the hiring manager. Then bombed their leetcode BS.

      So when I say I’d politely tell them why they’re fucked, it’s actually out of a genuine desire to help the company.

      But you’re right, I’m also thankful that they showed their red flag so visibly, early enough, and I’m happy to not move forward!

  • Yes, it is a death spiral; if you are to lead them, you have to know what to fix when, to avoid making things worse.

    The solution is typically not just to fix their code. They got in over their heads by charging ahead and building something they'll regret, but their culture (and likely the interviewer's personal self-regard) depends on believing their (current) tech leaders.

    So yes, the interviewer is most comfortable if you chase and find the ball they're hiding.

    But the leadership question is whether you can relieve them of their ignorance without also stripping their dignity and future prospects.

    I've found (mostly with luck) that they often have a sneaking suspicion that something isn't right, but didn't have the tools or pull to isolate and address it. As a leader if you can elicit that, and then show some strategies for doing so, you'll improve them and the code in a way that encourages them that what was hard to them is solvable with you, which helps them rely on you for other knotty problems.

    It's not really that you only live once; it's that this opportunity is here now and should have your full attention, and to be a leader you have to address it directly but from everyone's perspective.

    Even if you find you'd never want to work with them, you'd still want to leave them feeling clearer about their code and situation.

    • I agree with everything you've written.

      Clarifying my "YOLO" usage: I was being a little flippant, in the sense that when ending an interview early with direct critical feedback, the most likely outcome is a "burned bridge" with that company (you're never coming back).

      Which reminds me one of my favorite twisted idioms: We'll burn that bridge when we get to it!

      I guess I've finally found an acceptable real-world use case for this phrase :)

      1 reply →

  • Maybe the process works as designed. It's just that "hiring the best developer" isn't necessarily the goal here

Interview:

> We can solve this with a constraint solver

Ok, using your favorite constraint solver, please write a solution for this.

> [half an hour later]

Ok, now how would you solve it if there was more than 100 data points? E.g. 10^12?

Does using a constraint solver actually solve the question under the time ... constraints?

If not, how can you claim you have solved the problem?

It's insane how many of these new "AI" companies don't let you use AI or even your own IDE for coding interviews. And most questions from such companies are LC type problems so they know any AI tool can one shot it.

  • I discourage it but I let them use it and then give them a specific problem that I know your average Claude 4 or GPT 5 will just not get it right.

    Actually people perform worse in an interview using AI because they spend time trying to understand what the tool is proposing and then time to figure out why that doesn’t work.

    • My experience has been quite different. With Cursor/Claude Code, I've ended up writing full-fledged solutions (running CLI/web servers with loggers and unit tests for each piece of functionality). We're talking crawlers, a cab-booking service like Uber, search engines with seed data. All within the hour.

  • Why is that insane? Seems logical to me.

    • Definitely not insane. Ironic is the correct term. The field is evolving, a lot of these companies talk about replacing outdated practices using AI. Asking software engineers to not use their own tools to solve problems falls under the same bucket.

Been working on a calendar scheduling app that uses a constraint solver to auto schedule events based on scheduling constraints (time of day preferences and requirements, recurrence rules), and track goal progress (are you slipping on your desired progress velocity? Get a notification). It’s also a meal planner: from a corpus of thousands of good, healthy recipes, schedule a meal plan that reuses ingredients nearing expiration, models your pantry, estimates grocery prices, meets your nutritional goals. Constraint solvers are black magic.

You: Oh I know, I can use a constraint solver for this problem!

Interviewer: You can't use a constraint solver

> Given an array of integers heights representing the histogram's bar height where the width of each bar is 1, return the area of the largest rectangle in the histogram.

Maybe it's my graphics programmer brain firing on all cylinders, but isn't this just a linear scan, maintaining a list of open rectangles?

  • Yes, you just need to maintain a stack of rectangles ordered from lowest to highest. You only ever have to push and pop the top of the stack, so the runtime is O(n).
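    That scan can be sketched in Python with a stack of (start index, height) pairs; this is the standard monotonic-stack formulation, written from scratch here rather than taken from the thread:

```python
def largest_rectangle(heights):
    # Monotonic stack of (start_index, height), kept increasing by
    # height. Popping a bar closes its rectangle; the popped bar's
    # start index carries over so wider rectangles stay open.
    stack, best = [], 0
    for i, h in enumerate(heights + [0]):  # sentinel 0 flushes the stack
        start = i
        while stack and stack[-1][1] >= h:
            s, sh = stack.pop()
            best = max(best, sh * (i - s))
            start = s
        stack.append((start, h))
    return best
```

    Each bar is pushed and popped at most once, hence the O(n) runtime.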

I tried a couple of times long time ago to solve them with cp/integer programming.

The interviewers were clueless so after 10 minutes of trying to explain to them I quit and fell back to just writing the freaking algo they were expecting to see.

Terrible question for an interview, and further highlights how our interviews are broken.

Greedy algorithms tell you nearly nothing about the candidate's ability to code. What are you going to see? A single loop, some comparison and an equality. Nearly every problem that can be solved with a greedy algorithm is largely a math problem disguised as programming. The entire question hinges on the candidate finding the right comparison to conduct.

The author himself finds that these are largely math problems:

> Lots of similar interview questions are this kind of mathematical optimization problem

So we're not optimizing to find good coders, we're optimizing to find mathematicians who have 5 minutes of coding experience.

At the risk of self-promotion, I'm fairly opinionated on this subject. I have a podcast episode where I discuss exactly this problem (including discuss greedy algorithms), and make some suggestions where we could go as an industry to avoid these kind of bad-signal interviews:

https://socialengineering.fm/episodes/the-problem-with-techn...

  • My best interview consisted of:

    - what projects have you done

    - what tech you worked with and some questions about decisions

    - debugging an issue they encountered before

    - talking about interests and cultural fit

    Instant green flag for me. Too bad that after receiving my offer covid happened and they had a hiring freeze.

    • This is how I prefer to interview. I don’t understand the mindset of LeetCode interviewers. It’s a weak signal because it’s easily gamed (false positives), and it misses too many strong candidates who have better things to do in their spare time (false negatives, bias towards one type of candidate -> lack of diversity in experience).

    • I've seen some cases where someone bragged about projects they participated in but then struggled to write a simple loop.

      You can try to suss out a smooth-talking faker, or just tell them to write a thing and then talk only after they demonstrate basic comprehension.

All problems cited are about testing if you can write if's, loops and recursion (or a stack/queue).

They aren't testing if you can write a solver. They are testing if you can use bricks that solvers are built out of because other software when it gets interesting is built out of the same stuff.

> This was a question in a different interview (which I thankfully passed):

> Given a list of stock prices through the day, find maximum profit you can get by buying one stock and selling one stock later.

It was funny to see this, because I give that question in our interviews. If someone suggested a constraint solver... I don't know what I'd have done before reading this post (since I had only vaguely even heard of a constraint solver), but after reading it...

Yeah, I would still expect them to be able to produce a basic algorithm, but even if their solution was O(n^2) I would take it as a strong sign we should hire them, since I know there are several different use cases for our product that require generalized constraint solving (though I know it by other names) and having a diverse toolset on hand is more important in our domain than writing always-optimal code.
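For context, the basic algorithm I'd expect is the standard single pass that tracks the cheapest buy price seen so far; a sketch:

```python
def max_profit(prices):
    # One pass: remember the lowest price so far, and the best
    # profit from selling at the current price after buying at
    # that earlier low.
    best, lowest = 0, float('inf')
    for p in prices:
        lowest = min(lowest, p)
        best = max(best, p - lowest)
    return best
```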

  • Something that works poorly is often better than something that doesn't work in an instant. This is what I have to tell myself every time I step into a massive, excessively complex mess of a codebase. Many business rules aren't clearly defined ahead of time in a way that always translates well to the code and starting over is a mistake more often than not imo.

    Update... refactor... update... break off... etc. A lot of times, I'm looking at something where the tooling is 8+ years old, and the first order of business should be to get it working on a current and fully patched version of whatever is in place... replacing libraries that are no longer supported, etc. From there, refactor what you can, break off what makes sense into something new, refactor again. This process, in my experience, has been far more successful than ground up, new versions.

    I say this while actively working on a "new" version of a software product. The new version is web based; the "old" version is a WinForms VB.Net app from over a decade ago. The old version has bespoke auth; the new version will rely on Azure Entra... Sometimes, starting over is the answer, but definitely not always.

I've never heard of a "dynamic programming algorithm". Read wikipedia and it seems to mean....use a recursive function? The coin problem is an easy recursive problem (I just wrote the code for it to make sure my old brain can still do it).

  • It's usually covered in a first or second year algorithms course. It's a recursive problem definition paired with tabling to eliminate redundant work. Critically, the recursive subproblems have to be overlapping (they'll do some of the same work as the other recursive steps) to see any benefit. You can implement it top-down and add a cache (memoization) or you can implement it bottom-up and fill out the table iteratively rather than through recursion.

    If you just implement it recursively without tabling then you end up re-doing work and it's often an exponential runtime instead of polynomial.

    To clarify on overlapping, consider Fibonacci:

      F(n) = F(n-1) + F(n-2) # and the base cases
    

    F(n-1) includes F(n-2) in its definition, and both F(n-2) and F(n-1) include F(n-3). If you implement this naively it produces an exponential runtime. Once you add the table, the single initial recursive call to F(n-1) will end up, through its chain of calls, storing the result of F(n-2) and now the implementation is linear instead of exponential.
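    That fix, sketched in Python with `functools.lru_cache` doing the tabling:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each F(k) is computed once and cached, so the naive
    # exponential recursion becomes linear in n.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

    Deleting the decorator line gives back the naive exponential version, which is the whole point of the comparison.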

  • > Read wikipedia and it seems to mean....use a recursive function?

    Yes, that's one (common) approach to dynamic programming. The recursive function calls are memoized so that previous calculations are remembered for future calls. Overlapping subproblems become trivial if you can reuse previously computed values. Recursion with memoization is top-down dynamic programming.

    • So all in all, pretty basic stuff. Why would anyone worth their salt have a problem with that?

Most leetcode problems fall into the same ~15 patterns, and hard problems most of the time require you to use a combination of two patterns to solve them.

I've always maintained that solving LeetCode is more about finding the hidden "trick" that makes the solution, if not easy, one that is already "solved" in the general sense. Look at the problem long enough and realize "oh that's a sliding window problem" or somesuch known solution, and do that.

Everyone misunderstands what LC focuses on. It focuses on: did you grind like everyone else who did to get into this company/region/tech? It allows people who didn't go to the most prestigious schools (e.g. Cal, Stanford, etc.) to still get into Silicon Valley companies if they show they are willing to fit the mold. It's about showing you are a conformist and are willing to work very hard at something you won't realistically use much in your day-to-day job.

It's about signaling. That's all it is. At least it's not finance where it's all dictated by if you were born into the right family that got you into the elite boarding schools for high school, etc. I would've never made it into finance unless I did a math phd and became a quant.

> The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.

Really? This kind of interview needs to go away.

However, coding interviews are useful. It's just that "knowing the trick" shouldn't be the point. The point is whether the candidate knows how to code (without AI), can explain themselves and walk through the problem, explain their thought processes, etc. If they do a good enough reasoning job but fail to solve the problem (they run out of time, or they go on an interesting tangent that ultimately proves fruitless) it's still a "passed the test" situation for me.

Failure would mean: "cannot code anything at all, not even a suboptimal solution. Cannot reason about the problem at all. Cannot describe a single pitfall. When told about a pitfall, doesn't understand it nor its implications. Cannot communicate their thoughts."

An interview shouldn't be an university exam.

  • I agree with this approach. With the exception of testing for specific domain knowledge relevant to the work role, the coding interview should just be about testing the applicant's problem-solving skills and grasp of their language of choice. I would even prefer a take-home style problem that we can review in-person over some high-pressure puzzle. The leetcode interview doesn't seem to correspond to anything a developer actually does day to day.

Reminder that the research says the interview process should match the day to day expectations as closely as possible, even to a trial day/week/month. All these brain teasers are low on signal, not to mention bad for women and minorities.

I think LeetCode tests two things. First, your willingness to grind to pass an exam, which is actually a good proxy for some qualities you need to thrive in a corporate environment: work is often grungy and you need to push through without getting distracted or discouraged.

Second, it's a covert test for culture fit. Are you young (and thus still OK with grinding for tests)? Are you following industry trends? Are you in tune with the Silicon Valley culture? For the most part, a terrible thing to test, but also something that a lot of "young and dynamic" companies want to select for without saying so publicly. An AI startup doesn't want people who have family life and want to take 6 weeks off in the summer. You can't put that in a job req, but you can come up with a test regime that drives such people away.

It has very little to do with testing the skills you need for the job, because quite frankly, probably fewer than 1% of the SWE workforce is solving theoretical CS problems for a living. Even if that's you, that task is more about knowing where to look for answers or what experiments to try, rather than being able to rattle off some obscure algorithm.

Internally, do constraint solvers just do brute force?

It's interesting how powerful constraint solvers are (I've never used one).

But actually all of these problems are fairly simple if we allow brute force solutions. They just become stacked loops.
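For the coin problem, those stacked loops can be written generically with `itertools.product` (a sketch; impractical past small targets, but trivially correct):

```python
from itertools import product

def min_coins_brute(coins, target):
    # Stacked loops: try every count of each coin, from 0 up to
    # target // coin, and keep the combination using fewest coins.
    ranges = [range(target // c + 1) for c in coins]
    best = None
    for counts in product(*ranges):
        if sum(c * k for c, k in zip(coins, counts)) == target:
            n = sum(counts)
            if best is None or n < best:
                best = n
    return best
```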

  • No. They use sophisticated algorithms called propagators to prune invalid values from the domains of possible solutions, in conjunction with a search strategy like branch and bound.

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9).

That's a bad algorithm, then, not a greedy algorithm. Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?

If a candidate's only options are to either use a constraint solver or to implement a naïver-than-usual greedy algorithm, well, sorry, but that's a no-hire.

  • There is no greedy solution to the problem. A greedy algorithm would start by taking three 10-cent coins toward 37, which is wrong.

  • > Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?

    Yes, and it won't work on the problem described. The greedy algorithm only works on certain sets of coins (US coin denominations are one of those sets), and fails in at least some cases with other coin sets (as illustrated in the bit you quoted).

  • The algorithm they're using must be "Until you hit the limit, take the highest denomination coin that fits beneath the limit. If you can't hit the limit, fall back one step."

    That fits your definition of "use as many coins as possible of a given large denomination before dropping back to the next-lower denomination" but will find 10-10-10-1-1-1-1-1-1-1 and stop before it even tries 10-9-anything.
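    A sketch of the no-backtracking version of that recipe in Python, which indeed produces 10 coins for 37 from [10, 9, 1] where 4 would suffice:

```python
def greedy_change(coins, target):
    # Take as many of the largest remaining denomination as fit,
    # then move down; never revisit an earlier choice.
    count = 0
    for c in sorted(coins, reverse=True):
        take, target = divmod(target, c)
        count += take
    return count if target == 0 else None
```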

  • By the way, ChatGPT was able to solve this problem and give the correct solution.

    • It's in numerous algorithms textbooks and probably a lot of code repositories, so that's not surprising.

My beef with someone using a constraint solver here is that they almost certainly wouldn't be able to guarantee anything about their solution other than that, if it produces an output, it will be correct. They won't be able to guarantee running time, space usage, or (probably for most tools) even a useful progress indicator. The problem isn't merely that they used another tool - the problem is that they abstracted away critical details. Had they provided a handwritten solution from scratch with the same characteristics, it would've exhibited the same problems.

This doesn't mean they can't provide a constraint solver solution, but if they do, they'd better be prepared to address the obvious follow-ups. If they're prepared to give an efficient solution afterward in the time left, then more power to them.