Comment by michaelmior
19 hours ago
What threw me off is the expectation that I use the same variable names and exact same code structure. There are many ways to implement effectively the same thing. I understand that automatically validating arbitrary solutions would be very challenging, but memorizing exact fragments of code feels like it's optimizing for the wrong thing.
Some might consider that a kind of commentary on the leet code interview format.
After hearing people complain about these fearsome "leetcode interviews" for what feels like a decade now, I have to wonder when I am finally going to encounter one. All I get are normal coding problems.
One man's leet code is another man's simple programming question which involves minimal domain knowledge...
I've had candidates describe what I'd loosely call "warm-up" questions as leet code problems. Things like finding the largest integer in an array or figuring out if a word is a palindrome.
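For context, that tier of question is roughly this (a throwaway Python sketch, not anything from an actual interview):

    def largest(nums):
        # Largest integer in a non-empty list.
        best = nums[0]
        for n in nums[1:]:
            if n > best:
                best = n
        return best

    def is_palindrome(word):
        # True if the word reads the same forwards and backwards.
        return word == word[::-1]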
Thanks for taking the time to try it and write this up.
You are right that the current check still leans too much toward my reference solution. It already ignores formatting and whitespace, but it is still quite literal about structure and identifiers, which nudges you toward writing my version instead of your own. There are many valid ways to express the same idea and I do not want to lock people into only mine.
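In spirit, the current check is close to this simplified sketch (not the real code, just the idea): it throws away formatting and comments, then compares the remaining tokens, including identifier names, against the reference.

    import io
    import tokenize

    def literal_match(candidate: str, reference: str) -> bool:
        # Compare snippets token by token: formatting and comments are
        # ignored, but identifiers and structure must match the reference.
        def tokens(src):
            return [
                (tok.type, tok.string)
                for tok in tokenize.generate_tokens(io.StringIO(src).readline)
                if tok.type not in (
                    tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
                    tokenize.DEDENT, tokenize.COMMENT, tokenize.ENDMARKER,
                )
            ]
        return tokens(candidate) == tokens(reference)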
Where I want to take it is two clear modes. One mode tracks the editorial solution for people who want to learn that exact version for an interview, while still allowing harmless changes like different variable names and small structural tweaks. Another mode is more flexible and is meant to accept your own code as long as it is doing the same job. Over time the checker should be able to recognise your solution and adapt its objectives and feedback to what you actually wrote, instead of pushing you into my template. It should care more about whether you applied the right logic under time pressure than whether you matched my phrasing.
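One way the flexible mode could work (just a sketch of the idea, not the implementation) is to compare solutions after renaming identifiers into a canonical form, so two snippets with the same shape but different variable names come out equal:

    import ast

    class Canonicalise(ast.NodeTransformer):
        # Rename every identifier to a positional placeholder so that
        # solutions differing only in naming compare equal. A real checker
        # would leave builtins and imports alone; this sketch renames all.
        def __init__(self):
            self.names = {}

        def _rename(self, name):
            if name not in self.names:
                self.names[name] = f"v{len(self.names)}"
            return self.names[name]

        def visit_Name(self, node):
            node.id = self._rename(node.id)
            return node

        def visit_arg(self, node):
            node.arg = self._rename(node.arg)
            return node

    def same_shape(a: str, b: str) -> bool:
        # True if the two snippets have the same AST after canonical renaming.
        def canon(src):
            return ast.dump(Canonicalise().visit(ast.parse(src)))
        return canon(a) == canon(b)

With something like that, "for n in nums:" and "for x in nums:" count as the same solution, while a genuinely different approach still gets looked at more closely.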
There is also a small escape hatch already in the UI. If you completely blank or realise you have missed something, you can press the Stuck button to reveal the reference line and a short explanation, so you still move forward instead of getting blocked by one detail.
You are pushing exactly on the area I plan to invest in most. The first version is intentionally literal so the feedback is never vague, but the goal is for the checker to become more adaptive over time rather than rigid, so it can meet people where they are instead of forcing everyone through one exact solution.
This by itself completely un-sold me. Requiring such rote memorization is a hard pass for me; it seems the user should just be able to self-assess whether they got it “right” (like Anki cards).