Comment by danielbln

20 hours ago

Now take Google away, and LSP. And the computer. Write CTEs with a pencil or bust.

I'm exaggerating of course, and I hear what you're saying, but I'd rather hire someone who is really, really good at squeezing the most out of current-day AI (read: not vibe-coding slop) than someone who can do the work manually without assistance or FizzBuzz on a whiteboard.

I think the point is how can you squeeze anything out of the AI without knowing the stuff at a deep enough level?

  • Being able to memorise things that are easily looked up (like syntax) doesn’t demonstrate deep knowledge. It’s a bad interview question.

• I mean, maybe these juniors are geniuses, but I often find it very non-obvious why LLM-generated code is wrong, and spotting it requires me to have an even deeper knowledge. Sometimes the code is correct, but overly complicated.

      One small example was a coworker who generated random strings with AI using `dd count=30 if=/dev/urandom | tr -c "[a-z][A-Z]" | base64 | head -c20` instead of just `head -c20 /dev/urandom | base64`. I didn't actually know `dd` beyond the fact that it's used for writing to USB sticks, so I suddenly became really unsure whether I was missing something and needed to double-check the documentation. All that to say: if you vibe-code, you really need to know what you're generating, and to keep in mind that others will need to be able to read and understand what you've written.
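      For what it's worth, the simpler approach is easy to sanity-check on its own (a minimal sketch, assuming GNU coreutils; the character counts follow from base64 expanding every 3 input bytes to 4 output characters):

      ```shell
      # 20 random bytes, base64-encoded: always 28 printable characters.
      simple=$(head -c 20 /dev/urandom | base64)

      # If exactly 20 characters are wanted, trim after encoding
      # (read a few extra bytes so there is enough output to trim):
      trimmed=$(head -c 32 /dev/urandom | base64 | head -c 20)

      echo "${#simple} ${#trimmed}"   # prints "28 20"
      ```

      No `dd`, no `tr`, and anyone reading it can predict exactly what it produces.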

  • Ask most folks about the code generated by the compiler or interpreter and you’ll get blank stares. Even game devs now barely know assembly, much less efficient assembly.

    There is still a place for someone who will rewrite your inner loops in hand-tuned assembly, but most coding is about delivering on functional requirements. And using tools to do that, AI or not, tends to be the prudent path in many if not most cases.

    • Apart from the whole argument that compilers are deterministic and LLMs are not.

      You don't collaborate on compiled code. Compiled output is an artifact. But you do collaborate on source code, so whatever you write, someone else (or future you) will need to understand and alter it. That's what the whole maintainability, testability, etc. is about. And that's why code is a liability: it takes time for someone else to understand it. So the less you write, the better (with some tradeoffs around complexity).

    • I don't think these are comparable though. Compiler generation is deterministic and more or less provably correct. LLM code is a mile away from that.

For your examples, honestly, yeah. A dev should be familiar with the basic concepts of their language and tech stack. So yes, they should be able to understand a basic snippet of code without Google, an LSP, or even a computer. They should even be able to "write CTEs with a pencil and paper". I don't expect them to get the syntax perfect, but they should know the basic tools and concepts well enough to produce something at least semantically correct. And they certainly should be able to understand the code produced by an AI tool for a take-home toy project.

I say this as someone who would definitely be far less productive without Google, LSP, or Claude Code.

  • I’ve written huge queries and CTEs in my career. But I haven’t done it recently. Personally, I’d need 10 minutes of Google time to refresh my memory before being able to write much SQL on paper, even with bad syntax. It doesn’t mean I’m a bad engineer because I don’t bother to memorise stuff that’s easily googleable.

> I'd rather hire someone [...] than someone who can do the work manually without assistance or fizz buzz on a whiteboard

And the reason for you to do that would be to punish the remaining bits of competence in the name of "the current thing"? What's your strategy?