Comment by palmotea

13 hours ago

> AI/LLM knowledge without programming knowledge can make a mess.

That makes sense.

> Programming knowledge without AI/LLM knowledge can also make a mess.

How? I'd imagine that most typically means continuing to program by hand. But even someone like that would probably know enough to not mindlessly let an LLM agent go to town.

> How? I'd imagine that most typically means continuing to program by hand.

I think the use of LLMs is assumed by that statement. The point is that even experienced programmers can get poor results if they're not aware of the tech's limitations and best-practices. It doesn't mean you get poor results by default.

There is a lot of hype around the tech right now; plenty of it is overblown, but a lot of it is also perfectly warranted. It's not going to make you "ten times more productive" outside of maybe laying the very first building blocks on a green field; the infamous first 80% that only takes 20% of the time anyway. But it does allow you to spend a lot more time designing and drafting, and a lot less time actually implementing, which, if you were spec-driven to begin with, has always been little more than a formality.

For me, the actual mental work never happened while writing code; it happened well in advance. My workflow hasn't changed that much; I'm just not the one who writes the code anymore, but I'm still very much the one who designs it.

  • Yes, I've seen many people become _too_ hands-off after an initial success with LLMs, and get bitten by not understanding the system.

    Hirers, above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is a learning curve to figuring out how to wield it.

"How?" <- It shows a lack of curiosity?

"probably know enough" <- that's exactly the point of the question: is the candidate clueless about AI/LLMs?

  • > "How?" <- It shows a lack of curiosity?

    We're talking about a codebase here. How does "lack of curiosity" about LLMs "make a mess"?

    > "probably know enough" <- that's exactly the point of the question, is the candidate clueless about AI/LLM.

    Probably knows enough about what's a good vs bad change. If you're "clueless about AI/LLM" but know a bad change when you see one, how do you "make a mess?"

    It's 2026, even a developer who's never touched an LLM before has heard about LLM hallucinations. If you've got programming knowledge, you should know how to make changes (e.g. you're not going to commit 200 files for a tiny change, because you know that doesn't smell right), which should guard against "making a mess."

    My point is that it doesn't seem reasonable to assume symmetry here: that if you don't know both things, you'll make a mess. That would also imply everything built before 2022 was a mess, because those developers knew programming but not LLMs, which is an unreasonable claim to make.