I think we all just need to avoid the trap of using AI to circumvent understanding. I think that’s where most problems with AI lie.
If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.
But if we’re serious about avoiding the trap of letting AI write working code we don’t understand, then AI can be very useful. Unfortunately, the trap is very alluring.
A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.
I'd say the new problem is knowing when understanding is important and where it's okay to delegate.
It's similar to other abstractions in this way, but on a larger scale, due to LLMs having so many potential applications. And of course, due to the non-determinism.
My argument is that understanding is always important, even if you delegate. But perhaps you mean sometimes a lower degree of understanding may be okay, which may be true, but I’d be cautious on that front. AI coding is a very leaky abstraction.
We already see the damage of a lack of understanding when we have to work with old codebases. These behemoths can become very difficult to work in over time as the people who wrote them leave, and new people don’t have the same understanding needed to make good, effective changes. This slows down progress tremendously.
Fundamentally, code changes you make without understanding them immediately become legacy code. You really don’t want too much of that to pile up.
I'm writing a blog post on this very thing actually.
Outsourcing learning and thinking is a double-edged sword that comes back to bite you later. It's tempting: you might already know a codebase well and set agents loose on it. You know enough to evaluate the output well. This is the experience that has impressed a few vocal OSS authors, antirez for example.
Similarly, you see success stories with folks making something greenfield. Since you've delegated decision making to the LLM and gotten a decent-looking result, it seems like you never needed to know the details at all.
The trap is that your knowledge of why you built things the way you did atrophies very quickly. Then suddenly you become fully dependent on AI to make any further headway. And you're piling slop on top of slop.