Comment by xmodem

5 days ago

> A robust stdlib or framework is in line with what I'm suggesting, not a counterexample.

Maybe I didn't argue this well, but my point is that it's a spectrum. What about libraries in the Java ecosystem like Google's Guava and Apache Commons? These are not stdlibs, but they almost might as well be. Every non-trivial Java codebase I've worked in has pulled in Guava and at least some of the Apache Commons libraries. Unless you have some other mitigating factor or requirement, I think it would be silly not to pull these in as dependencies the first time you encounter something they solve. They're still large codebases, though, 99% of which you'll never use.
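
To make "something they solve" concrete, here's roughly the kind of one-liner utility I have in mind. The snippet is just an illustrative sketch I made up, not from any real codebase:

```java
import java.util.List;

import com.google.common.base.Joiner;
import com.google.common.base.Splitter;
import org.apache.commons.lang3.StringUtils;

public class UtilityLibrariesExample {
    public static void main(String[] args) {
        // Guava Splitter: parse a messy comma-separated string without
        // hand-rolling the trimming/empty-segment edge cases.
        List<String> tags = Splitter.on(',')
                .trimResults()
                .omitEmptyStrings()
                .splitToList("alpha, , beta ,gamma");

        // Guava Joiner: the inverse, without a manual StringBuilder loop.
        String joined = Joiner.on(", ").join(tags);

        // Commons Lang: null-safe blank check instead of
        // s != null && !s.trim().isEmpty().
        System.out.println(StringUtils.isBlank("   ")); // true
        System.out.println(joined);                     // alpha, beta, gamma
    }
}
```

None of that is hard to write by hand, but once Guava and Commons are already on the classpath, you'd never bother.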

I don't feel my position on this is black-and-white. It is not always correct to solve a problem by adding a new dependency, and in the situation you describe, adding a sprawling UI framework would be a mistake. Maybe the situation is different in front-end land, but I don't see how AI really shifts that balance. My colleagues were not doing anything bad or wrong by copying that incorrect code; tasked with displaying a human-readable file size, I would probably either write out the boundaries by hand or copy-paste the first reasonable-looking result from Stack Overflow without much thought, too.
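
For reference, "write out the boundaries by hand" looks something like this; a quick hand-rolled sketch with units and rounding picked arbitrarily, not the actual code my colleagues copied:

```java
public class FileSizeFormat {
    // Binary units, boundaries written out by hand. The cutoffs and rounding
    // here are arbitrary choices for the sake of the example.
    private static final String[] UNITS = {"B", "KiB", "MiB", "GiB", "TiB", "PiB"};

    public static String humanReadable(long bytes) {
        if (bytes < 1024) {
            return bytes + " B";
        }
        double value = bytes;
        int unit = 0;
        while (value >= 1024 && unit < UNITS.length - 1) {
            value /= 1024;
            unit++;
        }
        return String.format("%.1f %s", value, UNITS[unit]);
    }

    public static void main(String[] args) {
        System.out.println(humanReadable(999));        // 999 B
        System.out.println(humanReadable(1536));       // 1.5 KiB
        System.out.println(humanReadable(10_000_000)); // 9.5 MiB
    }
}
```

(And if I remember right, commons-io already has FileUtils.byteCountToDisplaySize for exactly this, which is kind of the point.)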

> At no point have I advised copying code from libraries instead of importing them.

I didn't say copying, though; I said replicating. If you ask AI to implement something that appears in its training data, there is a high probability it will produce something that looks very similar, and even a non-zero possibility it will replicate it exactly. Setting aside value judgements, that is functionally the same as a copy, even if the process that produced it was not copying.

Sure, by all means use whatever is the best tool for the job. I never said not to; I've consistently said the opposite of that.

My position is that where a developer might have historically said "ideally I'd do X, but given my current timeline and resource constraints, doing Y with some new dependency Z would be the better short-term option", today that tradeoff would be influenced by the lower and decreasing cost of ideal solution X.

Maybe you understood my initial comment differently. If you're saying you disagree with that, then either you believe X is never ideal (where X is any given solution to a problem that doesn't involve installing a new dependency), which is a black-and-white position, or you disagree that AI is ever capable of actually reducing the cost of X, in which case I can tell you from experience that you'd be incorrect.

> If you ask AI to implement something that appears in its training data

This qualifier undermines everything that comes after. On what basis are you assuming that an exact implementation of X would always appear in the training data? It's a hypothetical unspecified widget; it could be anything.

> Maybe the situation is different in front-end land

Frontend definitely has more obvious examples of X. There are many scenarios where implementing an isolated UI component that does exactly what you need, without any clear vulnerabilities, wouldn't be that complicated; in the past, though, it would have saved time to build on top of a third-party component that was only a subset or variation of that UI, even when that wasn't the optimal long-term solution.

My comment isn't frontend-specific, but maybe frontend illustrates the principle more clearly. Backend examples tend to be more niche and system-specific, but the same tradeoff logic applies there too, e.g. in areas like custom middleware or data-processing utilities.

Ultimately, the crux of what I'm saying has nothing to do with what those X and Y scenarios are. Continuing to bring up scenarios where dependencies are useful is a non sequitur to my original comment, which was that AI gives us a lot more optionality on this front.