Comment by SupremumLimit
1 day ago
The case the article tries to make doesn’t stack up for me.
What you get when it becomes easier to generate code/applications is a whole lot more code and a whole lot more noise to deal with. Sure, some of it is going to be well crafted – but a lot of it will not be.
It’s like the mobile app stores. Once these new platforms became available, everyone had a go at building an app. A small portion of them are great examples of craftsmanship – but there is an ocean of badly designed, badly implemented, trivial, and copycat apps out there as well. And once you have that kind of abundance, it creates a whole new class of problems for users, and potentially for developers too.
The other thing is, it really doesn’t align with the priorities of most companies. I’m extremely skeptical that any of them will suddenly go: “Right, enough of cutting corners and tech debt, we can really sort that out with AI.”
No, instead they will simply direct the extra capacity towards new features, new products, and trying to get more market share. Complexity will spiral, all the cut corners and tech debt will still be there, and the end result will be things even further down the hole.
Unless I’m totally misreading the article, it’s saying what you’re saying and then using that as an argument for why we should care about quality. It isn’t saying quality will necessarily happen. It’s saying that because there will be a whole lot more noise, it will be important to focus on quality, and those who don’t will drown in complexity.
I don't know if it's been documented or studied, but the availability argument seems like a fallacy to me. It just opens the floodgates, and you get a flood that is 90% low-effort attempts and not much more. The old world, where the barrier was higher, guaranteed that only interesting things happened.
There seems to be a corollary to what you're saying in when (in the US) we went from three major television networks to many cable networks, or, later, when streaming video platforms like YouTube and Netflix began to proliferate and take hold: the barriers to entry dropped for creators, and the market fragmented. There is still quality creative content out there, some of it as good as or better than ever. But finding it, and finding people to share the experience of watching it with, is harder.
Same could be said of traditional desktop software development and the advent of web apps I suppose.
I guess I'm not that worried, other than being worried about personally finding myself in a technological or cultural eddy.
Trivially, fewer interesting things happen if the barrier is to some degree incidental, because it blocks interesting attempts along with everything else.
I think the more pressing issues are costs: opportunity cost, sunk cost, signal to noise ratio.
Increasing the energy input to a closed system increases its entropy.
Why on earth do people expect that attaching GPU farms to render characters into their codebase will not only not increase its entropy, but lower it?
I wrote this many years ago, when I moved from Symbian (with very few apps available) to Android, which had a lot of apps but where I had to spend several hours to find a half-decent one.
> No, instead they will
The article is making a normative argument. It is not saying what people "will" do but instead what they "should" do.
It's Carlyle's idea of "the cheap and nasty" in the age of software.