Comment by aorth
9 hours ago
Brilliant! I hate it. The author will surely admit that there was "joy" in creating this suite of software, but it's a different kind of joy than most of us here would recognize. I am looking forward to being part of the group of detractors doing things the old way, similar to the "small web" or other countercultures on the Internet. I fantasize about being here to pick up the pieces after everyone else went all-in on AI-assisted everything and lost their critical thinking capacity, programming skills, knowledge of the Unix command line, etc.
There is part of me that understands the appeal of the all-in AI and personalized-software approach. It's a bit cyberpunk! In terms of open-source software, though, the downsides outweigh the benefits in my opinion. Important principles like community ownership and commitment are absent, and the approach is even radically antisocial. And then there are the inevitable issues with maintainability, to say nothing of dependence on big tech companies.
To each their own, but this is not for me.
I read somewhere (in the myriad blog posts dealing with this Cambrian LLM explosion) that software developers could be put into two camps: those that just want the thing to exist, and those that want to build and understand the thing (in addition to wanting it to exist).
those in the first camp are having a great time.
those in the second camp (which is how you're describing yourself, and how I'd describe myself) are wary and suspicious.
it is somewhat paradoxical, we've watched/read sci-fi/cyberpunk for years and dreamed of this kind of world. after all, when did you see any members of the Enterprise writing code? they just asked the computer to "write a subroutine" and that was that. what a world!
but here we are, with the craft in danger, not entirely impressed by the idea of "just ask and walk away".
i, too, fear losing my critical thinking, raw skills, and design sense, even as i imagine being one of the few (in 2, 3, 5, 10 years) who didn't abdicate their cognition, their craft, to the tech overlords.
but i wonder if it will matter. i wonder if "source code" will become a deep abstraction that nobody thinks about anyway, similar to how 99% of us don't care (or need to care) what the machine code we eventually emit looks like.
in any case, i'll keep my thinking for now.
> I read somewhere (…) that software developers could be put into two camps (…)
Surely you read it more than once, because that has become a talking point. It’s a false dichotomy that, you’ll notice, is most often used by the people who put themselves in the first camp to steer the conversation. By framing it as “there are two camps, they’re just different, neither is better”, it lends legitimacy to their position.
You don't have to pick one camp over the other. Good, high quality craft makes good products.
> after all, when did you see any members of the Enterprise writing code?
When did you see anyone in any media taking a dump, or sleeping, or doing any of the boring bits? Rarely, because if it’s not relevant to the story they don’t show it, but it doesn’t mean it didn’t happen.
I’m more of a DS9 fan, and I remember them having computer problems all the time. O’Brien, despite being highly competent and the chief of engineering with a team, was constantly overworked.
And their computers were infinitely superior to the LLMs we have now. When they gave you an answer, you could be confident it was correct. And if they didn’t know, they’d tell you!
I think a notable difference is that the AI that is portrayed in most sci-fi (that I have read/watched anyway) tend to be "logical machines" that act deterministically based on the data available to them.
What we got are "statistical machines" that tend to do the right thing under the right conditions, but can go completely off the rails every now and then.
The former are more akin to a generalization of computers as we typically think of them, whereas the latter is something else. Maybe that something else is closer to human behavior in some ways, but it is also very different: with humans, you get to know people, build relationships, and learn who to trust in what ways; with an LLM, you can never really trust it with any critical task without close supervision.
I kinda like the woodworking analogy of this.
I, in theory, can plane a piece of wood with a hand planer. But I'll never do it again, we did it at school in ye olden times before the millennium and it was boring then as it is boring now.
I know people who get satisfaction from it: they take one sliver off with the hand planer, feel the wood with their hand, and figure out the perfect angle for the next tiny sliver of wood to come off, repeating the process over and over again.
I, personally, will just feed the damn plank to a mechanical planer with the exact specs of the resulting board set up. I just want the board smooth so I can get to the next step of the process. I'm not doubting the "wood-slop" the machine produces, I can see and measure if it's good enough or not. I don't need to be involved in the process.
We're both making a table, mine will be done faster. It might not be hand-crafted to perfection, but it will hold the stuff I intend to put on it just fine. If I find out it sucks later on, I can make a new one that's slightly better or fix the existing one. My goal was a functional product, not a piece of handcrafted art.
Your analogy is not really apples to apples though, is it?
Closer would be: if there were a table-making machine where you just push a button and something like a table comes out, would you still be a woodworker? You haven't planed, measured, cut, or jointed anything; you've only pressed "make me a table".
I don't think the analogy works. You're focusing on the "how", not the "what". Using a mechanical planer, you still need to dial in numbers yourself. You design your own table, the more modern tools just make it easier to realize your vision.
Another example: I enjoy writing with a good pen. But whether I write by pen or on a keyboard, it's still me writing it.
However, AI does basically all the real work, only leaving you to guide it. Make a table? AI gives you one with 2 legs. More legs? Guess I can live with 5 legs.
And you wouldn't be making that table, AI is. You cannot have pride in something that you never made yourself. It's the same as 3D-printing something from Thingiverse and claiming you made it.
People who create AI blog posts are not writing. Those that prompt their way to a piece of software are not doing software engineering. The ones that generate AI images are not being artists.
I'm in the second camp.
Part of it is that the whole point of going into this industry is that I love coding and have been doing it since I was 8.

Part of it is that I'm a control freak, and it makes me uncomfortable to have to trust AI-generated code. Sure, I already trust interpreters and compilers, but those are much more deterministic, and they don't generally do anything I have to be wary of.

Part of it is that anytime I've used Claude to write stuff (using Opus 4.7 via an API key), I've had to handhold it through simple things (telling it repeatedly that a given column doesn't exist in Snowflake's task history table, and eventually just giving up and taking it out by hand) and had to remove tons of completely pointless Python code it generates. The big difference is that the people in the first camp don't seem to care enough to check. Someone at my company used Claude to write 20k lines of code this past Friday. No way he read and scrutinized all of that in one day.
The other big thing I've noticed is that a lot of the people using it extensively seem to just be spitting out API endpoint after endpoint: endless CRUD with some light business logic. Yeah, it's not too hard to automate that with AI without any major issues. Hell, back when Ruby on Rails was hot, it was so fast to write those kinds of things that I could spin them up as fast as AI does now. Full websites or APIs in an hour or two, because its syntactic sugar and scaffolding did what AI does with the FastAPI codebases I see these days. You could go from an ER diagram to a working app in minutes sometimes. I don't care that much if that kind of work is automated.
I was in the second camp until last summer, having been hand-writing code since 1979.