Comment by dang
17 hours ago
This is so exactly right and I've been saying it to whoever will put up with me...(and now am embarrassed I have no link to show for it. oh well, shame is good for writing. envy too!)
Software production is now so easy that everything is a .emacs file (pronounced "dot emacs" btw): meaning, each individual has their own entirely personal, endlessly customizable software cocoon. As tptacek says in the OP, it's "easier to build your own solution than to install an existing one" - or to learn an existing one.
Another good analogy, not by coincidence, is to Lisp in general. The classic knock against it—one I never agreed with but used to hear all the time—is that Lisp with its macros is so malleable that every programmer ends up turning it into their own private language which no one else can read.
Tangential to that was Mark Tarver's 2007 piece "The Bipolar Lisp Programmer", which had much discussion over the years (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6726702).
There's something about this whole situation that rhymes with the issue of LLM-generated prose. It's not that GPT 5.5 writes bad prose (I mean, it doesn't write good prose, but it's not awful). It's that once I pick up on the text being GPT 5.5's, my brain switches into a mode where it starts reminding me "this is just GPT output, you could just ask GPT 5.5 these questions yourself, and get answers better tailored to what you want to know". Why am I reading this one particular artifact of a conversation with the LLM? Once I know what the conversation is about, I can just have a better one myself.
Same deal with a lot of this software. I guess there's some "taste" to it, but mostly what you care about are the ideas and the "recipe".
Also, you should just do a monthly "Vibe HN" thread.
Those are great points and it leads right back to the solipsism thing. Also, you snuck a "It's not that X, it's that Y." in there. Nice.
> you should just do a monthly "Vibe HN" thread
It wouldn't stop people from feeding them into the Show HN stream, which is the problem. If we had a good enough way to tell them apart, we could factor them into two streams, but we don't yet.
> It wouldn't stop people from feeding them into the Show HN stream, which is the problem. If we had a good enough way to tell them apart, we could factor them into two streams, but we don't yet.
But it would allow for a culture to grow where the posters would self-contain their submissions into those threads.
I don't want to make this about people's faith or whatever, but to put it frankly: I've heard a lot of contemporary Christian music, I don't care for it, and [I like to think] I can reliably recognize it in three notes or fewer,¹ which may or may not bear out in rigorous testing but saves me a lot of time either way. This feels like a strong parallel to the topic at hand.
1. erring on the side of sounding cooler
> As tptacek says in the OP, it's "easier to build your own solution than to install an existing one" - or to learn an existing one.
I can install WhatsApp in a few tens of seconds. You most definitely spent more time than that writing this comment.
Would you mind sharing a video of you building a custom WhatsApp in less time? Not even starting to think about getting other people to talk to you on your instantly-built messaging solution...
> Even more importantly, what happens to teamwork?
I can concur with that line of thinking. We used to pair- and group-program on my team; we had a "Zoom office". Now it has become "let me take this ticket and feed it to Claude, you try the same thing with Copilot, and then we compare the results", or "I'll make a PR with my clunker, you use yours to review it". This shit honestly feels almost pointless. Pair programming is absolutely dead. Who wants to watch me run several agents, trying to fix multiple things in different worktrees, while I juggle them around and fix inconsistencies in my agents.md?
I've been pushing the idea of building self-governing, fully autonomous cloud pipelines so we'd stop playing "stupid tokenomy" games, and it seems my management is just quietly trying to keep it down, because I think there's a simple understanding: the moment that shit proves airworthy and actually can fly, a bunch of them are guaranteed to lose their cushy seats.
> Even more importantly, what happens to teamwork? If we are all a BBM now—or rather, if we all have personal armies of BBMs, permanently locked in a manic state, springloaded at all hours to generate things for us-and-only-us—how do we work together? How do cocoons communicate, interoperate? What does a team of ai solipsists look like? It sounds oxymoronic.
One example of teamwork is how the programmers and researchers worked together to build the Unix system (https://www.cs.dartmouth.edu/~doug/reader.pdf). It is not a product but an environment optimized for building tools and solving practical problems with tools written in C (while BBMs were busy with Lisp in Boston ;-)
C++ is a totally different story and you need an IDE for that.
If WASM succeeded in being the one universal ABI, it could be the perfect successor to the Unix pipe for the AI age: Wasm modules for libraries that double as terminal tools. One can only imagine.
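A minimal sketch of that dual-use idea (all names hypothetical): the same Rust source works as a classic Unix-style filter when built natively, and compiled with `--target wasm32-wasi` (or `wasm32-wasip1` on newer toolchains) it becomes a WASI module that a host runtime could either link as a library or run as a CLI.

```rust
// One crate, two faces. The pure function is the "library ABI";
// `main` makes the same binary usable as a terminal filter.
use std::io::{self, Read, Write};

/// The "library" surface: a pure function any host could call.
pub fn shout(input: &str) -> String {
    input.to_uppercase()
}

/// The "terminal tool" surface: behaves like a classic Unix filter,
/// reading stdin and writing the transformed text to stdout.
fn main() -> io::Result<()> {
    let mut buf = String::new();
    io::stdin().read_to_string(&mut buf)?;
    io::stdout().write_all(shout(&buf).as_bytes())
}
```

Natively this pipes like any other tool (`echo hi | shout`); under a WASI runtime such as wasmtime, the identical module can be invoked the same way or have `shout` called directly.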
I highly agree with the pro-Lisp sentiment. The main article that comes to mind while reading this was also posted a little while back on this forum: https://isene.org/2026/05/Audience-of-One.html
So cool to see a dang comment that's just a comment, rather than a moderation comment.
Tarver's piece was new to me, and fun, and spot on. Yes, LLMs bring the emacs cruft heap to the masses. A throwaway culture on disk is a lot less worrisome than one on soil.
I wrote a little bit about my experience with this sort of stuff a little while back if you're interested:
https://news.ycombinator.com/item?id=47393437
I would add to that a few more open questions that I haven't seen addressed:
- As more engineers (and non-engineers) pick up coding agents, everyone is authenticating multiple MCP servers, creating an n * n explosion of complexity that is impossible to centralise. Multiply this by the number of distinct coding agents on every platform, and visibility becomes very tough. A lot of platforms also don't support scopes, so you can't enforce safety short of a network proxy, I suppose.
- For non-developers mainly, there's a lack of mental models: <agent> for Y desktop app does not imply that there is a local LLM running on your machine. I suppose it's a question of trust and education versus starting conservative and progressively onboarding, where we're doing more of the former.
- We talk a bit about the idea of sharing prompts, but fundamentally a prompt does not in itself contain quality. I've had internal tools I've made where it's mentioned that Claude made them, and yes, to a degree, but I did many iterations using my own taste to refine things and held opinions about how things should operate. Giving someone a prompt won't inherently guarantee anything of quality. I often think about the idea of, e.g., giving a screenshot of GitHub to an LLM: in a way, you're asking it to create a clone, not of what exists today but of a dead echo of the design taste and choices made years ago that persist today. You can create things cheaply, but without taste and good judgment, how can you continue to evolve them in a way that isn't like the "draw the rest of the horse" meme?
- I personally wonder about the tokenmaxxing stories you hear from other companies and, logically, what happens to glue roles? Does someone like Microsoft just stack-rank on token count and fire those who actually get work done? I suppose they already hollow out knowledge anyway, so maybe it's nothing new.
- Definitely the thing with internal tooling where eventually you generate so much that you fundamentally have no mental model of it. That's fine for non-critical stuff, and I'm kind of coming around to the idea that it's actually a better position to have no idea of the code and a strong "theory" of how a thing should work than to fully understand the code and have zero "theory". Ideally both, of course.
Anyway, this isn't a comprehensive ramble, but I've also been a bit disappointed that there hasn't been more talk about the second-order effects. Many things can be true at once: you can see value in LLMs while still being critical of them and the whole DC situation, e.g. Colossus 1.
> easier to build your own solution than to install an existing one
seriously?
Maybe it’s just another cocoon, but I’ve been working on a framework for modular CLIs which allows different humans or agents to spin up different features simultaneously, with some enforcement of a shared dictionary, aliases, help, logging, formatting, semantic parsing, and a few other things.
It works, it’s powerful, and it’s certainly one way to answer the question you pose. I would argue it’s the optimal answer (it’s an answer to RPC, REST, and MCP at the same time), but at minimum it’s an example of an answer and an approach. In any case, it is a good question and something I’ve given a lot of thought to.
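The framework itself isn't shown here, but a toy sketch of the shared-registry idea it describes (one dictionary of command names, aliases, and uniform help enforced across contributors) might look like this; all identifiers are hypothetical:

```rust
use std::collections::HashMap;

// Each feature (human- or agent-written) registers commands here; the
// registry enforces the shared pieces: names, aliases, and help text.
type Handler = fn(&[&str]) -> String;

struct Registry {
    handlers: HashMap<&'static str, (Handler, &'static str)>, // name -> (fn, help)
    aliases: HashMap<&'static str, &'static str>,             // alias -> canonical name
}

impl Registry {
    fn new() -> Self {
        Registry { handlers: HashMap::new(), aliases: HashMap::new() }
    }

    fn register(&mut self, name: &'static str, alias: &'static str,
                help: &'static str, f: Handler) {
        self.handlers.insert(name, (f, help));
        self.aliases.insert(alias, name);
    }

    // Resolve aliases to canonical names, then dispatch.
    fn dispatch(&self, cmd: &str, args: &[&str]) -> String {
        let name = self.aliases.get(cmd).copied().unwrap_or(cmd);
        match self.handlers.get(name) {
            Some((f, _)) => f(args),
            None => format!("unknown command: {cmd}"),
        }
    }

    // Uniform help comes for free from the shared dictionary.
    fn help(&self) -> String {
        let mut lines: Vec<String> = self.handlers.iter()
            .map(|(name, (_, h))| format!("{name}: {h}"))
            .collect();
        lines.sort();
        lines.join("\n")
    }
}

fn main() {
    let mut reg = Registry::new();
    reg.register("greet", "g", "say hello", |args| format!("hello {}", args.join(" ")));
    println!("{}", reg.dispatch("g", &["world"])); // alias resolves to "greet"
    println!("{}", reg.help());
}
```

The point of the sketch is only the coordination mechanism: features stay independent, but every command, alias, and help string flows through one registry, which is what keeps separate humans or agents from drifting into private dialects.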
Unfortunately, in the age we’re in now, there’s something lackluster about sharing any solution or design you have. The architecture and design of what I’m describing came 0% from AI, but everything is assumed to be, and is therefore unimportant? Still, it is a direct answer to your question, so if anyone’s curious, let me know.