
Comment by jackfranklyn

1 month ago

The "doing it badly" principle changed everything for me. I spent weeks planning the perfect architecture for some automation tools I was building. Then I just... stopped planning and built the ugly version that solved my own pain point.

What surprised me was how much the ugly first version taught me that planning never could. You learn what users actually care about (often not what you expected), which edge cases matter in practice, and what "good enough" looks like in context.

The hardest part is giving yourself permission to ship something you know is flawed. But the feedback loop from real usage is worth more than weeks of hypothetical architecture debates.

While I do agree with the content, this tone of writing feels awfully similar to LLM generated posts that flood some productivity subreddits recently. Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves? I don't buy that.

Commenter's history is full of 'red flags':

- "The real cost of this complexity isn't the code itself - it's onboarding"
- "This resonates."
- "What actually worked"
- "This hits close to home"
- "Where it really shines is the tedious stuff - writing tests for edge cases, refactoring patterns across multiple files, generating boilerplate that follows existing conventions."

  • > While I do agree with the content, this tone of writing feels awfully similar to LLM generated posts

    > Commenter's history is full of 'red flags': - "The real cost of this complexity isn't the code itself - it's onboarding" - "This resonates."

    Wow, it's obvious in the full comment history. What is the purpose of this stuff? Do social marketing services maintain armies of bot accounts that just build up credibility by doing normal-ish comments, so they can be called on later like sleeper cells for marketing? On Twitter I already have to scroll down to find the one human reply on many posts.

    And when the bots get a bit better (or people get less lazy prompting them; I'm pretty sure I could prompt to avoid this classic prose style) we'll have no chance of knowing what's a bot. How long until the majority of the Internet is essentially a really convincing version of r/SubredditSimulator? When I stop being able to recognize the bots, I wonder how I'll feel. They would probably be writing genuinely helpful/funny posts, or telling a touching personal story I upvote, but it's pure bot creative writing.

    • Building up karma, for its own sake or to gain the right to flag politically disagreeable content

    • > Do social marketing services maintain armies of bot accounts that just build up credibility by doing normal-ish comments, so they can be called on later like sleeper cells for marketing?

      Russia and Israel are known to have run full-time operations doing this for well over a decade. By Twitter's own account, 25% of users were bots back in 2015 (their peak user year). Even here on HN, if you go look at the most trafficked Israel/Palestine threads, there are lots of people complaining about getting modded into oblivion, the conversation being turned neutral/pro-Israel, and negative comments being silenced by a ghost army of modders.

  • > The tone of writing feels awfully similar to LLM.

    This particular piece is LinkedIn “copy pasta” with many verbatim or mildly variant copies.

    Example: https://www.linkedin.com/posts/chriswillx_preparing-to-do-th...

    And in turn, see: https://strangestloop.io/essays/things-that-arent-doing-the-...

    Relatedly, LLMs clearly picked the "LinkedIn influencer" style up.

    My guess is there's some cross-over between those who write this way on LinkedIn and those who engage with chatbot A/B testing or sign up for the human reinforcement learning / fine-tuning / tagging jobs, training a preference for it into the models.

  • > Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves? I don't buy that.

    I understand that it's not the main point in your comment (you're trying to determine if the parent comment was written using an LLM), but yes, we do exist: I've spent years planning personal projects that remain unimplemented. Don't underestimate the power of procrastination and perfectionism. Oliver Burkeman ("Four Thousand Weeks", etc.) could probably explain that dynamic better than me.

    • Fascinating how differently people can work.

      My struggle is having enough patience to do any planning before I start building. As soon as there's even the remote hint of a half-baked idea in my head, it's incredibly tempting to just start building and figure out stuff as I go along.

      1 reply →

  • I didn't catch it immediately, but after you pointed it out I totally agree. That comment is for sure LLM-written. Whether that involved a human in the loop or was fully automated, I cannot say.

    We currently live in the very thin sliver of time where the internet is already full of LLM writing, but where it's not quite invisible yet. It's just a matter of time before those Dead Internet Theory guys score another point and these comments are indistinguishable from novel human thought.

    • > … the internet is already full of LLM writing, but where it's not quite invisible yet. It's just a matter of time …

      I don't think it will become significantly less visible⁰ in the near future. The models are going to hit the problem of being trained on LLM-generated content, which will slow the growth in their effectiveness quite a bit. It is already a concern that people are trying to develop mitigations for, and I expect it to hit hard soon unless some new revolutionary technique pops up¹².

      > those Dead Internet Theory guys score another point

      I'm betting that us Habsburg Internet predictors will have our little we-told-you-so moment first!

      --------

      [0] Though it is already hard to tell when you don't have your thinking head properly on sometimes. I bet it is much harder for non-native speakers, even relatively fluent ones, of the target language. I'm attempting to learn Spanish and there is no way I'd see the difference at my level in the language (A1, low A2 on a good day) given it often isn't immediately obvious in my native language. It might be interesting to study how LLM generated content affects people at different levels (primary language, fluent second, fluent but in a localised creole, etc.).

      [1] and that revolution will likely be in detecting generated content, which will make generated content easier to flag for other purposes too, starting an arms race rather than solving the problem overall

      [2] such a revolution will pop up, it is inevitable, but I think (hope?) the chance of it happening soon is low

    • To me it seems like it'd only get more visible as it gets more normal, or at least more predictable.

      Remember back in the early 2000s when people would photoshop one animal's head onto another and trick people into thinking "science has created a new animal"? That obviously doesn't work anymore because we know that's possible, even relatively trivial, with Photoshop. I imagine the same will happen here: as AI writing gets more common, we'll begin a subconscious process of determining if the writer is human. That's probably a bit unfairly taxing on our brains, but we survived Photoshop, I suppose.

      2 replies →

  • This reminds me of how bad browsing the internet will likely get this year. There are a ton of 'Cursor for marketing' style startups going online now that basically spam every acquisition channel possible.

    Not sure about this user specifically, but interesting that a lot of their comments follow a pattern of '<x> nailed it'

    • This is true, but reading critically, especially on the internet, has become an indispensable skill anyway.

      Psy-ops, astroturfing, now LLM slop.

  • > Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves?

    Ironically, I see this very often with AI/vibe coding, and whilst it does happen with traditional coding too, it happens with AI to an extreme degree. Spend 5 minutes on twitter and you'll see a load of people talking about their insane new vibe coding setup and next to nothing of what they're actually building

    • Still would love to see somebody with a fresh install of Windows set up their vibe coding suite and then build something worthwhile.

  • > Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves?

    Probably. I've been known to spend weeks planning something that I then forget and leave completely unstarted because other things took my attention!

    > Commenter's history is full of 'red flags'

    I wonder how much these red flags are starting to change how people write without LLMs, to avoid being accused of being a bot. A number of text checking tools suggested replacing ASCII hyphens with m-dashes in the pre-LLM-boom days¹ and I started listening to them, though I no longer do. That doesn't affect the overall sentence structure, but a lot of people jump on m-/n- dashes anywhere in text as a sign, not just in “it isn't <x> - it is <y>” like patterns.

    It is certainly changing what people write about, with many threads like this one being diverted into discussing LLM output and how to spot it!

    --------

    [1] This is probably why there are many of them in the training data, so they are seen as significant by tokenisation steps, so they come out of the resulting models often.

    • It’s already happening. This came up in a webinar attended by someone from our sales team:

      > "A typo or two also helps to show it’s not AI (one of the biggest issues right now)."

      2 replies →

  • I'm not so sure. There's a fair amount of voice and first person in their writing. I wonder if they just use LLMs so much that the language and style of LLMs have rubbed off on them.

Yeah; this is such a hard intuition to teach beginners. And something I think will be lost as we move more and more toward vibe coding.

There is so much to be learned about a problem - and programming in general - by implementing stuff and then refactoring it into the ground. Most of the time the abstractions I think up at first are totally wrong. Like, I imagine my program will model categories A, B and C. But when I program it up, the code for B and C is kinda similar. So I combine them, and realise C is just a subset of B. And sometimes then I realise A is a distinct subset of B as well, and I rewrite everything. Or sometimes I realise B and C differ in one dimension, and A and B in another. And that implies there's a fourth kind of thing with both properties.

Do this enough and your code ends up in an entirely unrecognisable place from where you started. But very, very beautiful.
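The A/B/C collapse described above can be sketched concretely. This is a hypothetical illustration (the domain and field names are invented, not from the comment): two orthogonal properties replace three ad-hoc categories, and enumerating the combinations names the fourth kind of thing the original model was missing.

```python
from dataclasses import dataclass

# Hypothetical refactor sketch: the first design imagined three
# distinct categories A, B and C. Implementation revealed that C is
# just a B with one flag set, and A differs from B along another
# axis. Collapsing them into one type with two orthogonal booleans
# also exposes the "fourth kind of thing" the A/B/C model lacked.

@dataclass(frozen=True)
class Thing:
    archived: bool  # invented axis that distinguished A from B
    shared: bool    # invented axis that distinguished C from B

# The four combinations cover the whole design space:
A = Thing(archived=True, shared=False)
B = Thing(archived=False, shared=False)
C = Thing(archived=False, shared=True)
fourth = Thing(archived=True, shared=True)  # unnamed in the old model
```

The rewrite is smaller than three separate classes, and any code that handled "B-like things" now handles all four cases for free.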

> What surprised me was how much the ugly first version taught me that planning never could.

Fred Brooks, author of “The Mythical Man-Month” (1975), wrote a chapter in it called “Plan to Throw One Away”.

He argues much what you’ve described.

Of course, in reality we seldom do actually throw away the first version. We’ve got the tools and skills and processes now to iterate, iterate, iterate.

+1. If you can get positive feelings from doing something badly, I think that gives a real improvement to one’s life. “The first step to getting good is being bad”.

Of course you’ll also maintain the satisfaction of doing something well.

> The hardest part is giving yourself permission to ship something you know is flawed. But the feedback loop from real usage is worth more than weeks of hypothetical architecture debates.

Nice statement.

I think there is another equally pervasive problem: balancing between shipping something and strategizing a complete "operating system" but in the eyes of OTHER stakeholders.

I'm in this muck now, working with an insurance co that's building internal tools. On one hand we have a COO who wants an operating model for everything and what feels like strategy/process diagrams as proof of work.

Meanwhile I am encouraging not overplanning and instead building stuff, shipping, seeing what works, iterating, etc.

But that latter version causes anxiety as people "don't know what you're doing" when, in fact, you're doing plenty but it's just not the slide-deck-material things and instead the tangible work.

There is a communication component too, of course. Almost an entirely separate discipline.

I've never arrived at acceptable comfort on either side of this debate but lean towards "perfect is the enemy of good enough"

Depends what "doing it badly" means.

The most important aspect of software design, at least with respect to software that you intend not to completely throw away and will be used by at least one other person, is that it is easy to change, and remains easy to change.

Whether it works properly or not, whether it's ugly and hacky or not, or whether it's slow... none of that matters. If it's easy to change you can fix it later.

Put a well thought out but minimal API around your code. Make it a magic black box. Maintain that API forever. Test only the APIs you ship.
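One way to read that advice as code (a minimal sketch; the function names here are invented for illustration): hide the hacky internals behind one small function whose signature you commit to, and point tests only at that signature.

```python
# Minimal stable API over deliberately rough internals
# (illustrative sketch; names are invented for this example).

def _ugly_lookup(key: str) -> int:
    # Hacky, slow, replaceable: a linear scan over a hard-coded
    # list. Because it is private, it can be rewritten at any time.
    for k, v in [("a", 1), ("b", 2)]:
        if k == key:
            return v
    raise KeyError(key)

def get_value(key: str) -> int:
    """The black box: this signature is what we maintain and test."""
    return _ugly_lookup(key)
```

Tests exercise only `get_value`, never `_ugly_lookup`, so the internals stay free to be fixed, sped up, or thrown away later without breaking anything downstream.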

I guess the important (and hard) part is to not make a categorical error and mix up design of high level functionality and UI with the plumbing underneath it.

The plumbing also needs iteration and prototyping, but sound, forward looking decisions at the right time pay dividends later on. That includes putting extra effort and thinking into data structures, error handling, logging, naming etc. rather earlier than later. All of that stuff makes iterating on the higher levels much easier very quickly.

I completely agree. I adopted the proverb "everything worth doing is worth doing poorly" about a year ago now; it took some time for it to sink in, but now I'm actually productive. My main blocker was waiting for others' approval, and now I feel a lot more free.

I've forgotten where I've seen this now, but one of the best developers I've seen wrote code by writing it, deleting everything, then writing it again, sometimes many times in order to get their final code. I found it fascinating.

  • To me, that is the only way to write code.

    One of my friends calls it "development-driven development".

> ship something you know is flawed

There is a difference between shipping something that works but is not perfect, and shipping something knowingly flawed. I’m appalled at this viewpoint. Let’s hope no life, reputation or livelihood depends on your software.

  • This is the right point to mention "How Big Things Get Done" by Bent Flyvbjerg. You can iterate your design without putting lives into danger.

    "I spent weeks planning" -- using the terminology from that book: No, you didn't spend weeks planning, you spent weeks building something that you _thought_ was a plan. An actual plan would give you the information you got from actually shipping the thing, and in software in particular "a model" and "the thing" look very similar, but for buildings and bridges they are very different.

For my personal projects, which are under zero time constraints, I usually build an ugly version, to figure out the kinks. Then delete it and write a proper one using the lessons I learned the first time.

I want to do this with a multiplayer online game I'm working on, but you just can't do it wrong and have it actually work :/

Yes, but the experience you're describing is just getting stuck due to insufficient experience architecting a solution.

Not saying this is you, but it's so easy for people to give up and sour into hyper-pragmatists competing to become the world's worst management. Their insecurities take over and they actively suppress anyone trying to do their job by insisting everything be rewritten by AI, or push hard for no-code solutions.

This nails my issue with systems design insanity. There are so many things you learn through living with systems that are correct, though counterintuitive.

Do a thing. Write rubbish code. Build broken systems. Now scale. Then learn how to deal with the patterns changing as domain-specific patterns emerge.

I watched this at play with a friend's startup. He couldn't get response times within the window needed for his third-party integration. After some hacking, we opted to cripple his webserver. Turns out you can slice out mass amounts of the HTTP protocol (and, in turn, server overhead) and still meet all of your needs. Sure, it needs a recompile, but it worked and scaled far more than anything else they did. Their exit proved that point.
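A sketch of what slicing out most of the HTTP protocol can look like (illustrative only; written in Python rather than whatever the startup actually compiled, and the port and response body are invented): the server never parses headers, supports no keep-alive or chunking, and emits one fixed-shape HTTP/1.0 response.

```python
import socket

def build_response(body: bytes) -> bytes:
    # The only response shape we ever produce: HTTP/1.0, then close.
    return (
        b"HTTP/1.0 200 OK\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Connection: close\r\n"
        b"\r\n" + body
    )

def serve(port: int = 8080) -> None:
    # Deliberately crippled server: drain the request without parsing
    # it, answer, close. Enough for one known third-party integration,
    # with almost none of the usual per-request protocol overhead.
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.recv(4096)  # request is read, never parsed
                conn.sendall(build_response(b"ok"))
```

This is obviously not a general web server, but when you control both ends of the integration, "works for our one client" is all the protocol you need.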