Comment by vishnugupta

12 hours ago

Revolutionary or not, it was very nice of the author to make the time and effort to share their workflow.

For those starting out with Claude Code, it gives a structured way to get things done, bypassing the time/energy needed to “hit upon something that several of us have evolved to naturally”.

It's this line that I'm bristling at: "...the workflow I’ve settled into is radically different from what most people do with AI coding tools..."

Anyone who spends some time with these tools (and doesn't black out from smashing their head against their desk) is going to find substantial benefit in planning with clarity.

It was #6 in Boris's run-down: https://news.ycombinator.com/item?id=46470017

So, yes, I'm glad that people write things out and share. But I'd prefer that they not lead with "hey folks, I have news: we should *slice* our bread!"

  • But the author's workflow is actually very different from Boris's.

    #6 is about using plan mode whereas the author says "The built-in plan mode sucks".

    The author's post is much more than just "planning with clarity".

    • For some time now, Claude Code's plan mode has also written the plan out to a file that you can edit. For me they're located in ~/.claude/plans/; in fact, there's a whole history of plans there.

      I sometimes reference some of them to build context, e.g. after a few unsuccessful tries at implementing something, so that Claude doesn't try the same thing again (see the sketch just below).
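
      Here's a minimal sketch of how you might surface the most recent plan files to pull one back into context. It assumes the plans are markdown files under ~/.claude/plans/, which is just what I see locally and may differ on your setup:

        # list the newest plan files so you can reference one in your
        # next prompt; assumes plans live in ~/.claude/plans/ as *.md
        # (path from my setup -- adjust for yours)
        from pathlib import Path

        plans = sorted(Path.home().glob(".claude/plans/*.md"),
                       key=lambda p: p.stat().st_mtime, reverse=True)
        for p in plans[:5]:
            print(p)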

    • > The author's post is much more than just "planning with clarity".

      Not much more, though.

      It introduces "research", which has been a central topic for LLMs since they first arrived. I mean, LLMs are what popularized the term "hallucination" and turned grounding into a key concept.

      In the past, building up context was thought to be the right way to approach LLM-assisted coding, but that concept is dead and has proven to be a mistake: it was like debating the best way to force a round peg through a square hole, piling up expensive prompts to try to bridge the gap. Nowadays it's widely understood that it's far more effective, and far cheaper, to just refactor and rearchitect apps so that their structure is unsurprising and grounding issues stop being a problem.

      And planning mode: every single LLM-assisted coding tool has built planning in as its central flow, one that explicitly supports iteration and manual edits of the plan. What's novel about the blog post?

  • I would say he’s saying “hey folks, I have news. We should slice our bread with a knife rather than the spoon that came with the bread.”

  • > Anyone who spends some time with these tools (and doesn't black out from smashing their head against their desk) is going to find substantial benefit in planning with clarity.

    That's obvious by now, and the reason why all mainstream code assistants now offer planning mode as a central feature of their products.

    It was baffling to read the blogger making claims about what "most people" do when anyone using code assistants already does it. I mean, the so-called frontier models are very expensive and time-consuming to run; there's a very natural pressure to make each run count. Why on earth would anyone presume people don't put some thought into those runs?

These kinds of flows have been documented in the wild for some time now. They started popping up in the Cursor forums 2+ years ago, e.g.: https://github.com/johnpeterman72/CursorRIPER

Personally, I have been using a similar flow for almost 3 years now, tailored to my needs. Everybody who uses AI for coding eventually gravitates towards a similar pattern because it works quite well (across IDEs, CLIs, and TUIs).

It's AI-written though; the tells are in pretty much every paragraph.

  • I don’t think it’s that big a red flag anymore. Most people use AI to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah, it’s AI-written.”

    • >Most people use AI to rewrite or clean up content

      I think your sentence should have been "people who use AI do so mostly to rewrite or clean up content", but even then I'd question the statistical truth behind that claim.

      Personally, seeing something written by AI tells me that the person who wrote it did so just for looks and not for substance. Claiming to be a great author requires both penmanship and communication skills, and delegating either of them to a large language model inherently makes you less than that.

      However, when the point is just the contents of the paragraph(s) and nothing more, then I don't care who or what wrote it. An example is the results of research: I certainly won't care about the prose or the effort put into writing the thesis, only about the results (is this about curing cancer now and forever? If yes, no one cares whether it was written with AI).

      That being said, there's still the matter of getting anywhere close to understanding the author behind the thoughts and opinions. I believe the way someone writes hints at the way they think and act. In that sense, using LLMs to rewrite something to make it sound more professional than how you would actually talk in the same context makes it hard for me to judge someone's character, professionalism, and mannerisms. It almost feels like they're trying to mask part of themselves. Perhaps they lack confidence in their ability to sound professional and convincing?

    • > I don’t think it’s that big a red flag anymore. Most people use AI to rewrite or clean up content, so I’d think we should actually evaluate content for what it is rather than stop at “nah, it’s AI-written.”

      Unfortunately, there are a lot of people trying to content-farm with LLMs; this means that whatever style they default to is automatically suspect of being a slice of "dead internet" rather than some new human discovery.

      I won't rule out the possibility that even LLMs, let alone other AI, can help with new discoveries, but they are definitely better at writing persuasively than at being inventive. That means I'm forced to use "looks like LLM" as a proxy for both "content farm" and "propaganda that may work on me", even though some percentage of this output won't even be LLM-written, and some of what is may be both useful and novel.

    • I don't judge content for being AI written, I judge it for the content itself (just like with code).

      However, I do find the standard out-of-the-box style very grating. Call it faux-chummy LinkedIn corporate-workslop style.

      Why don't people give the LLM a steer on style, either based on their own personal style or at least on a writer whose style they admire? That should be easy enough.

    • Even though I use LLMs for code, I just can't read LLM-written text. I kind of hate the style; it reminds me too much of LinkedIn.

    • There's a very high chance that someone who's using Claude to write code is also using Claude to write a post from some notes. That goes beyond rewriting and cleaning up.

    • ai;dr

      If your "content" smells like AI, I'm going to use _my_ AI to condense the content for me. I'm not wasting my time on overly verbose AI "cleaned" content.

      Write like a human, have a blog with an RSS feed and I'll most likely subscribe to it.

    • Well, real humans may read it, though. Personally, I much prefer real humans writing real articles over all this AI-generated spam-slop. On YouTube this is especially annoying: channels mix real videos with fake ones. I see it when I watch animal videos; some animal behaviour is taken from older videos, then AI fakery is added. My own policy is to never again watch anything from people who lie to the audience that way, so I've begun filtering out such lying channels. I'd apply the same rationale to blog authors (though I'm not 100% certain this one is actually AI-generated; I mention it only as a safeguard).

    • The main issue with evaluating content for what it is is how extremely asymmetric that process has become.

      Slop looks reasonable on the surface, and requires orders of magnitude more effort to evaluate than to produce. It’s produced once, but the process has to be repeated for every single reader.

      Disregarding content that smells like AI becomes an extremely tempting early filtering mechanism to separate signal from noise - the reader’s time is valuable.

    • > I don’t think it’s that big a red flag anymore.

      It is to me, because it indicates the author didn't care about the topic. The only thing they cared about was writing an "insightful" article about using LLMs. Hence this whole thing is basically LinkedIn resume-improvement slop.

      Not worth interacting with, imo

      Also, it's not insightful whatsoever. It's basically a retelling of other articles from around the time Claude Code was released to the public (March-August 2025).

    • If you want to write something with AI, send me your prompt. I'd rather read what you intended it to produce than what it actually produces. If I start to believe you regularly send me AI-written text, I will stop reading it. Even at work. You'll have to call me to explain what you intended to write.

    • I think it's very hard for us as humans to separate content from its form. So when the form is always the same boring, generic AI slop, it really doesn't help the content.

  • >the tells are in pretty much every paragraph.

    It's not just misleading — it's lazy. And honestly? That doesn't vibe with me.

    [/s obviously]

  • So is GP.

    This is clearly a standard AI exposition:

    LLMs are like unreliable interns with boundless energy. They make silly mistakes, wander into annoying structural traps, and have to be unwound if left to their own devices. It's like the genie that almost pathologically misinterprets your wishes.

  • Then ask your own AI to rewrite it so it doesn't trigger you into posting uninteresting, thought-stopping comments proclaiming why you didn't read the article, comments that don't contribute to the discussion.