Comment by Bukhmanizer

20 hours ago

This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out.

I use AI everyday for coding. But if someone so obviously puts this little effort into their work that they put out into the world, I don’t think I trust them to do it properly when they’re writing code.

I wrote it myself. But the irony isn't lost on me. "Who did what" is kind of the whole point of the article. Appreciate the feedback.

  • FWIW I reported your post to the mods because it reads as completely AI-generated to me. My judgement was that it might have been slightly edited but is largely verbatim LLM output.

    Some tells that you might wanna look at in your writing, if you truly did write it yourself without any LLM input, are these contrarian/pivoting statements. Your post is full of these, and it is imo the most classic LLM writing tell atm. These are mostly variants of the "It's not X, but Y" theme:

    - "Not whether they've adopted every tool, but whether they're curious"

    - "I still drive the intuition. The agents just execute at a speed I never could alone."

    - "The model doesn't save you from bad decisions. It just helps you make them faster."

    - "That foundation isn't decoration. It's the reason the AI is useful to me in the first place."

    - "That's not prompting. That's engineering"

    It is also telling that the reader basically can't take a breather: each sentence tries to emphasize harder than the last. There is no filler thought, no getting sidetracked. It reads unnaturally; humans do not usually think like this.

    • The LLMs are training "us" now.

      First we develop the machines, then we contort the entire social and psychic order to serve their rhythms and facilitate their operation.

  • FWIW I thought it read fine and enjoyed the take. As I'm exploring more AI tooling I'm asking myself some of the same questions.

  • Yours is maybe the first good post on managing a team of AIs that I've read. There is no spoon.

    I've been shifting from being the know-it-all coder who fixes all of the problems to a middle manager of AIs over the past few months. I'm realizing that most of what I've been doing for the last 25 years of my career has largely been a waste of time, due to how the web went from being an academic pursuit to a profit-driven one. We stopped caring about how the sausage was made, and just rewarded profit under a results-driven economic model. And those results have been self-evidently disastrous for anyone who cares about process or leverage IMHO. So I ended up being a custodian solving other people's mistakes which I would never make, rather than architecting elegant greenfield solutions.

    For example, we went from HTML being a declarative markup language to something imperative. Now, rather than designing websites as if we were writing them in Microsoft Word and exporting them to HTML, we write C-like code directly in the build product and pretend that's as easy as WYSIWYG. We have React where we once had content management systems (CMSs). We have service-oriented architectures rather than solving scalability issues at the runtime level. I could go on... forever. And I have, in countless comments on HN.

    None of that matters now, because AI handles the implementation details. Now it's about the executive function to orchestrate the work — an area I'm finding I'm exceptionally weak in, due to a lifetime of skirting burnout as I endlessly put out fires without the option to rest.

    So I think the challenge now is to unlearn everything we've learned. Somehow, we must remember why we started down this road in the first place. I'm hopeful that AI will facilitate that.

    Anyway, I'm sure there was a point I was making somewhere in this, but I forgot what it was. So this is more of a "you're not alone in this" comment I guess.

    Edit: I remembered my point. For kids these days, immersed in this tech matrix we let consume our psyche, it's hard to realize that other paradigms exist. It's much easier to label thinking outside the box as slop. In the age of tweets — I mean x's, or whatever the heck they are now — long-form writing looks sus! Man, I feel old.

  • Yeah, I came here to ask if you're Vibe Writing as well ;)

    I wasn't quite sure though. Sometimes it's clearly GPT, sometimes clearly Claude, and this article was like a blend.