Comment by thorum

3 days ago

Humorous that this article has a strong AI writing smell - the author should publish the prompts they used!

I don’t like to accuse, and the article is fine overall, but this stinks: “This transparency transforms git history from a record of changes into a record of intent, creating a new form of documentation that bridges human reasoning and machine implementation.”

  • > I don’t like to accuse, and the article is fine overall, but this stinks:

    Now consider your reasonable instinct not to accuse other people, coupled with the possibility of someone setting AI loose with “write a positive article about AI where you have some paragraphs about the current limitations based on this link. write like you are just following the evidence.” Meanwhile we are supposed to sit here and weigh every word.

    This reminds me to write a prompt for a blog post: how AI could be used to make personal-looking “tech guy who meditates and runs” websites. (Do we have the technology? Yes we do.)

  • Also: "This OAuth library represents something larger than a technical milestone—it's evidence of a new creative dynamic emerging"

    Em-dash baby.

    • The sentence itself is a smeLLM. Grandiose pronouncements aren't a bot exclusive, but man do they love making them, especially about creative paradigms and dynamics.

    • I have used Em-dashes in many of my comments for years. It's just a result of reading books, where Em-dashes happen a lot.

    • Can we please stop using the em-dash as a metric to “detect” LLM writing? It’s lazy and wrong. Plenty of people use em-dashes, it’s a useful punctuation mark. If humans didn’t use them, they wouldn’t be in the LLM training data.

      There are better clues, like the kind of vague pretentious babble bad marketers use to make their products and ideas seem more profound than they are. It’s a type of bad writing which looks grandiose but is ultimately meaningless and that LLMs heavily pick up on.

  • > this stinks: “This transparency transforms git history from a record of changes into a record of intent, creating a new form of documentation that bridges human reasoning and machine implementation.”

    That's where I stopped reading. If they needed "AI" for turning their git history into a record of intent ("transparency"), then they had been doing it all wrong, previously. Git commit messages have always been a "form of documentation that bridges human reasoning" -- namely, with another human's (the reader's) reasoning.

    If you don't walk your reviewer through your patch, in your commit message, as if you were teaching them, then you're doing it wrong.

    Left a bad taste in my mouth.

I did human notes -> had Claude condense and edit -> then edited manually. A few of the sentences (like the stinky one quoted above) were from Claude, which I kept when they matched my own thoughts, though most were changed for style/prose.

I'm still experimenting with it. I find it can't match my style at all, and even with the manual editing it still "smells like AI," as you picked up. But it also saves time.

My prompt was essentially "here are my old blog posts, here's my notes on reading a bunch of AI generated commits, help me condense this into a coherent article about the insights I learned"

  • I wonder if those notes wouldn’t have been more interesting as-is, and possibly also more condensed.

    • I wish there were a way to opt-out of LLM generated text and see the prompt. In any context. It's always more informative, more human, more memorable, more accurate, and more representative of what the author was actually trying to convey.

  • Makes sense; I could see the human touch in the article too, so I figured it was something like that.