Comment by divvvyy

9 hours ago

Wild tale, but very annoying that he wrote it with an AI. It's horribly jarring to read.

The page background slowly fades in and out with a blue color. At first I thought my eyes were playing tricks on me.

How do you know?

I'm not trying to be recalcitrant; I'm genuinely curious. The reason I ask is that no one talks like an LLM, but LLMs do talk like someone. LLMs learned to mimic human speech patterns, and some unlucky soul(s) out there have had their voice stolen. Earlier versions of LLMs that more closely followed the pattern and structure of a Wikipedia entry were mimicking a style based on someone else's, and given that some wiki users had prolific levels of contributions, much of their naturally written text would register as highly likely to be "AI" via those bullshit AI-detector tools.

So, given what we know of LLMs (transformers, at least) at this stage, it seems more likely to me that current speech patterns are again mimicry of someone's style rather than an organically grown/developed thing that is personal to the LLM.

  • Looks like AI to me too. Em dashes (albeit nonstandard) and the ‘it’s not just x, it’s y’ ending phrases were everywhere. Harder to put into words but there’s a sense of grandiosity in the article too.

    Not saying the article is bad, it seems pretty good. Just that there are indications.

  • This blog post isn't human speech, it's typical AI slop. (heh, sorry.)

    Way too verbose to get the point across, excessive usage of un/ordered bullets, em dashes, "what i reported / what coinbase got wrong", it all reeks of slop.

    Once you notice these micro-patterns, you can't unsee them.

    Would you like me to create a cheat sheet for you with these telltale signs so you have it for future reference?

  • Sorry, but I think you just don't know a lot about LLMs. Why did they start spamming code with emojis? It's not because that's what people actually do, i.e., something that's in the training data. It's because someone reinforcement-learned the LLM into doing it by asking clueless people whether they prefer code with emojis.

    And so at this point the excessive bullet points and similar filler trash are also just an expression of whatever stupid people think they prefer.

    Maybe I'm being too harsh, and it's not that the raters in this setup are stupid; rather, it's the people who thought you could improve an LLM by asking raters to make a few very thin judgements.
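
    To be concrete about what "asking people if they prefer it" means mechanically: the reward models used in this step are typically trained with a pairwise preference loss over rater choices. A toy sketch in PyTorch (made-up data and a linear stand-in for the reward model; this is the textbook Bradley-Terry objective, not any lab's actual code):

        import torch
        import torch.nn.functional as F

        torch.manual_seed(0)

        # Toy stand-in for a reward model: scores a fixed-size "response"
        # feature vector. (Real reward models score token sequences.)
        reward_model = torch.nn.Linear(16, 1)
        opt = torch.optim.SGD(reward_model.parameters(), lr=0.1)

        # Fake rater data: each row pairs a preferred response with a
        # rejected one ("which reply do you like better?").
        chosen = torch.randn(32, 16)
        rejected = torch.randn(32, 16)

        for _ in range(100):
            r_chosen = reward_model(chosen).squeeze(-1)
            r_rejected = reward_model(rejected).squeeze(-1)
            # Bradley-Terry pairwise loss: raise the score of whatever
            # the raters clicked on (emoji-laden code included).
            loss = -F.logsigmoid(r_chosen - r_rejected).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()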

  • Just chiming in here - any time I've written something online that considers things from multiple angles or presents more detailed analysis, the likelihood that someone will ask if I just used ChatGPT goes way up. I worry that people have gotten really used to short, easily digestible replies and conflate that with "human". Because of course it would be crazy for a human to expend "that much effort" on something /s.

    EDIT: having said that, many of the other articles on the blog do look like what would come from AI assistance. Stuff like pervasive emojis, overuse of bulleted lists, excessive use of very small sections with headers, art that certainly appears similar in style to AI-generated assets I've seen, etc. If anything, if AI was used in this article, it's way less intrusive than in the other articles on the blog.

    • Author here - yes, this was written using guided AI. I consider this different from giving a vague prompt and telling it to write an article. My process was to provide all the information. For example, I used AI to:

      1. transcribe the phone call into text using a Whisper model (sketched below)
      2. review all the email correspondence
      3. research industry news about the breach
      4. brainstorm different topics and blog structures to target based on the information, and pick one
      5. review the style of my other blog articles
      6. write the article and redact any personal info
      7. review the article and iterate on changes multiple times

      To me this is more akin to having a writer on staff who can save you a lot of time. I can do all of the above in less than 30 minutes, where it could take a full day manually. I had a blog 20 years ago, but since then I never had time to write content again (too time-consuming and no ROI) - so the alternative would be nothing.
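
      Step 1, for instance, is only a few lines with a speech-to-text model. A rough sketch using the open-source openai-whisper package (a sketch, not my exact script; the filename is a placeholder):

          import whisper  # pip install openai-whisper (also needs ffmpeg)

          # Load a speech-to-text checkpoint; "base" is small and fast.
          model = whisper.load_model("base")

          # Transcribe the recorded support call (placeholder filename).
          result = model.transcribe("support_call.mp3")
          print(result["text"])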

      There are still some signs that tell you content is AI-written: verbosity, use of bold, specific HTML styling, etc. I see no issue with the approach. I've noticed some people have an allergic reaction to any hint of AI, and when the content produced is "fluff" with no real substance I get annoyed too - however that isn't the case for all content.

      2 replies →

    • You're getting downvoted for being right. Attempt being nuanced and people will call you a robot.

      Well, if that's how we identify humans, I for one prefer our new LLM overlords.

      A lot of people who say stuff like "boo AI!" are not only setting the bar for humanity very low, they're also discouraging intellectualism and intelligent discourse online. Honestly, if an LLM wrote a good think piece, I'd prefer that over "human slop".

      I just wish people would critique a text on its own merits instead of inventing strawman arguments about how it was written.

      Oh and, for the provocative effect — I'll end my comment with an em dash.

I don't know if he wrote it via AI, but he repeats himself over and over again. It could have been 1/3 the length and still conveyed the same amount of information.

I know I shouldn’t pile on with respect to the AI Slop Signature Style, but in the hopes of helping people rein in the AI-trash-filter excesses and avoid reactions like these…

The sentence-level stuff was somewhat improved compared to whatever “jaunty Linked-In Voice” prompt people have been using. You know, the one that calls for clipped repetitive phrases, needless rhetorical questions, dimestore mystery framing, faux-casual tone, and some out-of-proportion “moral of the story.” All of that’s better here.

But there’s a good ways left to go still. The endless bullet lists, the “red flags,” the weirdly toothless faux drama (“The Call That Changed Everything”, “Data Catastrophe: The 2025 Cyber Fallout”), and the Frankensteined purposes (“You can still protect yourself from falling victim to the scams that follow,” “The Timeline That Doesn't Make Sense,” etc.)…

The biggest thing that stands out to me here (besides the essay being five different-but-duplicative prompt/response sessions bolted together) is the set of assertions/conclusions that would mean something if real people drew them, but that don't follow from the specifics. Consider:

“The Timeline That Doesn't Make Sense

Here's where the story gets interesting—and troubling:

[they made a report, heard back that it was being investigated, didn’t get individual responses to their follow-ups in the immediate days after, the result of the larger investigation was announced 4 months later]”

Disappointing, sure. And definitely frustrating. But like… “doesn’t make sense”? How does it not? Is it really surprising or unreasonable that a major investigation into a foreign contractor, with law enforcement and regulatory implications as well as nine-figure customer-facing damages, takes a large organization time? Doesn’t it make sense (even if it’s disappointing) that when something that serious and complex happens, they wait until they’re sure before saying anything to an individual customer?

I’m not saying it’s good customer service (they could at least drop a reply with "the investigation is ongoing and we can't comment until it's done"). There are lots of words we could use to capture the suckage besides "doesn't make sense." My issue is more that the AI presents it as "interesting—and troubling; doesn't make sense" when those things don't really follow directly from the bullet list of facts afterward.

Each big categorical that the AI introduced this way just… doesn’t quite match what it purports to describe. I’m not sure exactly how to pin it down, but it’s as if it’s making its judgments entirely without considering the broader context… which I guess is exactly what it’s doing.

Many people find whining about coherent, meaningful text based on the source identity to be far more annoying than reading coherent, meaningful text.

But I guess you knew that already, which is why you just made a fresh burner account to whine on rather than whining from your real account.

  • Coherent? It's really annoying to read.

    The post just repeats things over and over again, like the Brett Farmer thing, the "four months", telling us three times that they knew "my BTC balance and SSN" and repeatedly mentioning that it was a Google Voice number.

    • Almost sounds like the posts of people whining about LLMs.

      Of course, unlike those people, LLMs are capable of expressing novel ideas that add meaningful value to diverse conversations beyond loudly and incessantly ensuring everyone in the thread is aware of their objection to new technology they dislike.

      3 replies →