Comment by A_Venom_Roll

25 days ago

While I do agree with the content, this tone of writing feels awfully similar to LLM generated posts that flood some productivity subreddits recently. Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves? I don't buy that.

Commenter's history is full of 'red flags': "The real cost of this complexity isn't the code itself - it's onboarding"; "This resonates."; "What actually worked"; "This hits close to home"; "Where it really shines is the tedious stuff - writing tests for edge cases, refactoring patterns across multiple files, generating boilerplate that follows existing conventions."

> While I do agree with the content, this tone of writing feels awfully similar to LLM generated posts

> Commenter's history is full of 'red flags': "The real cost of this complexity isn't the code itself - it's onboarding"; "This resonates."

Wow, it's obvious in the full comment history. What is the purpose of this stuff? Do social marketing services maintain armies of bot accounts that just build up credibility by doing normal-ish comments, so they can be called on later like sleeper cells for marketing? On Twitter I already have to scroll down to find the one human reply on many posts.

And when the bots get a bit better (or people get less lazy about prompting them; I'm pretty sure I could prompt my way around this classic prose style) we'll have no chance of knowing what's a bot. How long until the majority of the Internet is essentially a really convincing version of r/SubredditSimulator? When I stop being able to recognize the bots, I wonder how I'll feel. They would probably be writing genuinely helpful/funny posts, or telling a touching personal story I upvote, but it would be pure bot creative writing.

  • Building up karma, for its own sake or to gain the right to flag politically disagreeable content

  • > Do social marketing services maintain armies of bot accounts that just build up credibility by doing normal-ish comments, so they can be called on later like sleeper cells for marketing?

    Russia and Israel are known to have run full-time operations doing this for well over a decade. By Twitter's own account, about 25% of users were bots back in 2015 (their peak user year). Even here on HN, if you go look at the most trafficked Israel/Palestine threads, there are lots of people complaining about getting modded into oblivion, conversations being steered toward neutral/pro-Israel positions, and negative comments being silenced by a ghost army of modders.

> The tone of writing feels awfully similar to LLM.

This particular piece is LinkedIn copypasta, with many verbatim or mildly varied copies in circulation.

Example: https://www.linkedin.com/posts/chriswillx_preparing-to-do-th...

And in turn, see: https://strangestloop.io/essays/things-that-arent-doing-the-...

Relatedly, LLMs clearly picked up the "LinkedIn influencer" style.

My guess is there's some cross-over between the people who write this way on LinkedIn and the people who engage with chatbot A/B testing or sign up for the human reinforcement learning / fine-tuning / tagging jobs, training a preference for the style into the models.

> Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves? I don't buy that.

I understand that it's not the main point in your comment (you're trying to determine if the parent comment was written using an LLM), but yes, we do exist: I've spent years planning personal projects that remain unimplemented. Don't underestimate the power of procrastination and perfectionism. Oliver Burkeman ("Four Thousand Weeks", etc.) could probably explain that dynamic better than me.

  • Fascinating how differently people can work.

    My struggle is having enough patience to do any planning before I start building. As soon as there's even the remote hint of a half-baked idea in my head, it's incredibly tempting to just start building and figure out stuff as I go along.

    • I totally get that. I have a super corpo buddy who tells me every project is 80% planning and he uses that philosophy for his personal projects. That makes sense for a huge company.

      I resist working like that because I am mega ignorant and I know I will encounter problems that I won't recognize until I get to them.

      But, I also HATE having to rework my projects because of something I overlooked.

      My (attempted) solution is to slog through a chat with an AI to build a Project Requirements Document and to answer every question it asks about my blind spots. It mostly helps me build stuff. And sometimes the friction prevents me from overloading myself with more unfinished projects!

I didn't catch it immediately, but after you pointed it out I totally agree: that comment is for sure LLM-written. Whether it involved a human in the loop or was fully automated, I cannot say.

We currently live in the very thin sliver of time where the internet is already full of LLM writing, but where it's not quite invisible yet. It's just a matter of time before those Dead Internet Theory guys score another point and these comments are indistinguishable from novel human thought.

  • > … the internet is already full of LLM writing, but where it's not quite invisible yet. It's just a matter of time …

    I don't think it will become significantly less visible⁰ in the near future. The models are going to hit the problem of being trained on LLM-generated content, which will slow the growth in their effectiveness quite a bit. It is already a concern that people are trying to develop mitigations for, and I expect it to hit hard soon unless some new revolutionary technique pops up¹ ².

    > those Dead Internet Theory guys score another point

    I'm betting that us Habsburg Internet predictors will have our little we-told-you-so moment first!

    --------

    [0] Though it is already sometimes hard to tell when you don't have your thinking head on properly. I bet it is much harder for non-native speakers of the target language, even relatively fluent ones. I'm attempting to learn Spanish and there is no way I'd see the difference at my level (A1, low A2 on a good day), given it often isn't immediately obvious even in my native language. It might be interesting to study how LLM-generated content affects people at different levels (primary language, fluent second, fluent but in a localised creole, etc.).

    [1] and that revolution will likely be in detecting generated content, which will make generated content easier to flag for other purposes too, starting an arms race rather than solving the problem overall

    [2] such a revolution will pop up, it is inevitable, but I think (hope?) the chance of it happening soon is low

  • To me it seems like it'd only get more visible as it gets more normal, or at least more predictable.

    Remember back in the early 2000s when people would photoshop one animal's head onto another and trick people into thinking "science has created a new animal"? That obviously doesn't work anymore, because we know that's possible, even relatively trivial, with Photoshop. I imagine the same will happen here: as AI writing gets more common, we'll begin a subconscious process of determining whether the writer is human. That's probably a bit unfairly taxing on our brains, but we survived Photoshop, I suppose.

    • We didn't really survive Photoshop.

      The obviously fake ones were easy to detect, and the less obvious ones took some sleuthing. But the good fakes fly completely under the radar: you literally have no idea how many of the images you see are well-doctored, because you can't tell.

      Same for LLMs in the near future (or perhaps already). What will we do when we realize we have no way of distinguishing man from bot on the internet?

This reminds me of how bad browsing the internet will likely get this year. There are a ton of 'Cursor for marketing' style startups going online now that basically spam every acquisition channel possible.

Not sure about this user specifically, but it's interesting that a lot of their comments follow the pattern '<x> nailed it'.

  • This is true, but the need to read critically especially on the internet has become an indispensable skill anyway.

    Psy-ops, astroturfing, now LLM slop.

> Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves?

Ironically, I see this very often with AI/vibe coding, and whilst it does happen with traditional coding too, it happens with AI to an extreme degree. Spend five minutes on Twitter and you'll see a load of people talking about their insane new vibe-coding setup and next to nothing about what they're actually building.

  • Still would love to see somebody with a fresh install of Windows set up their vibe-coding suite and then build something worthwhile.

> Are there really people who "spend weeks planning the perfect architecture" to build some automation tools for themselves?

Probably. I've been known to spend weeks planning something that I then forget and leave completely unstarted because other things took my attention!

> Commenter's history is full of 'red flags'

I wonder how much these red flags are starting to change how people write without LLMs, to avoid being accused of being a bot. A number of text-checking tools suggested replacing ASCII hyphens with em-dashes in the pre-LLM-boom days¹ and I started listening to them, though I no longer do. That doesn't affect the overall sentence structure, but a lot of people jump on em/en dashes anywhere in text as a sign, not just in “it isn't <x> - it is <y>” like patterns.

It is certainly changing what people write about, with many threads like this one being diverted into discussing LLM output and how to spot it!

--------

[1] This is probably why there are many of them in the training data, so they are seen as significant by tokenisation steps, so they come out of the resulting models often.

  • It’s already happening. This came up in a webinar attended by someone from our sales team:

    > "A typo or two also helps to show it’s not AI (one of the biggest issues right now)."

    • When it comes to forum posts, I think getting to the point quickly makes something worth reading whether or not it’s AI generated.

      The best marketing is usually brief.

I'm not so sure. There's a fair amount of voice and first person in their writing. I wonder if they just use LLMs so much that the language and style of LLMs have rubbed off on them.