Comment by Wowfunhappy

5 months ago

This article is so interesting, but I can’t shake the feeling it was written by AI. The writing style has that feel for me.

Maybe that shouldn’t bother me? Like, maybe the author would never have had time to write this otherwise, and I would never have learned about his experience.

But I can't help wishing he'd just written about it himself. Maybe that's unreasonable--I shouldn't expect people to do extra work for free. But if this happened to me, I would want to write about it myself...

It’s incredibly annoying to read. So many super short sentences with the “not just X. Also Y” format. Little hooks like “The attack vector?”

“Not fancy security tools. Not expensive antivirus software. Just asking my coding assistant…”

I actually feel like AI articles are becoming easier to spot. Maybe we’re all just collectively noticing the patterns.

  • I'm regularly asked by coworkers why I don't run my writing through AI tools to clean it up, and instead spend time iterating over it and re-reading, perhaps with a basic spell checker and maybe a grammar check.

    That's because, from what I've seen to date, it'd take away my voice. And my voice -- the style in which I write -- is my value. It's the same as with art... Yes, AI tools can produce passable art, but it feels soulless and generic and bland. It lacks a voice.

    • It also slopifies your work in a way that's immediately obvious. I can tell with high confidence when someone at work runs their email through ChatGPT and it makes me think less of the person now that I have to waste time reading through an overly verbose email with very little substance to it when they could have just sent the prompt and saved us all the time.

      1 reply →

    • I manage an employee from another country who speaks English as a second language. The way they learned English gives them a distinct speaking style that I personally find convincing, precise and engaging. I started noticing their writing losing that voice, so I asked if they were using an LLM, and they were. It was a tough conversation, because as a native English speaker I have it easy, so I tried to frame my side of the conversation as purely my personal observation: that I could see the change in tone and missed the old one. They've modified their use of LLMs to restore their previous style, but I still wonder if I was out of line socially for saying anything. English is tough, and as a manager I have a level of authority that is there even when I think it isn't. I don't know the point, except that I'm glad you're keeping your voice.

      2 replies →

    • I often ask AI to give only grammar and spelling corrections, and then only as a change set I apply manually. In other words, the same functionality as every word processor since…y2k?

      5 replies →

    • Every time you let AI speak for you, it gets better at sounding like you — and you get worse at it.

      That’s the trade: convenience for originality.

      The more you outsource your thoughts, your words, your tone — the easier it becomes to forget how to do it yourself.

      AI doesn’t steal your voice.

      It just trains you to stop using it.

      /a

    • I consider myself to be an above-average writer and a great editor. I will just throw my random thoughts about something that happened at work at it, ask ChatGPT to keep digging deeper into my question, and give it my opinion of what I should do. I ask it to give me the “devil’s advocate” and the “steel man” opinion and then ask it to write a blog post [1].

      I then edit it for tone, get rid of some of the obvious AI tells. Make some edits for voice, etc.

      Then I throw it into another session of ChatGPT and ask whether it sounds “AI written”. It will usually call out some things and give me “advice”. I take the edits that sound like me.

      Then I put the text through Grok and Gemini and ask them the same thing. I make more edits and keep going around until I am happy with it. By the time I’m done, it sounds like something I would write.

      You can make AI-generated prose have a “voice” with careful prompting, and by giving it some of your own writing.

      Why don’t I just write it myself if I’m going through all that? It helps me get over writers block and helps me clarify my thoughts. My editing skills are better than my writing skills.

      As I do it more and give it more writing samples, it is a faster process to go from bland AI to my “voice”

      [1] my blog is really not for marketing. I don’t link to it anywhere and I don’t even have my name attached to it. It’s more like a public journal.

      14 replies →

    • I agree. I use Grammarly for finding outright mistakes (spelling and the like, or a misplaced comma or something), but I don't listen to any of the suggestions for writing.

      I feel like when I try writing through Grammarly, it feels mechanical and really homogeneous. It's not "bad" exactly, but it sort of lacks anything interesting about it.

      I dunno. I'm hardly some master writer, but I think I'm ok at writing things that are interesting to read, and I feel Grammarly takes that away.

    • Your voice? The style in which you write? That's gold - no one can take that away from you. And honestly? You're brave for admitting that.

    • The thing is, ask it something right away and it'll use its own voice. Give it lots of data from your own writing, through examples and extrapolations on your speech patterns, and it will impersonate your voice better. It's like how it can impersonate Trump: it has lots of examples to pull from. You? It doesn't know you. LLMs need a large amount of input to give a really good output.

      4 replies →

    • I said almost exactly that to a coworker a few hours ago. My writing is me, it’s who I am. But I know that is not true for everyone, and in particular non-native speakers.

      I just detest that AI writing style, especially for business writing. It’s the kind of writing that leaves the reader less informed for the effort.

  • It's also exactly the type of writing you see on LinkedIn (yuck), so this article really goes full circle!

  • FTR I sometimes use AI to make my writing more "professional" because I rite narsty like

    I've recently had to say "My CV has been cleaned up with AI, but there are no hallucinations/misrepresentations within it"

    • If you have access to Microsoft Word, I'd customize the grammar checker settings to flag more than what is enabled by default. They have a lot of helpful rules that many are oblivious to because it's all buried deep in the preferences. Then adopt the stance of taking the green lines under advisement but ignore them if your original words suit your preference. That will get you polished up without submitting to AI editorial mundanity.

  • Honestly, the issue is that most people are poor writers. Even “good” professional writing, like the NY Times science section, can be so convoluted. AI writing is predictable now, but generally better than most human writing. Yet it can be irritating at the same time.

hey, I was almost hacked by someone pretending to be a legit person working for a legit looking company. They hid some stuff in the server side code.. could you turn this into a 10k words essay for my blog posts with hooks and building suspense and stuff? Thank you!

Probably how it went.

Edit: I see the author in the comments, it’s unfortunately pretty much how it went. The worst part is that the original document he linked would have been a better read than this AI slopified version.

I’d personally like to see these posts banned / flagged out of existence (AI posts, not the parent post).

It’s sort of the personal equivalent of tacky content marketing. Usually you’d never see an empty marketing post on the front page, even before AI, when a marketer wrote them. Now that the same sort of spammy language is accessible to everyone, that shouldn’t be a reason for such posts to be better tolerated.

  • The problem is the same as in the academic world; you cannot be sure, and there will be false positives.

    Rather, do we want to ban posts with a specific format? I don’t know how that will end. So far, marketing hasn’t been a problem because people notice such posts, don’t interact with them, and then they don’t reach the front page.

  • I would agree, but the truth is that I've seen a few technical articles that benefited greatly from both organization and content that was clearly LLM-based. Yes, such articles feel dishonest and yucky to read, but the uncomfortable truth is that they aren't all stereotypical "slop."

No, you're right. Writing is very expressive; you can certainly get that feeling from observing how different people write, and stylometry gives objective evidence of this. If you mostly let AI write for you, you get a very specific style of writing that clearly is something the reinforcement learning is optimizing for. It's not that language models are incapable of writing anything else, but they're just tuned for writing milquetoast, neutral text full of annoying hooks and clichés. For something like fixing grammar errors or improving writing I see no reason to not consider AI aside from whatever ethical concerns one has, but it still needs to feel like your own writing. IMO you don't even really need to have great English or ridiculous linguistic skills to write good blog posts, so it's a bit sad to see people leaning so hard on AI. Writing takes time, I understand; I mean, my blog hardly has anything on it, but... It's worth the damn time.

P.S.: I'm sure many people are falsely accused of using AI writing because they really do write similarly to AI, either coincidentally or not. While I'm sure it's incredibly disheartening, I think in the case of writing it's not even necessarily about the use of AI. The style of writing just doesn't feel very tasteful; the fact that it might've been mostly spat out by a computer without disclosure is just the icing on the cake. I hate to be too brutal, but these observations are really not meant to be a personal attack. Sometimes you just gotta be brutally honest. (And I'm speaking rather generally, as I don't actually feel like this article is that bad, though I can't lie and say it doesn't feel like it has some of those clichés.)
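The stylometry point above can be made concrete with a toy sketch. This is purely illustrative (the feature set and the sample strings are my own invention; real stylometric analysis uses far richer features, such as function-word frequencies and character n-grams):

```python
import re

def features(text):
    """Compute two crude stylometric features of a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        # Average words per sentence: AI-tuned prose tends to be
        # uniformly short; human writing varies more.
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Vocabulary richness: distinct words over total words.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

human = "I dunno. I'm hardly some master writer, but I think I'm ok at this."
sloppy = "Not fancy tools. Not expensive software. Just asking my assistant."
print(features(human))
print(features(sloppy))
```

Real tools compare feature vectors like these across many known samples of an author's writing; two numbers prove nothing on their own, but they show the kind of objective signal stylometry works with.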

  • Your comment looks like it was Ai generated. I can tell from some of the words and from seeing quite a few AI essays in my time.

    But seriously, anyone can just drive by and cast aspersions that something's AI. Who knows how thoroughly they read the piece before lobbing an accusation into a thread? Some people just do a simple regexp match for specific punctuation, e.g. /—/ (which gives them 100% confidence this comment was written by AI without having to read it!). Others just look at length, and simply think anything long must be generated, because if they're too lazy to write that much, everyone else must be as well.

    https://xkcd.com/3126/

>but I can’t shake the feeling it was written by AI.

After I read this article, I thought this whole incident was fabricated and created as a way to go viral on tech sites. One immediate red flag: why would someone go to these lengths to hack a freelancer who's clearly not rich and doesn't have millions in his crypto wallet? And how did they know he used Windows? Many devs don't.

Ah, you might say, maybe he is just one of 100 victims. Maybe, but we'd have heard from the others by now. There's no one else on X claiming to have been contacted by them.

Anyway, I'm highly skeptical of this whole incident. I could be wrong though :)

  • It's a thing. Google "fake job interview crypto hacks".

    It's been a thing for a while. I saw the title, was like "Hmm, Hacker News is actually late to the party for once".

    I think I first heard about it on Coffeezilla video or something.

that was the case. you can find the base write up and the prompt used in one of my comments on this post.

i did not have much time to work on this at all, being in the middle of a product launch at my work, and a bunch of other 'life' stuff.

thanks for understanding.

  • Yeah, people hate that. It instantly destroys the immersion and believability of any story. The moment I smell AI, every single shred of credibility is completely trashed. Why should I believe a single thing you say? How am I to know how much you altered the story? I understand you must be very busy, but the original sketch would straight up have been better to post than this generic, sickly AI-ified mishmash.

  • Thanks for letting us know, but it’s offensive to your readers. Please include a section at the beginning of the article to let us know. Otherwise you’re hurting your own reputation

  • > i did not have much time to work on this at all

    From your other comment:

    > this went though 11 different versions before reaching this point

    https://news.ycombinator.com/item?id=45594554

    Seriously, just do things yourself next time. You aren't going to improve if you always ride with training wheels. Plus, it seems you saved no time with AI at all.

  • Next time maybe just post the base write up and the prompt? What value does the llm transformation add, other than wasting every reader's time (while saving yours)?

    • People are often unconfident about their own writing. But if you can feed it to an LLM and have the LLM output something that looks coherent, your writing is good enough to publish.

      3 replies →

  • You have good words. Have faith in your words. They are better words than AI's, whether they are few or many. They let us get to know “you”. AI erases “you”.

  • Next time add “in the style of a thedailywtf post” to your prompt to stay on genre.

The first paragraph feels like a parody of one of those LinkedIn marketing professionals who receives a valuable insight from a toddler after their pet goldfish was run over by a car.

Very obvious writing style, but also the bullet points that restate the same thing in slightly different ways, as well as the weirdly worded “full server privileges” and “full nodejs privileges”.

Like… yes, running a process is going to have whatever privileges your user has by default. But I’ve never once heard someone say “full server privileges” or “full nodejs privileges”…. It’s just random phrasing that is not necessarily wrong, but not really right either.

My issue with the article's repeated use of a Title + List of Things structure isn't that it's LLM output, it's that it's LLM output directly, with no common sense editing done afterwards to restore some intelligent rhythm to the writing.

Does anyone know if this David Dodda is even real?

He is a freelance full stack dev that “dabbles”, but his own profile on his blog leaves the tech stack entry empty?

Another blog post is about how he accidentally rewired his mind with movies?

Also, I get that I’m now primed because of the context, but nothing about that linkedin profile of that AI image of the woman would have made me apply for that position.

Also, has everyone actually seen that image of the woman standing in front of the house??? I sure have not, and it’s unlikely anyone has in a post-AI world. It sounds more like an AI appeal to inside knowledge to build rapport.

It has many of the hallmarks of AI prose. It's amazing to me that people can't spot this stuff just by feel alone:

* Not X. Not Y. Just Z.

* The X? A Y. ("The scary part? This attack vector is perfect for developers.", "The attack vector? A fake coding interview from")

* The X was Y. Z. (one-word adjectives here).

* Here's the kicker.

* Bullet points with a bold phrase starting each line.

The weird thing is that before LLMs no one wrote like this. Where did they all get it from?

  • My assumption is that people absolutely did, and do, write like that all the time. Just not necessarily in places that you'd normally read. LLM drags up idioms from all over its training set and spews them back everywhere else, without contextual awareness. (That also means it averages across global cultures by default.)

    But also, over the last three years people have been using AI to output their own slop, and that slop has made its way back into the training data for later iterations of the technology.

    And then there's the recent revelation (https://www.anthropic.com/research/small-samples-poison , which I got from HN) that it might not actually take a whole lot of examples in the data for an LLM to latch onto some pattern hard.

I had the same feeling, but also the feeling that it was written for AI, as in marketing. That’s probably not the case, but it looks suspicious because this person only found this issue using AI and would’ve otherwise missed it, and then made a blog post saying so (which arguably makes one look incompetent, whether that’s justifiable or not, and makes AI look like the hero).

Yeah my reaction was:

- The class of threat is interesting and worth taking seriously. I don't regret spending a few minutes thinking about it.

- The idea of specifically targeting people looking for Crypto jobs from sketchy companies for your crypto theft malware seems clever.

- The text is written by AI. The whole story is a bit weird, so it's plausible this is a made up story written by someone paid to market Cursor.

- The core claim, that using LLMs protects you from this class of threat, seems flat wrong. For one thing, in the story, the person had to specifically ask the LLM about this specific risk. For another, a well-done attack of this form would (1) be tested against popular LLMs, (2) perhaps work by tricking Cursor and similar tools into installing the malware without the user running anything themselves, or (3) hide the shellcode in an `npm` dependency, so that the attack isn't even in the code available to the LLM until it's been installed, the payload delivered, and presumably the tracks of the attack hidden.

  • > be tested against popular LLMs, perhaps work by tricking Cursor and similar tools into installing the malware, without the user running anything themselves

    My sense is that the attack isn't nearly as sophisticated as it looks, and the attackers out there aren't really thinking about things on this level — yet.

    > Hide the shellcode in an `npm` dependency

    It would have to be hidden specifically in a post-install script or similar. Which presumably isn't any harder, but.
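    The post-install vector mentioned above can be sketched concretely. This is a hypothetical, harmless illustration (the package name and script path are invented): npm runs the `postinstall` lifecycle script automatically at install time, so whatever it invokes executes on the developer's machine even if no application code ever imports the package.

    ```json
    {
      "name": "innocuous-looking-helper",
      "version": "1.0.0",
      "scripts": {
        "postinstall": "node ./setup.js"
      }
    }
    ```

    Here `setup.js` can carry the payload, outside the files a reviewer (human or LLM) is likely to read. Installing with `npm install --ignore-scripts` disables lifecycle scripts and is one common mitigation.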

The philosophically interesting point is that kids growing up today will read an enormous amount of AI content, and likely formulate their own writing like AI. I wouldn't be surprised if in 20 years a lot of journalism feels like AI, even if it's written by a human

Your comment was so validating; I was getting such weird vibes, and felt it was so dumbly written given that the contention was actually good advice. Consequently, the author tarnished his reputation for me personally from the very beginning.

I think it only really has that feel if you use GPT. I mean, all AIs produce output that sounds kinda like it was written by an AI. But I think GPT is the most notorious on that front. It's like ten times worse.

So really the feeling I get when I run into "obviously AI" writing isn't even, "I wish they had written this manually", but "dang, they couldn't even be bothered to use Claude!"

(I think the actual solution is base text models, which exist before the problem of mode collapse... But that's kind of a separate conversation.)

  • I mean, they are different, but there are only like 3 big model providers. And we've probably each seen hundreds of thousands of words of generated content from each of them. It is easy to become very familiar with their output.

    Claude vs GPT: both sound like AI to me. While GPT is cheery, Claude is more informative. But both of them have "artifacts" from trying to transform language out of a limited initial prompt.

The important part for me is that the experience is legitimate, and secondarily that it's well written. The problem for me with LLM-written texts is that they're rarely very well written, and sometimes inauthentic.

If we had really good AI writing, I wouldn't mind poor authors using it to improve how they communicate. But today's crop of AI models are not that good at writing.

  • That’s what I’m actually doubting: in one of the screenshots, it says “Hi Arun,” but the author’s name is David.

Totally written by AI. There are too many embellishments like “LinkedIn legitimacy” and short summarizations. AI loves to wordsmith.

My daughter feels all my writing naturally sounds like AI, even my college papers from 30 years ago. Maybe author has similar issue?

  • I have been told I am "AI" because I was simply a bit too serious, enthusiastic and nerdy about some topic. It happens. I put more effort into such writings. Check my comment history and you will find that many comments from me are low-effort: including this one. :)

The sentence structure is too consistent across the whole piece: the sentences all seem to have the same number of syllables, almost none start with a subject, and they are all very short. It is robotic in its consistency. Even if it’s not AI, it’s bad writing.

I stopped reading a few paragraphs in.

I get the point of the article. Be careful running other people's code on your machine.

After understanding that, there's no point in continuing to read when a human barely even touched the article.

  • I found the details of how the attack was constructed to be interesting.

    • Yes, it's an informative and important article. I think the complaints here are absurd. Hopefully the people not reading it for silly reasons won't become the victims of similar social engineering.

      7 replies →

This article is so interesting, but I can’t shake the feeling it was written by AI. The writing style has that feel for me.

A bunch of these have been showing up on HN recently. I can't help but feel that we're being used as guinea pigs.

> This article is so incredibly interesting, but I can’t shake the feeling it was written by AI. The writing style has all the telltale signs.

The sadder realization is that after enough AI slop around, real people will start talking like AI. This will just become the new standard communication style.

  • Chatgpt is just an aggregate of how the terminally online talk when they have to act professional.

    Chatgpt is hardcoded to not be rude (or German <-- this is a joke).

    So when you say "people will start talking like AI": they are already doing that in professional settings. They are the training data.

    As someone who writes with swear words and personality, I think this era is amazing for me. Before, I was seen as rude and unprofessional. Now, I feel like I have a leg up over all this AI slop.

    Authenticity is valued now. Swearing is in vogue.

    • > They are already doing that in professional settings. They are the training data.

      It's a self-reinforcing cycle. AI sucks up and barfs back up the same bland style, and eventually books, articles, and news will all look even more bland and sound more AI-like. That junk will then be sucked up by the next AI model and regurgitated into an even more bland, uniform format. If that's all the new generation hears and sees, that's how they'll perceive one should "talk" or "write".

      > Authenticity is valued now. Swearing is in vogue.

      Ha! That's a good point, I like that. Not that swearing is my style (unless I stub my toe), but I agree with the general authenticity point. Maybe until the interns at Google and OpenAI figure out how to make their LLMs sound more "hip" and "authentic".

  • Even now, I think many people are not literate enough to see that it’s bad, and in fact think it improves their writing (beyond just adding volume).

    Maybe that’s a good thing? It’s given a whole group of people who otherwise couldn’t write a voice (that of a contract African data labeller). Personally I still think it’s slop, but maybe in fact it is a kind of communication revolution? Same way writing used to only be the province of the elite?

    • If they aren't literate enough to see that it's bad, then it probably actually is an improvement over their own writing.

    • Except, the interface to ChatGPT is writing! People who can't write can't use ChatGPT: if you can use ChatGPT, then you can write. (You might lack confidence, but you can write.)

      People who cannot write who try to use ChatGPT are not given a voice. They're given the illusion of having written something, but the reader isn't given an understanding of the ChatGPT-wielder's intent.

I read this comment first and then attempted to read the article, and whether it's this inception or the article is genuinely AI-ish, I'm now struggling to read it.

The funny thing is, for years I've had this SEO-farm bullshit content-farm filter, and the AI impact for me has been an increasing mistrust of anything written, by humans or not. I don't even care if this was AI-written; if it's good, great! However, the… 'genuine-ness' of it, or lack of it, is an issue. It doesn't connect with me anymore, and I don't feel/connect to any of it.

Weird times.

I honestly think AI can write much better. Sure, it needs a lot of input, but experienced AI users will get there.

The era of the AI bubble economy has arrived, and now almost everyone is interacting with and using AI. Just as you felt, this is an article organized with GPT. Perhaps the story really happened.