Comment by josephg

11 hours ago

Super interesting. I wish this article wasn’t written by an LLM though. It feels soulless and plastic.

It's not setting off any LLM alarm bells for me. It just reads like any other scientific article, which is very often soulless.

  • It repeats a few points too many times for a professional writer not to have caught it.

    I don’t mind that they let an LLM write the text, but they should at least have edited it.

Any specific sections that stick out? Juxt has had really great articles in the past, even before LLMs, and I know for a fact they don't lack the expertise or knowledge to write for themselves if they wanted to. While I haven't completely read this article yet, it'd surprise me if they just let LLMs write articles for them today.

  • Here's one tell-tale of many: "No alarm, no program light."

    Another one: "Two instructions are missing: [...] Four bytes."

    One more: "The defensive coding hid the problem, but it didn’t eliminate it."

For what it’s worth, Pangram thinks this article is fully human-written: https://www.pangram.com/history/f5f68ce9-70ac-4c2b-b0c3-0ca8...

  • The AI writing detectors are very unreliable. This is important to mention because they can trigger in the opposite direction (reporting human written text as AI generated) which can result in false accusations.

    It’s becoming a problem in schools as teachers start accusing students of cheating based on these detectors or ignore obvious signs of AI use because the detectors don’t trigger on it.

  • Then Pangram isn't very good, because that article is full of Claude-isms.

    • > because that article is full of Claude-isms

      Not sure how I feel about the whole "LLMs learned from human texts, so now the people who helped write those human texts are suddenly accused of plagiarizing LLMs" thing yet, but it seems backwards so far, and like a low-quality criticism.

      5 replies →

    • Is it possible for a tool to know if something is AI written with high confidence at all? LLMs can be tuned/instructed to write in an infinite number of styles.

      Don't understand how these tools exist.

      1 reply →

    • It has Claude-isms, but it doesn't feel very Claude-written to me, at least not entirely.

      What's making it even more difficult to tell now is people who use AI a lot seem to be actively picking up some of its vocab and writing style quirks.

  • Pangram doesn't reliably detect individual LLM-generated phrases or paragraphs among human written text.

    It seems to look at sections of ~300 words. And for one section at least it has low confidence.

    I tested it by getting ChatGPT to add a paragraph to one of my sister comments. Result is "100% human" when in fact it's only 75% human.

    Pangram test result: https://www.pangram.com/history/1ee3ce96-6ae5-4de7-9d91-5846...

    ChatGPT session where it added a paragraph that Pangram misses: https://chatgpt.com/share/69d4faff-1e18-8329-84fa-6c86fc8258...

AI tends to write like it is getting paid by the word. This article wasn't too egregious but an editor could have improved it.

I'm starting to develop a physiological response when I recognize AI prose. Just an overwhelming frustration, as if I'm hearing nails on a chalkboard silently inside my head.

  • I feel ya... and I have to admit that in the past I tried it for one article on my own blog, thinking it might help me express myself. Though when I read that post now, I don't even like it myself; it's just not my tone.

    So I decided not to use any LLM for blogging again, and even though it takes a lot more time without one (I'm not a very motivated writer), I prefer to release something that I wrote rather than some LLM stuff that I wouldn't read myself.

This is the top reply on a substantial percentage of HN posts now and we should discourage it.

It is:

- sneering

- a shallow dismissal (please address the content)

- curmudgeonly

- a tangential annoyance

All things explicitly discouraged in the site guidelines. [1]

Downvoting is the tool for items that you think don't belong on the front page. We don't need the same comment on every single article.

[1] - https://news.ycombinator.com/newsguidelines.html

  • It's not a shallow dismissal; it's a dismissal for good reason. It's tangential to the topic, but not to HN overall. It's only curmudgeonly if you assume AI-written posts are the inevitable and good future (aka begging the question). I really don't know how it's "sneering", so I won't address that.

    • The fact that the whole thread has basically devolved into debates over whether or not this is an LLM-written article proves well enough that it doesn't really matter one way or another.

    • It is a witch hunt with no evidence whatsoever, all based on intuition. It is a distraction from the main topic, a topic that enough people find interesting to keep it on the top page. What was intellectually interesting has now become a bore fest of repeated back and forth. That’s disrespectful and inconsiderate. Write a new post about why you think AI writing is dangerous. I don’t mind that. I’d upvote it.

  • > Downvoting is the tool for items that you think don't belong on the front page.

    You can’t downvote submissions. That’s literally not a feature of the site. You can only flag submissions, if you have more than 31 karma.

    • Twelve year old account and who knows how much lurking before that and I've never noticed this. Good lord.

      Optimistically, I guess I can call myself some sort of live-and-let-live person.

  • The site guidelines were written pre-AI and stop making sense when you add AI-generated content into the equation.

    Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.

    • > The site guidelines were written pre-AI and stop making sense when you add AI-generated content into the equation.

      Note: the guidelines are a living document that contain references to current AI tools.

      > Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.

      This is something worth saying about pure slop content. But the "charge" against the current item is that a reader got a feeling that an LLM was involved in the production of interesting content.

      With enough eyeballs, all prose contains LLM tells.

      We don't need to be told every time someone's personal AI detection algorithm flags. It's a cookie-banner comment: no new information for the reader, but a frustratingly predictable obstacle to scroll through.

      1 reply →

  • No idea why you're being downvoted. I've done my bit to redress the balance, I hope others do the same.

Not to single out your comment, but it feels like it's gotten to the point where HN could use a rule against complaining about AI generated content.

It seems like almost every discussion has at least someone complaining about "AI slop" in either the original post or the comments.

  • I disagree. I like to read articles and explore Show HN posts, but in the past 6 months I’ve wasted a lot of time following HN links that looked interesting but turned out to be AI slop. Several Show HN posts lately have taken me to repos that were AI generated plagiarisms of other projects, presented on HN as their own original ideas.

    Seeing comments warning about the AI content of a link is helpful to let others know what they’re getting into when they click the link.

    For this article the accusations are not about slop (which wastes your time) but about tell-tale signs of AI tone. The content is interesting, but you know someone has been doing heavy AI polishing, which gives articles a laborious tone and tends to produce a lot of words around a smaller amount of content (in other words, you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in).

    Being able to share this information is important when discussing links. I find it much more helpful than the comments that appear criticizing color schemes, font choices, or that the page doesn’t work with JavaScript disabled.

    • > you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in

      This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable as it contains more information.

      1 reply →

  • You're suggesting this is the complainant's fault?

    • Yes. These HN guidelines already basically cover it:

      > Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

      > Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

      1 reply →

    • Yes, because all of them are now irrational about the possibility of LLM writing something they read.

  • HN has gotten to the point where it’s often not even worth clicking the link, because of course it’s AI slop.

    There is some real content in the haystack, but we almost need some kind of curator to find and display it rather than a vote system where most people vote on the title alone.

    • If you’re looking for a place that surfaces only human-written content regardless of whether it’s interesting, rather than interesting content regardless of how it was written, HN is not the place.

      There might be a market for your alternative though. Should be easy enough to build with Claude Code.

      2 replies →

    • I know the author personally. He's hardly the type of person to publish AI slop. Read his other articles and watch his talks, this is very much Henry's literary style.

      1 reply →

I've seen way, way worse. Either someone LLM-polished something they already wrote, or they did their own manual editing pass.

The short sentence construction is the most suspicious, but I actually don't see anything glaring. It normally jumps out and hits me in the face.

I did not get any “written by LLM vibes”. I enjoyed it and it pulled me in to keep reading.

Who gives a crap if it was written by an LLM. Read it or don’t read it. Your choice.

If it conveys the idea and you learn something new, then it’s mission accomplished.