Comment by Freebytes

2 days ago

Using AI to write content is seen so harshly because it violates the previously held social contract that it takes more effort to write messages than to read them. If a person goes to the trouble of thinking out and writing an argument or message, then reading it is a sufficient donation of time in return.

However, with the recent chat based AI models, this agreement has been turned around. It is now easier to get a written message than to read it. Reading it now takes more effort. If a person is not going to take the time to express messages based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.

This is very well put, and captures my feelings on it. I take it as disrespectful for someone to expect me to read something they can't be bothered to write. LinkedIn is a great example - my entire professional network is just spamming at this point, which drowns out the people who DO put in effort.

When I have AI write things for me, I'm spending a good amount of time on it - certainly longer than it takes to read. I'm also usually editing it quite a bit. Maybe I'm an outlier, but I still don't think it's appropriate to make a blanket statement about using AI to write content violating this social contract you described.

If it takes longer to read, that's not an AI problem; it's the author failing to catch that the comment is too drawn out. I don't see how it is a problem to have AI write a comment if you agree with the content. If it is bad content, it will eventually reflect badly on the author anyway.

  • I skim 100 comments here every day. Good comments, bad comments, overly long comments, whatever - the time to read is low. I assume all those authors have a strong opinion or expertise on the subject that urged them to take the time to write that comment, which makes skimming Hacker News to keep a pulse on the world (imho) a valuable task. If, instead, most of those comments are composed by molt-bots, then I'm not getting a "real" view of the world. I don't care how good and concise the comments are; I'd be wasting my time reading about news that may not matter to anyone and opinions that may not exist.

I guess, in theory, this can eventually be countered by people using LLM browser integrations to tell them whether comments are worth reading (and maybe to summarize long comments). Is anyone currently working on that? It might be interesting to see.
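
To make that concrete, here is a minimal sketch of what such an integration might do (this assumes the official openai Python client and an API key in the environment; the model name, rating prompt, and threshold are placeholders I'm making up, not anyone's actual product):

```python
# Hypothetical sketch of an "is this comment worth reading?" filter.
# Assumes the official openai Python client and OPENAI_API_KEY set in the
# environment; the prompt, model, and threshold are illustrative only.
from openai import OpenAI

client = OpenAI()

def worth_reading(comment: str, threshold: int = 6) -> bool:
    """Ask a model to rate a comment 1-10 for substance; keep it if it scores high."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rate the following comment from 1 to 10 for substance "
                        "and novelty. Reply with only the number."},
            {"role": "user", "content": comment},
        ],
    )
    try:
        score = int(resp.choices[0].message.content.strip())
    except (TypeError, ValueError):
        return True  # if the model doesn't cooperate, don't hide the comment
    return score >= threshold
```

Of course, that just moves the trust problem up a level, which is what the replies below get at.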

  • First we would run into the spam-filter problem, no different to email. Then we have to choose: do we concede to viewing the world through the lens of WhatEverAI, or do we train it locally on our own thoughts and views of the world and hope that model is never compromised?

  • I don't believe that delegating reading comprehension to an LLM is really any better than delegating writing ability. In fact I'd argue it's worse to have an automation advising on what's worth reading or not.

    There are a lot of people who have no time for something like Infinite Jest and even getting through the first few chapters is an effort. But at least they tried. An LLM excluding the possibility of reading this book because it is 1000 pages of postmodern absurdity effectively optimises away the fringes of human creativity and leaves only the average stuff behind.

    AI slop detectors already exist and are no better than snake oil, because a person can have an LLM-smelling writing style without actually using AI. After all, LLMs were originally trained on human input.

Where does the line fall? I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought. Is that disrespectful? It doesn't feel so.

  • > I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought.

    Better to post your stream of thought.

    Using LLMs to turn a stream of thought into prose mostly just adds fluff and expands the text to make it look more like thoughtful prose. The result looks nice to the creator, because they agree with what it's saying, but it wastes other readers' time, as they have to dissect the extra LLM prose to get back to the author's original stream of thought.

    Just post what you're thinking, even if it's not elegant prose. Don't have an LLM wrap it in structures and cliches that disguise it as something else.

    • I strive to be understood, and my streams of thought are often weird and generally intractable. Nobody really wants to read that; nobody wants the deep threads required to explain it.

      I value reading novel and interesting thoughts and ideas. I don't feel "tricked" when I read something of substance or thought provoking, even if LLM generated and decorated with the platitudes and common forms for dull readers.


  • > Is that disrespectful

    It is, by way of being extremely dishonest in at least two ways:

    - there's no way you would do this if you were required to disclose that you used an LLM to write your comment.

    - therefore your primary goal isn't communication; you must be doing it to look smart and "win" the conversation

    Same reason people desperately post links to scientific papers they don't understand in a frantic attempt to stay on top of some imaginary debate.

Well, just have an AI read it for you, then!

That reminds me of the Gmail LLM features, where AI can write your emails for you and also summarize incoming ones. Maybe we lost the thread somewhere...

It's not just about the increase in volume, it's about the delta between the prompt and the generation.

If the generation merely restates the prompt (possibly in prettier, cleaner language), then usually it's the case that the prompt is shorter and more direct, though possibly less "correct" from a formal language perspective. I've seen friends send me LLM-generated stuff and when I asked to see the prompt, the prompts were honestly better. So why bother with the LLM?

But if you're using the LLM to generate information that goes beyond the prompt, then it's likely that you don't know what you're talking about. Because if you really did, you'd probably be comfortable with a brief note and instructions to go look the rest up on their own. The desire to generate more comes from either laziness or a desire to inflate one's own appearance. In either case, the LLM generation isn't terribly useful, since anyone could get the same result from the prompt (again).

So I think LLMs contribute not just to a drowning out of human conversation but to semantic drift, because they encourage those of us who are less self-assured to lean into things without really understanding them. A danger at any time, but certainly one that is more acute at the moment.

This reads as an AI comment to me. Anybody else?

  • AI has not been used to write any comment that I have ever posted on Hacker News. You can observe my previous comments over the years, even prior to the adoption of modern LLMs, which demonstrate how I communicate.

    (While the patterns may be similar, I have a tendency to be more loquacious due to my larger token limit! %)

  • On 4chan, a long time ago, comments like these would invariably get the reply "not ur personal army"

    Think about that for a minute. 4chan would make fun of the comment you just made.