Comment by donatj

2 days ago

I'm trying to sort out my own emotions on this.

I did not realize this was AI generated while reading it until I came to the comments here... And I feel genuinely had? Like "oh wow, you got me"... I don't like this feeling.

It's certainly the longest thing (I know about) I've taken the time to read that was AI generated. The writing struck me as genuinely good, like something out of The New Yorker. I found the story really enjoyable.

I talk to AI basically all day, yet I am genuinely made uneasy by this.

Maybe it's because I think your comment throws away a lot of relevant context from OP's submission on HN.

He says he spent months on this piece and then some; I think it's safe to assume that this was well supervised, guided, thoughtful, and full of human intent despite the AI-assisted part.

In short, I think calling it "AI generated" takes away all the human effort that went into those months and the ingenious creativity OP put toward crafting this piece!

Anyways, I enjoyed it. :)

  • Reading it, I get the feeling the author worked the story the way Tom Hartmann works those agricultural machines. The AI gave input, but the author was tweaking it with human knowledge and wisdom.

It's a major bummer. When I first read the story (a few days ago, maybe?) I thought it was an interesting metaphor that didn't quite line up with the observed details of software development with AI. I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.

Without the inferred writer, it's much less interesting to me, except as a reminder that models change and I can't rely on the old tics to spot LLM prose consistently any more.

  • Surely you see that's somewhat unreasonable? It's as if it had been written by an author you disliked: until you knew that fact, you quite enjoyed it.

    Quite honestly, I do that sometimes too -- but I _know_ that it's unreasonable.

    • For me, “interestingly wrong” becomes just “wrong” without human thinking behind it. I wasn’t bowled over by the prose; I just thought it was an uncommon take and didn’t twig the signs that it was a Claude product.

    • Can I compare this to sex with an inflatable doll? (Not that I've done this, just extrapolating.) Even if the physical sensations are identical, the whole experience is totally different from being with another live person.

  • What is it about it that makes the story less interesting to you? It's the same story, down to the same delicate details. When AI-slop stops being, well, slop, and just is everything that humans do, but much better, and much more efficient—will we have the same repulsion to it that many of us do now?

    I find it interesting to ponder. We look at the luddite movement as futile and somewhat fatalistic in a way. I feel like the current attitude towards AI generated art will suffer the same fate—but I'm really not quite sure.

    • Stories are particularly troubling because we have the concept of "suspending disbelief" and readers tend to take a leap of faith with longwinded narratives because we assume the author is going somewhere with the story and has written purposefully.

      When AI can write convincingly enough, it is basically a honeypot for human readers. It looks well-written enough. The concept is interesting and we think it is going somewhere. The point is that AI cannot write anything good by itself, because writing is a form of communication. AI can't communicate, only generate output based on a prompt. At best, it produces an exploded version of a prompt, which is the only seed of interest that carries the whole thing.

      Somebody had that nugget of an idea which is relevant for today's readers. They told the AI to write it up, with some tone or setting details, then probably edited it a bunch. If we enjoy any part of it, we are enjoying the bits of humanity peeking through the process, not the default text the AI wrote.

    • You can get some good guesses from the comment itself.

      > I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.

      If you assume you're reading something from a person with intention and a perspective, who you could connect with or influence in some way, then that affects the experience of reading. It's not just the words on the page.

    • The story is bad in itself and doesn't add anything for the reader.

      But if you knew it came from a human, it would be interesting as a window into what the writer was thinking.

      Since there is no writer, that window doesn't exist either.

    • I don't find the Luddite comparison accurate. They were against looms, while anti-AI or AI-skeptical people are against the wholesale strip-mining of intellectual property as it exists, both public domain and non-public domain, used to enrich the capital class at the expense of the workers. Sure, it's similar, but the Luddite movement didn't have the copyright issues and wholesale theft of all human ideas behind it. It just feels quite different.

    • People had a revulsion to eating refrigerated foods. The developed world got over it. We're comfortably on the path to becoming Eloi who will trust everything the magic box does for us.

    • As a couple sibling comments said, I took it for an insight into the way an optimistic writer might see AI software development becoming a new form of "end-user programming" or "citizen developer" tooling. I'm personally too deep in the weeds to ever see it becoming empowering in that way (if nothing else, this will be an incredibly centralizing technology, and whoever wins the "arms race" [assuming we're not in a bubble destined to pop soon] will absolutely have the possible Toms and Megans of such a future by the short hairs). But I love end-user programming, or whatever we're calling it now! (I was partial to "shadow IT" - made it sound really cool.) So I enjoyed the idea that somebody saw AI as a "bicycle for the mind" in that sense, even if I feared they'd end up disappointed.

      But there was nobody there, and I'm only disappointed in myself for not noticing.

    • >What is it about it that makes the story less interesting to you?

      Read my comment below for a perspective.

    • > When AI-slop stops being, well, slop, and just is everything that humans do, but much better, and much more efficient—will we have the same repulsion to it that many of us do now?

      For me, the answer to this riddle is very easy: I want to engage with other human minds. A robot (or AI) doesn't have a human mind, so I'm not interested in its "artistic" output.

      It was never about how good it was. Of course, AI slop adds insult to injury by also being bad. Currently. But it'll get better. My position was never that AI art (shorts, pictures, music, text) is to be frowned upon because it's bad. I don't like it because it's not the expression of a human mind.

      It's a bit like how an AI boy/girlfriend is not the real deal, no matter how realistic -- and I'm sure they'll get uncannily realistic in the future. They aren't the real deal because there's no real human behind the facade of companionship.

I think it's a valid emotion to feel. I genuinely resonated with the story, but when I learned it was written by Claude it kind of left me feeling ... betrayed?

One of the many things I love about art is when I encounter something that speaks to emotions I've yet to articulate into words. Few things are more tiring than being overwhelmed with emotion and lacking the ability to unpack what you're feeling.

So when I encounter art that's in conversation with these nebulous feelings, suddenly that which escaped my understanding can be given form. That formulation is like a lightning bolt of catharsis.

But I can't help but feel a piece of that catharsis is lost when I discover that it wasn't a human's hand that made the art, but a ball of linear algebra.

If I had to explain, I guess I would say that it's life-affirming to know someone else out there in the world was feeling that unique blend of the human experience that I was. But now that AI is capable of generating text, images, music, etc., I can no longer tell whether those emotions were shared by the author or were an artifact of the AI.

In this way, AI generated art seems more isolating? You can never be sure if what you're feeling is a genuine human experience or not.

  • > You can never be sure if what you're feeling is a genuine human experience or not.

    This is what the deconstructionists were preparing us for, I guess. The author is dead, and if not dead, then fake. It was never a good idea to tie our sense of meaning to external validation.

    The humanity immanent in the text came from you, the reader, not the author, and it has always been that way. Language never gave us access to the author's mind -- and to the extent that statement is wrong, it doesn't matter. AI is just another layer of text, coming between the reader and the same collective consciousness that a human author would presumably have drawn on. The artistic appreciation of that text is the sole privilege of the reader.

I also had no idea this was LLM generated. After reading your comment, I had a similar emotional reaction.

Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.

My point: I agree with you that it is misleading for the blog post not to include a preface explaining it was written by an LLM (and, ideally, the author's motivation for using one). However, it is still a good blog post that has generated some thoughtful discussion on HN.

  • > preface explaining it was written by an LLM

    why can't the quality of the work stand on its own? Whether there's LLM generation or not should be irrelevant.

    • because we typically want to know the writer of a piece. we want to know where to lay credit.

      every book you buy has an author credited. articles in newspapers and magazines have photographer and author attributions.

      asking an ai to write you a story does not make you an author. if you ask someone to take a photo for you, you don’t magically get to say “look at this photograph, i’m a photographer.” if you ask someone to bake you a wedding cake, and then claim you baked it, you’re a fraud.

      we deserve to know the actual writer.

    • Because 'quality' is a misnomer. LLM writing has quality in the same way that a press release from a big company has quality, or a professional contract written by a lawyer has quality. It is functional, generally typo-free and conforms to most standards but that doesn't mean it has flavor or spice to it.

      Creative writing is the intent to convey feelings, thoughts, to create atmosphere. Here's a great example of the failure to do so here, in a way that even most terrible writers would avoid.

      > “It just said harvest,” she told Tom. She was sitting in one of the plastic chairs, holding a cup of the adequate coffee.

      The coffee in this story is conveyed as being 'perfectly adequate'. But how do you convey adequacy? When you simply say 'the coffee is adequate', there's nothing there. It could be conveyed by establishing that the coffee is always at perfect room temperature, or carries the mere hint of bitterness and sweetness, or tastes like every other brand out there. In many respects this story is exactly like that 'perfectly adequate' coffee: functional, unexciting, and ultimately flavorless.

    • I started reading it, then found it waffling on quite a bit, then came to the HN comments and saw: ah, LLM. I could have saved time if I'd known.

      Also I feel a bit conned. I was curious what Tom Hartmann was up to and now it seems he doesn't exist and it's just some slop?

  • For a while, people found solace in denial: "it's not good, it will never be good, and I will always be able to tell."

    Next stop will be to ask for some sort of regulation.

  • People don't want to self-disclose their use of AI, I've noticed, especially the ones who put the least effort into using it. So this will only work for a small portion of AI content.

  • We really need to stop thinking that every AI-assisted thing is bound to be slop. "Shit in, shit out" often applies in reverse as well.

Humans build friendships and relationships on shared experiences. There is an element of relationship-through-experiencing-a-thing. Whether it's going for a walk together or the classic first date template of dinner and a movie. The shared experience is the thing.

With stories that shared experience is between author and reader. Book clubs etc will try to extend that "shared experience" but primarily it is author <-> reader relationship.

Remove that "shared feeling with the author" and what meaning does it have?

  • You can look at a tree and feel things by yourself. Also, there's the shared readership.

    • > ...and what meaning does it have?

    It means, "Wow. Cool. I'm a member of a species that taught rocks to think. Holy fuck. That's pretty insanely fucking awesome. Wow. Wow, wow, wow. Fuck."

    That's about all it means. Nothing was removed from your life, but something optional was added.

I suspect (but don't know) that this had to be edited somewhat heavily or generated in isolated chunks: I've generated a lot of fiction with Claude, and it has a chronic issue of overusing any literary device one might associate with good writing once it appears in the context window.

I think if you left it to its own devices, some of the narrative exposition that humanized it would go off the rails.

  • Yeah, there's a lot more work and personal touch that went into this (and the previous piece) than just "write prompt -> copy/paste into substack".

    It's really interesting to hear about others who have been exploring generating fiction with Claude. Clearly I need to do some more work, based on some of the comments, but it has been really interesting discovering and coming up with different techniques, both LLM-assisted and manual, to end up with something I felt confident enough about to put out.

    I'd be curious to hear more about your experience!

    • I run a product that generates interactive fiction (for search engine reasons I don't mention it in my comments, but there's a link to an April Fool's landing page in my post history where you can try it)

      Because it's productized I need to "one-shot" the output, so I focus a lot on post-training models these days, but I've also used tricks like running wordfreq to find recently overused words and feed the list back to the model as words that cannot be used in the next generation.

      Models couldn't always follow instructions like that (pink elephant problem), but recently they're getting better at it.
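
      A minimal sketch of that wordfreq trick, using only the standard library (the actual wordfreq package compares against general English frequencies; here a plain baseline text sample stands in, and the function name and thresholds are my own illustration):

```python
from collections import Counter
import re


def overused_words(recent_text: str, baseline_text: str,
                   ratio: float = 3.0, min_count: int = 2) -> list[str]:
    """Return words that appear disproportionately often in recent output
    compared to a baseline sample -- candidates for a "do not use" list
    to feed back into the next generation prompt."""
    def tokenize(s: str) -> list[str]:
        return re.findall(r"[a-z']+", s.lower())

    recent = Counter(tokenize(recent_text))
    base = Counter(tokenize(baseline_text))
    n_recent = sum(recent.values()) or 1
    n_base = sum(base.values()) or 1

    flagged = []
    for word, count in recent.items():
        if count < min_count:
            continue
        # Laplace-smoothed ratio of relative frequencies, so words absent
        # from the baseline don't divide by zero.
        r = (count / n_recent) / ((base[word] + 1) / (n_base + 1))
        if r >= ratio:
            flagged.append(word)
    return sorted(flagged)
```

      The returned list can then be pasted into the next prompt as words the model must avoid, which (as noted above) only works well with models that can follow negative instructions.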

  • Yeah, there's often a heavy instruction and recency bias that just squeezes all of the nuance and subtlety out of it.

There is an interesting dichotomy where we express an uncanny-valley revulsion to AI-generated text, art, video, and music, yet we seemingly go along with AI-generated code.

Personally I have an uneasiness with it and am correspondingly cautious. Often after a review and edits it loses that "smell". I kind of felt the same about NPM and package managers for a long time before using them became obligatory (for lack of a better word).

Are we conditioned to use other people's code unthinkingly, or is it something else?

  • It's because code isn't a way to communicate ideas, it's a way to specify behavior. Text, drawings, video, and music are means for brains to connect with each other. When you read or view or listen to something generated you're not connecting with any other brain. No idea has been transmitted to you. The feeling is analogous to speaking on the phone and only realizing several minutes later that the call was dropped. It's a feeling that combines betrayal, being made to waste time, and alienation.

    • I tend to disagree that code can't be a way to communicate an idea. Sure, I might struggle to elicit an emotion in the reader (excluding confusion or frustration), but I feel it is a way to describe ideas, model constructs and processes, etc.

      With AI-generated text, there is this disconnect between the audience and the prompter, who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar, because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic, where the English translation is rarely literal but uses different English plays on words?

I had a similar experience a few days ago with some music on Spotify. It was an Irish Pub song, rendering some political satire that seemed pretty consistent with what I figure is a predominant Irish viewpoint. Since I holidayed in Ireland a while ago and adored the public there, I really liked it. I reveled in the fact that somewhere in Ireland, there was a band singing messages in pubs that resonated strongly with me. And then it was pointed out that it was AI. I was crushed. I went from feeling connected to some people across the pond, to feeling lonely.

And yet, in ironic counterpoint, there is a different artist I follow on Spotify that does EDM-fusion-various-world-genres. And it’s very clearly prompt generated. And that doesn’t bother me.

My hypothesis is that it has to do with how we connect/resonate with the creations. If they are merely for entertainment, then we care less. But if the creation inspired an emotion/reasoning that connects us to other humans, we feel betrayed, nay, abandoned, when it turns out to be synthetic.

  • I've gotten pretty good at identifying AI-genned music. There are two tells that I've noticed so far.

    The most quantifiable is the presence of a high-frequency component that sounds like someone tried to clean up or restore a highly compressed track. It almost sounds like it's about to start doing that warbling thing that happens when a teleconferencing call has a bad connection that's just not bad enough to drop completely. I guess it's the sound of being heavily noise-gated.

    The other is more qualitative. The song is boring. Like you said, on paper the song should be something I enjoy. But I suddenly notice that there is no... variation, no hook, nothing to make it interesting. Nothing to make it something other than the result of a machine. The aural equivalent of eating at Applebee's or reading The New Yorker. The songs just kind of plod onward without ever really getting to a point.

    It feels kind of like a vivid dream when you're on the edge of lucidity. You can tell something is wrong, but there is something messing with your faculties. You're trying to see where things are going, how things will resolve, and it never happens. It just keeps going and going in a particular mode. If it does change, it's not to resolve; it's to start a new thread that is an alternate-universe version of the previous one. With no attempt at establishing continuity, no resolution is ever found.

  • The connection is often with other people experiencing the same thing, even if the thing is AI generated. You can see this clearly on YouTube, where comments that just quote a line from the video get lots of upvotes, probably from other people who felt that line was special too and enjoy seeing others share the same feeling. Of course, if all those comments are AI too, you lose that connection.

It's full of AI generated imagery. Why would it not be AI generated?

  • Blog posts like this have been full of genAI images for years, even if the text is actually written by a human. So just because the images are obviously generated doesn't really tell you much about the text.

I didn't know either, but wasn't surprised to find out. The writing was too... polished, in a way I'm starting to recognize more and more. The knowledge doesn't really impact my experience of having read it, but I'm looking forward to a day when AI agents can be trained out of the servile mentality. It directly affects everything they make.

Interesting. I didn't realise it was LLM generated either, but only came here after the first section to find out if it was worth reading the rest.

Maybe the summary of the first section wouldn't have landed without the example but "People who would spend $50,000 on elective surgery without blinking would balk at a $200 annual wellness check. The fix was always cheaper than the failure, the prevention was always cheaper than the fix, and somehow the money always flowed toward the crisis rather than away from it." explained the problem far more succinctly than the rambling prose before it.

I did notice something else AI about it - I really liked the art style for the illustrations, and had mixed emotions as my thought process was "I'd really like to learn how to draw like this, but I guess there's no point spending my time doing that because now I could just get an AI to generate it, and I guess that's the point of the article".

Well, contrary to many, I was not convinced and suspected the content was LLM-generated from the very beginning, given the images and even the background. Something in the writing also didn't hit right.

I can't remember the exact phrasing, but I read somewhere long ago that what you read now, you become in 5 years. As in, right after reading something, you think and deliberate about it, but 5 years from now it becomes part of your subconscious and you can't actively filter it.

The thing is, if you want to convey a social/political message via fiction, you have to be a genius to make it non boring or uncanny.

Very few humans have managed this. This text is at the average level of "I want to pass along the message and I'm trying to write professionally".

I have the same issue with AI generated music : it can be quite good to say the least.

But I deeply feel that art only matters if there is an artist. The artist wants to convey something.

What makes you uneasy (if you are like me) is that a machine deliberately created emotions in your brain. And positive emotions, at that. It’s really something I can’t stand.

  • A different way of reframing this point is to look at some of the modern art that's highly celebrated: without the human component of what it represents, the art itself isn't that good.

    So, the guy who suspends buckets of paint with a hole in the bottom to make patterns has an idea of what he's creating. The guy who just put a few strips of electrical tape in different colours had an idea of what he was trying to convey. The guy who flings paint against a wall also has an idea of what he's creating. The guy who made all the white paintings. All that art is trivial to copy in the same style, maybe even an exact copy for the electrical tape, but it's the artist's intention that makes it worth more than a toddler's painting.

    Personally, I think most of that abstract art is pointless, because I don't really see how the artist's vision is represented by whatever the mess they've created is, but I definitely understand that at least they had an idea that they wanted to convey. A machine creating the same thing has no meaning behind it, it's just a waste of paint and canvas.

It's treachery, a betrayal of trust. It's the same feeling as when you get sweet-talked into overpaying for something. This time, you overpaid with your attention.

Well, FWIW, LLMs are built to infer and fill in the blanks of books. It makes the headlines now and again that publishers put AI companies on the hook for unauthorized use, The New Yorker included.

Whether people know it or not, when they engage with art they assume a person not just made it but experienced it. I'm going to blow past the discussion of "what is art" here, but where something came from and how it was made has always mattered to me (you could draw parallels to food here if you wanted). One thing that has been on my mind a lot is a particular photograph I saw in the past few years (and I'm sure it's easy to find online): it's a POV shot taken by a person sitting atop a skyscraper with their feet dangling over the edge. There is just no way that anyone could in good faith claim that the same photo produced by "AI" could possibly have the same emotional impact as knowing someone actually went and did that. I think a lot of people may not even realize that when they see a painting, or even a photo of something as innocuous as a tree, their mind goes to the fact that the person who produced it went to the place the tree was in, had an experience, and chose to document that particular perspective. If they see a painting or drawing of something that is clearly "fantasy," they know that a person made this up in their crazy mind, and they experience their feelings about it (good or bad). "AI" (heavy quotes) is trying to trick us and rob us of this basic knowledge. Some see this as progress. I personally think it's fucking disgusting, but I've been wrong before.

Of course, this has always been a bit of a problem with digital art trying to masquerade as the real thing... I always think of programmed drums using real drum samples. In my adult life I found out that an album I loved as a teenager, which listed a real drummer as the performer, was actually 100% programmed (this was an otherwise very "organic"-sounding heavy guitar album). I always had my suspicions, since it was so perfect, but I experienced exactly what you are describing. I also never got over it.

My $.02 is that in the domain of software engineering, LLMs have largely automated the process of copy-pasting from StackOverflow and existing parts of the codebase. Architecture and product management are still very necessary. In the same fashion, they can also automate writing a novel. The issue is that prose is sometimes much more important in literature than it is in software (because, after all, users use software; they don't read the code). I say "sometimes" because this clearly doesn't apply to stuff like the schlocky bestsellers one buys in airport stores and reads like movies.

When ChatGPT first came onto the scene I actually started using it to write something in this vein - a techno-thriller starring a former fashion model trained in Krav Maga working as a nuclear physicist who discovers a sinister government conspiracy to alter the foundations of quantum mechanics and enslave humanity with assistance from extraterrestrials. And, of course, only she can stop them with the help of a gruff-but-sensitive retired Marine who has since opened a ranch where he teaches orphaned puppies calculus. I only got 20 pages (so one gunfight and a car chase) in but it was as riveting as anything. Context limit cut my efforts short. Perhaps I'll revisit it soon.

I say all this to say that if words themselves are distantly secondary to narrative then I don't see anything particularly wrong with leveraging an LLM to help crank something out.

> Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project…

> After that, this was about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms.

There is a substantial amount of work here, comparable to how long a human writer would take to write from scratch, definitely not slop. I think we can call it AI-assisted, not AI-generated. Even the illustrations are well above average.

Absolutely the opposite here, after reading a few paragraphs I was a bit bored. Then I saw the length of the piece, noticed the AI imagery, quit, came here. I read your comment and it makes sense. I'm not reading a story that somebody couldn't be bothered to write.

Yup. There should be a disclaimer or a "food tag". The implicit assumption in society is that a human wrote the text you are reading.

I also did not twig to the fact that it was AI, but I did have the distinct feeling that I was reading something not that great. It bothered me because the message was something I could appreciate, but the delivery felt anathema to the message.

It felt like it was written by someone trying to quit an addiction to Corporate Memphis content spam. Like it came from some weird timeline where qntm was a LinkedIn influencer. It straddles an uncanny valley of being a criticism of the domination of The Corporation over human culture while at the same time wallowing in The Corporate Eunuch Voice, not because it's a subversion of form, but because it knows no other way.

I then came to the comments section and found the piece that brought the picture into focus.

It's just... hard to explain the specific kind of disappointment. Perhaps there is a German phrase-with-all-the-spaces-removed kind of word that describes it succinctly. I feel like I exist in this Truman Show kind of world where everyone is trying to gaslight me into thinking LLMs are important, but they aren't very good at it and whenever I try to find out how or why, it all evaporates away. I was very reluctant to say that because I'm sure it's going to come with a heaping side of Extremely Earnest Walruses ready to Have A Debate about it and I just don't have the energy for it anymore. That's the baseline existence right now. It's like a really boring version of Gamergate.

And then this thing comes along. And yeah, it's a thing. You got me. Ha. Ha. Joke's on me. I lost the shitty, fake version of the Turing Test that I didn't even ask to be a part of. And it reminds me of the Microsoft Hololens: a massively impressive technological achievement that was ultimately a terrible consumer experience. Like if you figured out Fusion Power but it could only power Guy Fieri restaurants.

Ever since the pandemic I've been keenly aware of the complete destruction of every enjoyable social structure around me. The meetups that evaporated. The offices we essentially squatted in that suddenly turned Extremely Concerned about what people were doing. The complete lack of any social interaction at work, because we're all so busy running at half-workforce and pretty sure the executive suite is champing at the bit to lay the rest of us off. The lack of care about how this is impacting open source software. The lack of concern for people.

I feel like my entire adult life was this slow, agonizing, but at least constant push toward recognizing the humanity in others and creating a kind and diverse world, and then overnight it's all been destroyed, and half the people I see online are cheering it on like it's Technojesus coming to absolve them of their sins of never learning to invert a binary tree. Where the blogs and books and startups of the early 2000s were about finding the hidden potential in people--the college dropout working as a barista who just needs someone to give them a chance to be a programmer or a graphic designer or an artist or whatever--the modern era seems to be all about the useless middle management guy, who never had a creative bone in his body, no longer having to write status reports to his equally mendacious boss on his own anymore.

We might be restarting old coal plants, but at least Kevin in middle management gets to enjoy "programming" again.

  • Actually, I was waiting for a punchline, twist or climax of sorts.

    This had the feeling of reading someone's diary: today happened, same as yesterday.

    The only difference is that the routine, almost identical, story is set in a fictional place.

    Some journal/found footage fiction can be good (Dracula for example), but this was not that.

  • you're saying qntm is NOT an influencer? what a miscalculation i have made

> She was sitting in one of the plastic chairs, holding a cup of the adequate coffee

and other stuff... it's not that good.