Comment by gabriel666smith

6 days ago

Wow! Why?

Personally, I'm fascinated by the question of what Joyce would have done with SillyTavern. Or Nabokov. Or Burroughs. Or T. S. Eliot, who incorporated news clippings into The Waste Land - which feels, to me, extremely analogous to the way LLMs refract existing text into new patterns.

Creative works carry meaning through their author. The best art gives you insight into the imaginative mind of another human being—that is central to the experience of art at a fundamental level.

But the machine does not intend anything. Based on the article as I understand it, this product basically does some simulated annealing of the quality of art as judged by an AI to achieve the "best possible story"—again, as judged by an AI.

Maybe I am an outlier or an idiot, but I don't think you can judge every tool by its utility. People say that AI helps them write stories, I ask to what end? AI helps write code, again to what end? Is the story you're writing adding value to the world? Is the software you're writing adding value to the world? These seem like the important questions if AI does indeed become a dominant economic force over the coming decades.

  • Ah, fair enough. I believe quite strongly that creative works' meaning exists for the reader / audience / user. I don't think interpretation of art is towards an authorial, authoritative truth - rather that it's a lens to view the world through, and change one's perspective on it - so this is where we differ. But I understand your viewpoint.

    I do agree that the LLM's idea of achieving the 'best possible story' is defined entirely by its design and prompting, and that is obviously completely ridiculous - not least because appreciating (or enduring) a story is a totally subjective experience.

    I do disagree that one needs to ask "to what end?" when talking about writing stories, the same way one shouldn't need to ask "to what end?" about a pencil or a paintbrush. The joy of creating should be in the creation.

    Commercial software is absolutely a more nuanced, complex topic - it's so much more intertwined with people's jobs, livelihoods, aeroplanes not falling out of the sky, power grids staying on, etc. That's a different, separate question. I don't think it's fair to equate them.

    I think LLMs are the most interesting paintbrush-for-words we've come up with since the typewriter (at least), and that, historically, artists who embrace new technologies that arise in their forms are usually proven to be correct in their embrace of them.

    • I think that is a fair perspective. When I say "to what end" I am mostly implying the "end" of a product for the market. I think writing in particular is always a thing where if you tell people you do it as a hobby, they assume your goal is a published book, not the process itself. Creativity as the end is a wonderful thing, but I just have a feeling AI is going to be more widely adopted to pump out passable (or even arguably "good") content that people will pay money for.

      Again the same thing with writing software, where you can be creative with it and it can enhance the experience. But most people just use AI to help them do their job better—and in an era where many software companies appear to have a net negative effect on society, it's hard to see the good in that.


    • > The joy of creating should be in the creation

      > I think LLMs are the most interesting paintbrush-for-words we've come up with since the typewriter

      I cannot reconcile these thoughts in my head

      For me, the joy of creating does not come from asking the computer to create something for me. It doesn't matter how careful a prompt I wrote; I did not create the outcome. The computer did

      And no, this is not the same as other computer tools. A drawing tablet may offer tools to me, but I still have to create myself

      AI is not a "tool" it is the author

      Prompt engineers are editors at best


  • I don't give a shit about getting an insight into the author's mind. It is not even relevant to the experience of art for me.

    You're presuming that your experience of it is universal, and it is not.

    To me, a tool that would produce stories that I enjoy reading would add value to my world if it meant I got more stories I enjoy.

    • I think that discussing this subject in the abstract, with some ideal notion of a tool that generates perpetually enjoyable stories misses the thrust of the general objection, which is actually mechanistic, and not social. LLMs are not this tool, for many (I would say most, but...). LLMs recycle the same ideas over and over and over with trite stylistic variation. Once you have read enough LLM generated/adapted works they're all the same and they lose all value as entertainment.

      There is a moment I come to over and again when reading any longer form work informed by AI. At first, I don't notice (if the author used it 'well'). But once far enough in, there is a moment where everything aligns and I see the structure of it and it is something I have seen a thousand times before. I have seen it in emails and stories and blog posts and articles and comments and SEO spam and novels passed off as human work. In that moment, I stop caring. In that moment, my brain goes, "Ah, I know this." And I feel as if I have already finished reading its entirety.

      There is some amount of detail I obviously do not 'recall in advance of reading it'. The sum total of this is that which the author supplied. The rest is noise. There is no structure beyond that ever present skein patterned out by every single LLM in the same forms, and that skein I am bored of. It's always the same. I am tired of reading it again and again. I am tired of knowing exactly how what is coming up will come, if not the precise details of it, and the way every reaction will occur, and how every pattern of interaction will develop. I am tired of how LLMs tessellate the same shapes onto every conceptual seam.

      I return now to my objection to your dismissal of the value of insight into the author's mind. The chief value, as I see it, is merely that it is always different. Every person has their own experiences and that means when I read them I will never have a moment where I know them (and consequently, the work) in advance, as I do the ghost-writing LLMs, which all share a corpus of experience.

      Further, I would argue that the more apt notion is that insight is the sole value of said work (for entertainment), and that insight is one-time use (or strongly frequency dependent, for entertainment value). Humans actively generate 'things to be insightful of' through lived experience, which enriches their outputs, while LLMs have an approximately finite quantity of such due to their nature as frozen checkpoints, which leads you to "oh, I have already consumed this insight; I have known this" situations.

      If you have a magic tool that always produces a magically enjoyable work, by all means, enjoy. If you do not, which I suspect, farming insight from a constantly varying set of complex beings living rich real life experiences is the mechanical process through which a steady supply of enjoyable, fresh, and interesting works can be acquired.

      Being unaware of this process does not negate its efficacy.

      TL;DR: from the perspective of consumption, generated works are predominantly toothless, as reading any AI work depletes a finite, shared pool of entertaining insight that runs dry too quickly

I don't really understand. You think these great minds of writing lacked the linguistic capability of a model?

The authors were language models! If you want to simulate what they could have done with a model, just train a model on the text that was around when they were alive. Then you can generate as much text as you want that's "the same text they would have generated if they could have" - which for me is just as good, since either way the product is the model's words, not the artist's. What you'd lose is exactly what fascinates you: the author's brain and human perspective!

  • No, quite the opposite, apologies if I was unclear.

    I think that LLMs are a tool, and a tool that is still in the process of being iterated on.

    I think that how this new tool could be applied and iterated on by humans who were, I think, uniquely talented and innovative with language is a useful question to ask oneself.

    It’s a rhetorical device, essentially, to push back against the idea that the sanctity of ‘the novel’ (or other traditional, non-technological, word-based art forms) would somehow be punctured if innovative artists were / are given access to new tools. I feel that idea devalues both the human artist (who has agency to choose which tools to use, how to use them, and how to iterate on those tools) and the form itself.

    I don’t believe that anyone who really adores ‘the novel’ for its formal strengths can also believe that ‘the novel’ won’t withstand [insert latest technology, cinema, VHS, internet, LLMs, etc].

There is no answer to the question "what Joyce would have done…". None. Nil. They are dead, and anything done in their name is by definition not what they would have done, but what future generations - convinced that they know better than the men themselves - decided to do.

It is better to leave unanswerable questions unanswered.

I am not against LLM technologies in general. But this trend of using LLMs to give a seemingly authoritative and conclusive answer to questions where no such thing is possible is dangerous to our society. We will see an explosion of narcissistic disorders as it becomes easier and easier to construct convincing narratives to cocoon yourself in, and if you dare questioning them they will tell you how the LLM passed X and Y and Z benchmarks so they cannot be wrong.

  • I'm confused by this response. I'm fascinated by the question because Joyce (and the other Modernists) are all dead, as you say.

    Were they alive, it wouldn't be a question - we'd be able to see how they used new technologies, of which LLMs are one. And if they chose to use them at all.

    I wasn't trying to provide an answer to that question. You're right that it's unanswerable. That was my point.

    I also - of course - wouldn't presume to know better how to construct a sentence, or story, or novel, using any form of technology, including LLMs, than James Joyce. That would be a completely ridiculous assertion for (almost) anyone, ever, to make, regardless of their generation. I don't really understand what 'generations' have to do with the question I was posing, other than that they underscore its central ineffability.

    I do, however, think it's valuable to take a school of thought (20th century Modernism, for example) and apply it to a new technological advance in an artform. In the same way, I think it's interesting to consider how 18th century Romantic thought would apply to LLMs.

    It's fascinating to imagine Wordsworth, for example, both fully embracing LLMs (where is the OpenRouter Romantic? Can they exist?), and, conversely, fully rejecting LLMs.

    Again, I'm not expecting a factual answer - I do understand that Wordsworth isn't alive anymore.

    But: taking a new technology (like the printing press) and an old school of thought (like classical Greek philosophy) often yields interesting results - as it did with the Enlightenment.

    As such, I don't think there's anything fundamentally wrong with asking unanswerable questions. Quite the opposite. The process of asking is the important part. The answer will be new. That's the other (extremely) important part. How else do you expect forms to advance?

    I'm not terribly interested in benchmarking LLMs (especially for creative writing), or in speculating about "explosions of narcissistic disorders", hence not mentioning either. And I certainly wasn't suggesting we attempt to reach a factually correct answer about what Joyce might ask ChatGPT.

    (The man deserves some privacy - his letters are gross enough!)