Comment by zorked

6 days ago

If there is something that I would like AI to never touch, it's that. Please stop making the world worse.

Not everyone shares your worldview, and some people do want to apply machine intelligence to their writing process.

You don't have to participate; ignore AI-generated or AI-assisted content just like you ignore some other thing you don't enjoy that already exists today. But you also don't have to devalue and dismiss the interests of others.

  • All the people generating AI-assisted writing are the people who never had enough passion or talent to do it before. If you weren't inclined to write fiction or poetry etc. before AI was here to do it for you, you probably shouldn't be doing it now.

    • That's extremely presumptuous. I've published two novels and written hundreds of poems over the years (the latter I'm not sure I'll ever publish), and while I will keep writing manually, I'd love to have AI tools that could write all the things I want to read that don't exist, that I don't want to write myself.

      I don't get remotely the same things out of reading and writing, so writing those stories myself does not give me the enjoyment I'd want out of reading them.

    • Awful take. Transformers have greatly increased the potential number of cool things I can do in my lifetime. I've written poetry, short stories, I draw, and am an experienced professional software engineer, and thanks to transformers I've been able to augment my creative workflow.

      People were similarly dismissive about computers in general. And calculators, and the printing press, and Photoshop, and cameras, and every other disruptive technology. Yet, people found a way to be creative with them even before society accepted their medium.

      Truth is, you don't get to decide what someone else's creative journey looks like.

      20 replies →

Wow! Why?

Personally, I'm fascinated by the question of what Joyce would have done with SillyTavern. Or Nabokov. Or Burroughs. Or T. S. Eliot, who incorporated news clippings into The Waste Land, which feels, to me, extremely analogous to the way LLMs refract existing text into new patterns.

  • Creative works carry meaning through their author. The best art gives you insight into the imaginative mind of another human being—that is central to the experience of art at a fundamental level.

    But the machine does not intend anything. Based on the article as I understand it, this product basically does some simulated annealing of the quality of art as judged by an AI to achieve the "best possible story"—again, as judged by an AI.

    Maybe I am an outlier or an idiot, but I don't think you can judge every tool by its utility. People say that AI helps them write stories, I ask to what end? AI helps write code, again to what end? Is the story you're writing adding value to the world? Is the software you're writing adding value to the world? These seem like the important questions if AI does indeed become a dominant economic force over the coming decades.

    • Ah, fair enough. I believe quite strongly that creative works' meaning exists for the reader / audience / user. I don't think interpretation of art is towards an authorial, authoritative truth - rather that it's a lens to view the world through, and change one's perspective on it - so this is where we differ. But I understand your viewpoint.

      I do agree that the LLM's idea of achieving the 'best possible story' is defined entirely by its design and prompting, and that is obviously completely ridiculous - not least because appreciating (or enduring) a story is a totally subjective experience.

      I do disagree that one needs to ask "to what end?" when talking about writing stories, the same way one shouldn't need to ask "to what end?" about a pencil or a paintbrush. The joy of creating should be in the creation.

      Commercial software is absolutely a more nuanced, complex topic - it's so much more intertwined with people's jobs, livelihoods, aeroplanes not falling out of the sky, power grids staying on, etc. That's a different, separate question. I don't think it's fair to equate them.

      I think LLMs are the most interesting paintbrush-for-words we've come up with since the typewriter (at least), and that, historically, artists who embrace new technologies that arise in their forms are usually proven to be correct in their embrace of them.

      5 replies →

    • I don't give a shit about getting insight into the author's mind. It is not even relevant to the experience of art for me.

      You're presuming that your experience of it is universal, and it is not.

      To me, a tool that produced stories I enjoy reading would add value to my world, if it meant I got more stories I enjoy.

      1 reply →

  • I don't really understand. You think these great minds of writing lacked the same level of linguistic capability as a model?

    The authors were language models! If you want to simulate what they could have done with a model, just train a model on the text that was around when they were alive. Then you can generate as much text as you want that's "the same text they would have generated if they could have", which for me is just as good, since either way the product is the model's words, not the artist's. What you'd lose, the thing that actually fascinates you, is the author's brain and human perspective!

    • No, quite the opposite, apologies if I was unclear.

      I think that LLMs are a tool, and a tool that is still in the process of being iterated on.

      I think that how this new tool could be applied and iterated on by humans who were, I think, uniquely talented and innovative with language is a useful question to ask oneself.

      It’s a rhetorical device, essentially, to push back against the idea that the sanctity of ‘the novel’ (or other traditional, non-technological, word-based art forms) would somehow be punctured if innovative artists were / are given access to new tools. I feel that idea devalues both the human artist (who has agency to choose which tools to use, how to use them, and how to iterate on those tools) and the form itself.

      I don’t believe that anyone who really adores ‘the novel’ for its formal strengths can also believe that ‘the novel’ won’t withstand [insert latest technology, cinema, VHS, internet, LLMs, etc].

  • There is no answer to the question “what Joyce would have done…”. None. Nil. They are dead, and anything done in their name is by definition not what they would have done, but what future generations, convinced that they know better than the men themselves, did.

    It is better to leave unanswerable questions unanswered.

    I am not against LLM technologies in general. But this trend of using LLMs to give a seemingly authoritative and conclusive answer to questions where no such thing is possible is dangerous to our society. We will see an explosion of narcissistic disorders as it becomes easier and easier to construct convincing narratives to cocoon yourself in, and if you dare questioning them they will tell you how the LLM passed X and Y and Z benchmarks so they cannot be wrong.

    • I'm confused by this response. I'm fascinated by the question because Joyce (and the other Modernists) are all dead, as you say.

      Were they alive, it wouldn't be a question - we'd be able to see how they used new technologies, of which LLMs are one. And if they chose to use them at all.

      I wasn't trying to provide an answer to that question. You're right that it's unanswerable. That was my point.

      I also - of course - wouldn't presume to know better than James Joyce how to construct a sentence, or story, or novel, using any form of technology, including LLMs. That would be a completely ridiculous assertion for (almost) anyone, ever, to make, regardless of their generation. I don't really understand what 'generations' have to do with the question I was posing, other than underscoring its central ineffability.

      I do, however, think it's valuable to take a school of thought (20th century Modernism, for example) and apply it to a new technological advance in an artform. In the same way, I think it's interesting to consider how 18th century Romantic thought would apply to LLMs.

      It's fascinating to imagine Wordsworth, for example, both fully embracing LLMs (where is the OpenRouter Romantic? Can they exist?), and, conversely, fully rejecting LLMs.

      Again, I'm not expecting a factual answer - I do understand that Wordsworth isn't alive anymore.

      But: taking a new technology (like the printing press) and an old school of thought (like classical Greek philosophy) often yields interesting results - as it did with the Enlightenment.

      As such, I don't think there's anything fundamentally wrong with asking unanswerable questions. Quite the opposite. The process of asking is the important part. The answer will be new. That's the other (extremely) important part. How else do you expect forms to advance?

      I'm not terribly interested in benchmarking LLMs (especially for creative writing), or in speculating about "explosions of narcissistic disorders", hence not mentioning either. And I certainly wasn't suggesting we attempt to reach a factually correct answer about what Joyce might ask ChatGPT.

      (The man deserves some privacy - his letters are gross enough!)

Like it or not, this stuff will happen. Might as well be curious about it.

  • Technological change doesn't happen independent of culture. Stop with the technological determinism.

    • I'd argue that technological change which is a simple enough iteration on existing technology to be carried out by small groups of people will, with very high probability, happen independent of culture once a population is large enough.

      In this case, so many people are curious as to whether we can make this work and/or see financial implications that this will happen irrespective of whether wider culture rejects it.

      1 reply →

    • The proper counter will be cultural determinism, if sufficient people insist upon supporting human writers crafting real books.

I think it's fine as long as its output is watermarked so you can avoid it.

  • If you need to watermark it then you don't need to watermark it, though.

    • AI content should absolutely be overtly marked, imo. A beep should precede AI speech, a visual marker for graphics, etc. This should have been a rule from the beginning.

      Pretending to be human, like pretending to be a police officer, should have consequences.

    • So if I can make an AI agent which talks to you just like your husband/wife/girlfriend etc, I can just send you messages without identifying myself as an AI?

      I mean, if you can't tell the difference it doesn't matter right?

      5 replies →

AI is a great writing assistant, if a human is in the driver's seat determining WHAT to write and retaining creative control over the outputs it can only lead to better creative writing. This is because the human can spend less time (re)writing and more time refining and tuning, and AI is a great brainstorming partner/beta reader.

More capable AI systems make the world better. If you don't like AI written material on principle, you can simply choose not to read it. Follow human writers who don't use AI.

  • I feel that when making a claim like that, the burden of proof is on you to explain how AI makes the world a better place. I have seen far more of the opposite since the advent of GPT-3. Please do not say it makes you more productive at your job, unless you can also clearly derive how being better at your job might make the world a better place.

    • I could list many breakthroughs in medicine, material science, and engineering. Point out how AI makes information more accessible, automates away drudgery, etc. I see it every day, making the world a better place.

      But I feel this disagreement isn't precisely about the technical details. If your stance is based on some fundamental idea of politics/philosophy, I can't change your mind.

  • That’s precisely the problem, though. The internet is already rapidly filling with AI-generated slop, and it takes a non-trivial amount of human brain power to determine whether the how-to article you’re reading is actually a reliable source or whether it was churned out to generate ad revenue.

    The infinite number of monkeys with typewriters are generating something that sounds enough like Shakespeare that it’s making it harder to find the real thing.

    • Determining the quality of a how-to or any other kind of information you're looking for is the same job whether it was created by a human or by an AI. Check sources, read horizontally, patronize trusted producers and get your information from there.

      We've got a tragedy of the commons, in which we've grown complacent that search engines and the wisdom of crowds (of nameless strangers) would see us through, but that was never a good strategy to begin with.

      AI slop does little but highlight this fact and give us plenty of reason to vet our sources more carefully.

      2 replies →

    • > AI-generated slop

      This phrase is kind of interesting to me because it implies that everything AI-generated is "slop". What happens when the AI is generating decent content?

      Like, what if we develop AI to the point where the most insightful, funny, or downright useful content is AI-generated? Will we still be calling it, "AI-generated slop"?

      5 replies →

    • I deliberately wrote my comment in a way that would preempt this response. Yet here we are.

      I honestly have little to no problem with finding and filtering the stuff I want to see. All the writers and creators I liked five or ten years ago? Basically all of them are still there and not hard to find. My process of finding new people has not changed.

You cannot stop people from making the world worse or better. The best you can do is focus on your own life.

In time many will say we are lucky to live in a world with so much content, where anything you want to see or read can be spun up in an instant, without labor.

And though most will no longer make a living doing some of these content creation activities by hand and brain, you can still rejoice knowing that those who do it anyway are doing it purely for their love of the art, not for any kind of money. A human who writes or produces art for monetary reasons is only just as bad as AI.

  • > In time many will say we are lucky to live in a world with so much content, where anything you want to see or read can be spun up in an instant, without labor.

    Man, you are talking about a world that's not just much worse but apocalyptically gone. In that world, there is no more art, full stop. The completeness and average-ness of stimulation would be the exact equivalent of sensory deprivation.

    • It seems paradoxical to say there is no more art, when AI’s ability for art generation is infinite.

      AI art can be equally stimulating, especially for people who will eventually be born in a time when AI generated art has always existed for them. It is only resisted by those who have lived their whole lives expecting all art to be human generated.

      3 replies →

  • > You cannot stop people from making the world worse or better.

    I can think of quite a few ways to do this.

  • >You cannot stop people from making the world worse or better. The best you can do is focus on your own life.

    We have laws and regulations for a reason.

  • > A human who writes or produces art for monetary reasons is only just as bad as AI.

    Or they're what you call "a professional artist," aka "people who produce art so good that other people are willing to pay for it."

    Another HN commenter who thinks artfulness is developed over decades and that individual art pieces are made over hundreds of hours out of some charity... Ridiculously ignorant worldview.

    • > Or they're what you call "a professional artist," aka "people who produce art so good that other people are willing to pay for it."

      If this is okay, then why isn't an AI that produces art so good that other people are willing to pay for it also okay? They are equivalent.

      15 replies →

  • > A human who writes or produces art for monetary reasons is only just as bad as AI.

    Tell that to all the Renaissance masters.

  • Clearly you've never made a list of OpenAI data centre locations before.