Comment by kragen

7 days ago

Maybe. So far it seems to be a lot better at creative idea generation than at writing correct code, though apparently these "agentic" modes can often get close enough after enough iteration. (I haven't tried things like Cursor yet.)

I agree that it's also not currently capable of judging those creative ideas, so I have to do that.

This sort of discourse really grinds my gears. The framing of it, the conceptualization.

It's not creative at all, any more than summing the text on a topic and throwing a dart at it is. It's a mild, short step beyond a weighted random draw, and certainly not capable of any real creativity.
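The "weighted random" characterization roughly matches how next-token decoding actually works: the model produces scores (logits) over a vocabulary, and a token is drawn at random in proportion to those scores. A minimal sketch, using toy logits and a hypothetical four-word vocabulary (not any real model's output):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into a probability distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    # Weighted random draw over the vocabulary, as in LLM sampling-based decoding.
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]

# Toy example: "choosing" the next word after "The cat sat on the".
vocab = ["mat", "dog", "moon", "chair"]
logits = [3.2, 0.5, -1.0, 1.8]  # hypothetical model scores
print(sample_next_token(vocab, logits, temperature=0.8))
```

Lower temperature sharpens the distribution toward the top-scoring token; higher temperature flattens it toward a uniform draw, which is where the "dart at the sum of text" analogy comes from.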

Myriads of HN enthusiasts often chime in here with "Are humans any more creative?" and other blather. Well, that's a whataboutism, and doesn't detract from the fact that creative does not exist in the AI sphere.

I agree that you have to judge its output.

Also, sorry for hanging my comment here. Might seem over the top, but anytime I see 'creative' and 'AI', I have all sorts of dark thoughts. Dark, brooding thoughts with a sense of deep foreboding.

  • Point taken, but if slushing up half of human knowledge and picking something to fit the current context isn't creative, then humans are rarely creative either.

  • > Well, that's a whataboutism, and doesn't detract from the fact that creative does not exist in the AI sphere.

    Pointing out that your working definition excludes reality isn't whataboutism, it's pointing out an isolated demand for rigor.

    If you cannot clearly articulate how human creativity (the only other type of creativity that exists) is not impugned by the definition you're using as evidence that creativity "does not exist in the AI sphere", you're not arguing from a place of knowledge. Your assertion is just as much sophistry as that of the people who assert it is creative. Unlike them, however, you're having to argue against instances where it does appear creative.

    For my own two cents, I don't claim to fully understand how human creativity emerges, but I am confident that all human creative works rest heavily on a synthesis of the author's previous experiences, both personal and drawn from others' creative works (often more heavily the latter). If your justification for a lack of creativity is that LLMs are merely synthesizing from previous works, then your argument falls flat.

    • I'll play along with your tack in this argument, although I certainly do not agree it is accurate.

      You're asserting that creativity is a meld of past experience, both personal and the creative output of others. Yet this really doesn't jibe, as an LLM does not "experience" anything. I would argue that raw knowledge is not "experience" at all.

      We might compare this to the university graduate, head full of books and data jammed therein, and yet that exceptionally well-versed graduate needs "experience" on the job for quite some time before being of any use.

      The same may be true of learning how to do anything, from driving, to riding a bike, to simply being in conversations with others. Being told things on paper (or having them baked into a derived "knowledge store") means absolutely nothing in terms of actually experiencing them.

      Heck, just try to explain sex to someone before they've experienced it. No matter the literature, play, movie or act performed in front of them, experience is entirely different.

      And an AI does not experience the universe, nor is it driven by the myriad of human totality, from the mind o'lizard to the flora and fauna in one's gut. There is no motive driving it; it does not strive to mate, for example, something that drives all aspects of mammalian behaviour.

      So intertwined with the mating urge is human experience that it is often said all creativity derives from it. The sparrow dances, the worm wiggles, and the human scores 4 touchdowns in one game. Thank you, Al.

      Comparatively, an LLM does not reason, nor consider, nor ponder. It is "born" with full access to all of its memory store, has data spewed at it, searches, responds, and then dies. It is not capable of learning in any stream of consciousness. It has no memory from one birth to the next, unless you feed its own output back to it. It can gain no knowledge except from the "context" assigned at birth.

      An LLM, essentially, understands nothing. It is not "considering" a reply. It's all math, top to bottom, all probability, taking all the raw info it has and just spewing whatever fits next best.

      That's not creative.

      Any more than Big Ben's gears and cogs are.

    • Agreed.

      "Whataboutism" is generally used to describe a more specific way of pointing out an isolated demand for rigor—specifically, answering an accusation of immoral misconduct with an accusation that the accuser is guilty of similar immoral misconduct. More broadly, "whataboutism" is a term for demands that morality be judged justly, by objective standards that apply equally to everyone, rather than by especially rigorous standards for a certain person or group. As with epistemic rigor, the great difficulty with inconsistent standards is that we can easily fall into the trap of applying unachievable standards to someone or some idea that we don't like.

      So it makes some sense to use the term "whataboutism" for pointing out an isolated demand for rigor in the epistemic space. It's a correct identification of the same self-serving cognitive bias that "whataboutism" targets in the space of ethical reasoning, just in a different sphere.

      There's the rhetorical problem that "whataboutism" is a derogatory term for demanding that everyone be judged by the same standards. Ultimately that makes it unpersuasive and even counterproductive, much like attacking someone with a racial slur—even if factually accurate, as long as the audience isn't racist, the racial slur serves only to tar the speaker with the taint of racism, rather than prejudicing the audience against its nominal target.

      In this specific case, if you concede that humans are no more creative than AIs, then it logically follows that either AIs are creative to some degree, or humans are not creative at all. To maintain the second, you must adopt a definition of "creativity" demanding enough to exclude all human activity, which is not in keeping with any established use of the term; you're using a private definition, greatly limiting the usefulness of your reasoning to others.

      And that is true even if the consequences of AIs being creative would be appalling.

  • I understand. I share the foreboding, but I try to subscribe to the converse of Hume's guillotine.