Comment by nearbuy
9 days ago
> Current generation of AI models can't think of anything truly new.
How could you possibly know this?
Is this falsifiable? Is there anything we could ask it to draw where you wouldn't just claim it must be copying some image in its training data?
Novelty in one medium arises from novelty in others, from shifts to the external environment.
We got brass bands from brass instruments, synth music from synths.
We therefore know, necessarily, that there can be nothing novel from an LLM -- it has no live access to novel developments in the broader environment. If synths had been invented after its training, it could never produce synth music (and so on).
The claim here is trivially falsifiable, so obviously so that credulous fans of this technology bake it into their misunderstanding of novelty itself: have an LLM produce content on developments which had yet to take place at the time of its training. It obviously cannot do this.
Yet an artist who paints with a new kind of black pigment can, trivially so.
> arises from novelty in others, from shifts to the external environment
> Everything is simply a blend of prior work.
I generally consider these two to be the same thing. If novelty is based on something else, then it's highly derivative and its novelty is very questionable.
A quantum random number generator is far more novel than the average human artist.
> have an LLM produce content on developments which had yet to take place at the time of its training. It obviously cannot do this.
Put someone in jail for the last 15 years, and ask them to make a smartphone. They obviously cannot do it either.
So if your point is that an LLM is something like a person kept in a coma inside solitary confinement -- sure? But I don't believe that's where we set the bar for art: we aren't employing comatose inmates to do anything.
> I generally consider these two to be the same thing.
Sure, words themselves bend and break under the weight of hype. Novelty is randomness. Everything is a work of art. For a work of art to be novel, it need only incorporate randomness.
The fallacies of ambiguity abound to the point where coherent speech disappears completely.
An artist who finds a cave half-collapsed for the first time has an opportunity to render that novel physical state of the universe into art. Every moment which passes has a near infinite amount of such novel circumstances.
Since an LLM cannot do that, we must wreck and ruin our ability to describe this plain and trivial situation. Poke our eyes and skewer our brains.
1 reply →
Kind of a weird take that excludes the vast majority of human artwork that most people would consider novel. For all the complaints one might have of cubism, few would claim it's not novel. And yet it's not based on any new development in the external world but rather on mashing together different perspectives. Someone could have created the style 100 years earlier if they were so inclined, and had Picasso never existed, someone could create the novel style today just by "remixing" ideas from past art in that very particular way.
I would argue that Picasso's life experiences, the environments he grew up in and lived in, the people he interacted with, and the world events that took place during his life (like the world wars) were the external developments that led to cubism. Sure, an AI could take in and analyze the works that existed prior, but it couldn't have had the emotional reaction that occurred en masse after WWI and started the breakdown of more classical forms of art and the rise of more abstract ones.
Or, as the kids might say, AI couldn't feel the vibe shift occurring in the world at the time.
1 reply →
Let's reverse that. "Current generation AI models can think of things that are truly new."
How could you possibly know that? Could you prove that an image wasn't copying from images in its training data?
No one here claimed the AI models made something truly new.
The commenter flessner asserted it couldn't, despite having no way to demonstrate it. They are passing off faith as fact.
Assuming someone did want to show AI models can make new stuff...
> How could you possibly know that? Could you prove that an image wasn't copying from images in its training data?
This isn't even that hard. You just need to know what images are in your training data when you train your model. A researcher with a small grant could do this.
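To make that concrete, here's a minimal sketch of what such a check could look like: hash every image in a known training set, then compare a generated image against all of them for near-duplicates. This is purely illustrative, not anyone's actual methodology; the directory and file names, the simple average-hash, and the 0.9 threshold are all assumptions, and a real study would use stronger perceptual hashes or learned embeddings.

```python
from pathlib import Path
from PIL import Image
import numpy as np

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """Downscale to size x size grayscale, then threshold at the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Fraction of matching hash bits (1.0 means identical hashes)."""
    return float((h1 == h2).mean())

# Hash the full, known training set once (hypothetical directory name).
train_hashes = {p.name: average_hash(str(p))
                for p in Path("training_images").glob("*.png")}

# Compare a generated image (hypothetical filename) against every training image.
gen_hash = average_hash("generated.png")
name, train_hash = max(train_hashes.items(),
                       key=lambda kv: similarity(gen_hash, kv[1]))
score = similarity(gen_hash, train_hash)

print(f"Closest training image: {name} (bit similarity {score:.2f})")
if score > 0.9:  # illustrative threshold, not a rigorous criterion
    print("The generated image closely resembles a training image.")
else:
    print("No near-duplicate found in the known training set.")
```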