
Comment by snek_case

21 hours ago

I think it's because they're all trained on the same data (everything they could possibly scrape from the open web). The models tend to learn some kind of distribution over what is most likely for a given prompt, so they produce things that look very average, very "likely", but as a result also predictable and unoriginal.

If you want something that looks original, you have to come up with a more original prompt. Or we need to find a way to train these models to sample less likely outputs from their distribution: find a way to mathematically describe what it means to be original.
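For the sampling side, a knob like this already exists in text models: temperature. A minimal NumPy sketch (toy logits, names hypothetical) just to make the idea of "sample the less likely stuff" concrete:

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0, rng=None):
        # temperature > 1 flattens the distribution, so low-probability
        # ("less likely") outputs get drawn more often; < 1 sharpens it
        # toward the single most likely output.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        scaled -= scaled.max()                  # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    logits = [3.0, 1.0, 0.2, 0.1]               # toy "model" preferences
    for t in (0.5, 1.0, 2.0):
        draws = [sample_with_temperature(logits, t) for _ in range(10_000)]
        print(t, np.bincount(draws, minlength=len(logits)) / 10_000)

At t=2.0 the rare options come up far more often. It's a crude knob, but it is a real way to trade "likely" for "diverse".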

A more original prompt won't fix things. Modern base models are tuned to eliminate everything that puts their creators at risk, which is anything clearly made by someone else, anything more or less accurately reproducible. If the model avoids any decent representation of an artist's style, or of anything/anyone likely to end up in court, you won't get a chance at a creative synthesis either.

Do you know of any tools with a parameter that asks the model to be "weird" and increases the diversity of outputs?

  • If you want a chance at real creativity and flexibility, and you have a decent GPU, go local. Check out ComfyUI, download models, and play around. The mainstream services have zero knobs to play with; local is infinite. The closest thing to a "weird" knob is sketched below.
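There's no literal "weird" parameter, but on local setups the guidance scale (CFG) is the nearest equivalent: lower values follow the prompt less strictly and give more varied, less "average" images. A minimal sketch assuming the Hugging Face diffusers library; the checkpoint name and values here are illustrative, not endorsements:

    # Sketch only: assumes the diffusers library and a Stable
    # Diffusion checkpoint; swap in whatever model you downloaded.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",       # any SD checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "a city street at dusk"
    # guidance_scale is the "how typical should this look" dial:
    # low = loose, varied, sometimes weird; high = the most
    # predictable rendering of the prompt.
    for cfg in (3.0, 7.5, 12.0):
        image = pipe(prompt, guidance_scale=cfg,
                     num_inference_steps=30).images[0]
        image.save(f"dusk_cfg_{cfg}.png")

In ComfyUI the same knob is the cfg input on the KSampler node; lower it, and vary seeds and samplers, to push outputs away from the average.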