Comment by vidarh
1 year ago
With emphasis on the "semi-". They are very good at following prompts, and so overplaying the "random" part is dishonest. When you ask it for something, and it follows your instructions except for injecting a bunch of biases for the things you haven't specified, it matters what those biases are.
Are they good at following prompts?
Unless I format my prompts very specifically, diffusion models are not good at following them. Even then I need to constantly tweak my prompts and negative prompts to zero in on what I want.
That process is novel and pretty fun, but it doesn’t imply the model is good at following my prompt.
LLMs are similar. Initially they seem good at following a prompt, but as the conversation continues they start showing recall issues, knowledge gaps, formatting drift, etc.
It’s not dishonest to say semi-random. It’s accurate. The sampling (decoding) step of inference, for example, draws the next token from a probability distribution which the model generates. Literally stochastic.
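To make that concrete, here's a minimal sketch of temperature sampling, the common decoding step: the model emits a score (logit) per vocabulary token, softmax turns those into probabilities, and the next token is drawn at random from that distribution. The logit values and vocabulary size below are made up for illustration.

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Draw a token index from model logits (hypothetical values).

    Softmax over the (temperature-scaled) logits gives a probability
    distribution; we then take one random draw from it. This draw is
    the stochastic part of generation.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1                        # guard against float rounding

# Hypothetical logits for a 4-token vocabulary. Different seeds can
# yield different tokens from the exact same model state.
logits = [2.0, 1.0, 0.5, -1.0]
samples = {sample_token(logits, temperature=1.0, seed=s) for s in range(50)}
```

Run it with many seeds and you get a spread of different tokens, weighted by the model's probabilities; lower the temperature and the draw concentrates on the top-scoring token. That's the sense in which the output is semi-random: biased by the model, but still a sample.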