Comment by dartos

1 year ago

Are they good at following prompts?

Unless I format my prompts very specifically, diffusion models are not good at following them. Even then I need to constantly tweak my prompts and negative prompts to zero in on what I want.

That process is novel and pretty fun, but it doesn’t imply the model is good at following my prompt.

LLMs are similar. Initially they seem good at following a prompt, but continue the conversation and they start showing recall issues, knowledge gaps, improper formatting, etc.

It’s not dishonest to say semi-random; it’s accurate. The sampling step of decoding, for example, draws each next token from a probability distribution that the model generates. Literally stochastic.
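
To make that concrete, here's a minimal sketch of what that sampling step looks like. The function name, the toy logits, and the use of NumPy are my own illustration, not anything from a specific inference stack; real decoders add tricks like top-k or nucleus filtering on top of this, but the core stochastic draw is the same.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Draw one token id from the distribution implied by the logits.

    Hypothetical helper for illustration: real inference code works the
    same way in principle, but usually adds top-k / top-p filtering.
    """
    rng = rng or np.random.default_rng()
    # Temperature scaling: higher values flatten the distribution,
    # lower values sharpen it toward the argmax.
    scaled = logits / temperature
    # Softmax (shifted for numerical stability) turns logits into probabilities.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # The stochastic step: repeated calls with the same logits
    # can return different tokens.
    return rng.choice(len(probs), p=probs)

# Toy 5-token vocabulary; same logits, yet repeated draws vary.
logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])
print([sample_next_token(logits) for _ in range(10)])
```

Run it a few times and the output list changes, which is the whole point: identical prompt, identical model weights, different completions.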