Comment by yugretcx
3 months ago
Why do these text diffusion demos always make it look like the number of allowed tokens is fixed for a specific unfilled region?
Is this the case?
I.e., if the region only has four tokens (here, characters) but the model calculates that the best word is “forget”, does it just abandon the best fit, or truncate it to fit?
Are there text diffusion models with lax infill directives?
Tokens start out as a special [MASK] token. Then, as the diffusion process runs, they are "unmasked", i.e. sampled.
So yes, you define a sequence of [MASK] tokens with some length ahead of time.
In practice, if the model wants to write a shorter sequence, it'll just fill the remaining tokens with empty content. If it wants to write a longer sequence, you have to detect this yourself and extend the sequence with more [MASK] tokens. It's usually easy to spot, because no "end of sequence" token appears when the model wants to keep generating.
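Roughly, the loop looks like the sketch below. This is a minimal illustration, not any particular library's API: `MASK_ID`, `EOS_ID`, and the `model(tokens)` -> logits signature are all placeholder assumptions.

```python
import torch

MASK_ID = 50257   # hypothetical id for the special [MASK] token
EOS_ID = 50256    # hypothetical end-of-sequence id

@torch.no_grad()
def diffusion_infill(model, tokens, extend_by=16, max_len=512):
    """Unmask [MASK] positions one at a time; if the region fills up without an
    end-of-sequence token, append more [MASK]s and keep going."""
    tokens = tokens.clone()
    while True:
        masked = tokens == MASK_ID
        if masked.any():
            logits = model(tokens.unsqueeze(0))[0]        # (seq_len, vocab_size)
            conf, pred = logits.softmax(-1).max(-1)
            conf[~masked] = float("-inf")                 # only consider masked slots
            pos = conf.argmax()
            tokens[pos] = pred[pos]
        elif (tokens == EOS_ID).any() or tokens.numel() >= max_len:
            return tokens
        else:
            # no EOS anywhere: the model "wants" to keep writing, so grow the region
            tokens = torch.cat([tokens, torch.full((extend_by,), MASK_ID)])
```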
Yes, this is the case. During training, the model gets a sequence of text (e.g., 512 tokens) with a percentage of the tokens masked out (replaced with a special <MASK> token). It learns to unmask those tokens to reconstruct the original text.
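As a rough sketch of that training objective (the mask-token id and the model interface are assumptions, and real diffusion LMs typically vary the masking rate per example rather than fixing it):

```python
import torch
import torch.nn.functional as F

MASK_ID = 50257  # hypothetical id for the special <MASK> token

def masked_reconstruction_loss(model, tokens, mask_rate=0.3):
    """One training example: hide a fraction of the tokens and score the model
    only on reconstructing the positions that were masked out."""
    mask = torch.rand(tokens.shape) < mask_rate
    corrupted = torch.where(mask, torch.full_like(tokens, MASK_ID), tokens)
    logits = model(corrupted.unsqueeze(0))[0]        # (seq_len, vocab_size)
    return F.cross_entropy(logits[mask], tokens[mask])
```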
In the case you mentioned, if we had 4 <MASK> tokens in a row, decoding just predicts what those 4 tokens should be.
In practice this does not seem to be a significant problem, since there are usually multiple ways to express an idea at different lengths. Confidence-aware parallel decoding also helps: a well-trained model that commits to its highest-confidence tokens first will usually steer itself away from the situation you describe.
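For concreteness, here is a hedged sketch of what confidence-aware parallel decoding looks like; the per-step budget, threshold, and model interface are illustrative assumptions, not taken from any specific paper.

```python
import torch

MASK_ID = 50257  # hypothetical id for the special <MASK> token

@torch.no_grad()
def parallel_decode(model, tokens, per_step=4, threshold=0.9):
    """Each step, commit only the masked positions the model is most confident
    about (above `threshold`, at most `per_step` of them); the rest stay masked
    and get revisited once more context has been filled in."""
    tokens = tokens.clone()
    while (tokens == MASK_ID).any():
        logits = model(tokens.unsqueeze(0))[0]           # (seq_len, vocab_size)
        conf, pred = logits.softmax(-1).max(-1)
        conf[tokens != MASK_ID] = float("-inf")          # never rewrite committed tokens
        order = conf.argsort(descending=True)            # most confident positions first
        chosen = [p for p in order[:per_step] if conf[p] >= threshold]
        if not chosen:                                   # avoid stalling: always take one
            chosen = [order[0]]
        for p in chosen:
            tokens[p] = pred[p]
    return tokens
```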