Comment by thatguysaguy
3 months ago
Back when BERT came out, everyone was trying to get it to generate text. These attempts generally didn't work; here's one for reference, though: https://arxiv.org/abs/1902.04094
This doesn't have an explicit diffusion tie-in, but Savinov et al. at DeepMind figured out that doing two steps at training time and randomizing the masking probability is enough to get it to work reasonably well.
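If it helps, here's roughly how I'd sketch that training trick in plain PyTorch with a Hugging Face-style masked LM. The names (`model`, `input_ids`, `mask_token_id`) are my own placeholders, not their actual code:

```python
import torch
import torch.nn.functional as F

def unrolled_denoising_loss(model, input_ids, mask_token_id, vocab_size):
    # Randomize the corruption rate per batch instead of fixing it at 15%.
    mask_prob = torch.rand(()).item()
    corrupt = torch.rand(input_ids.shape, device=input_ids.device) < mask_prob
    corrupted = torch.where(
        corrupt, torch.full_like(input_ids, mask_token_id), input_ids
    )

    # Step 1: denoise the corrupted input, score against the clean targets.
    logits_1 = model(corrupted).logits
    loss_1 = F.cross_entropy(logits_1.view(-1, vocab_size), input_ids.view(-1))

    # Step 2: sample from the step-1 predictions (no gradient through the
    # sample) and denoise again, so the model learns to fix its own mistakes.
    with torch.no_grad():
        sampled = torch.distributions.Categorical(logits=logits_1).sample()
    logits_2 = model(sampled).logits
    loss_2 = F.cross_entropy(logits_2.view(-1, vocab_size), input_ids.view(-1))

    return loss_1 + loss_2
```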
I'm just learning this from your text, after spending last week trying to get a BERT model to talk.
https://joecooper.me/blog/crosstalk/
I've still got a few ideas to try, though, so I'm not done having fun with it.
The trick is to always put the [MASK] at the end:
"The [MASK]" "The quick [MASK]" etc
I've saved this and will study it when I come back to it. Thanks!
Interesting, as I was in the (very large) camp that never considered it for generation and saw it as a pure encoder for things like semantic similarity, with an easy jump to classification, etc.