Comment by kleskling

6 months ago

I've been working on a JAX implementation for my own projects. I've implemented everything in the paper except guidance.

See here: https://github.com/homerjed/transformer_flow

I'm happy to see the return of normalising flows; exact-likelihood models have many benefits. I found the model needed soft-clipping on some operations to ensure numerical stability.
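For what it's worth, the kind of soft-clipping I mean is a smooth, differentiable squashing of a value into a bounded range (rather than a hard `jnp.clip`), typically applied to scale/log-scale outputs before exponentiating. A minimal sketch (the bound value is just an illustrative choice, not from the paper):

```python
import jax.numpy as jnp

def soft_clip(x, bound=5.0):
    # Smoothly squash x into (-bound, bound). Near zero this is
    # approximately the identity, so well-behaved values are left
    # almost untouched, while extreme values saturate instead of
    # blowing up when later exponentiated. Unlike jnp.clip, the
    # gradient never becomes exactly zero.
    return bound * jnp.tanh(x / bound)

small = soft_clip(jnp.array(0.1))    # ~0.1 (near-identity)
large = soft_clip(jnp.array(100.0))  # saturates just below 5.0
```

Applying this to log-scales before `jnp.exp` keeps the affine coupling terms (and the log-determinant) finite.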

I wonder whether transformers could be added to the GLOW algorithm, since attention and 1x1 convolutions could be made to perform the same operation.
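To make the comparison concrete: GLOW's invertible 1x1 convolution is a single channel-mixing matrix `W` shared across all spatial positions, with a cheap log-determinant, which is why an attention layer (another learned mixing operation) feels like a natural candidate to generalise it. A minimal sketch of the 1x1 convolution side:

```python
import jax
import jax.numpy as jnp

# GLOW's invertible 1x1 convolution: one C x C matrix W applied to
# the channel vector at every (h, w) position. It mixes channels
# but not positions.
x = jax.random.normal(jax.random.PRNGKey(0), (8, 8, 4))  # H, W, C
W = jax.random.normal(jax.random.PRNGKey(1), (4, 4))

# y[h, w] = x[h, w] @ W for every position.
y = jnp.einsum('hwc,cd->hwd', x, W)

# The Jacobian log-determinant is H * W * log|det W|, since the
# same linear map is applied independently at each position.
logdet = x.shape[0] * x.shape[1] * jnp.linalg.slogdet(W)[1]
```

An attention layer mixes positions as well as channels, so the open question is whether its Jacobian determinant can be kept tractable the way `slogdet(W)` is here.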