Comment by nvr219

3 days ago

Congrats!! Very cool.

I've been thinking about this too, about how different DDN is from other generative models. Generating multiple outputs in a single forward pass sounds like it could really speed things up, especially for tasks where you need a bunch of samples quickly. I'm curious how this compares to GANs, which also sample fast but often struggle with mode collapse.
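If I'm picturing the single-pass idea right, each layer's output head just emits K candidates side by side and training keeps whichever lands closest to the target. A toy PyTorch sketch of how I imagine it (the conv head, shapes, and `select_closest` are my guesses, not the paper's code):

```python
import torch
import torch.nn as nn

class MultiOutputHead(nn.Module):
    """Toy layer: one forward pass emits K candidate images."""
    def __init__(self, channels: int, k: int):
        super().__init__()
        self.k = k
        # A single conv stacks all K candidates along the channel axis.
        self.proj = nn.Conv2d(channels, k * 3, kernel_size=3, padding=1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        b, _, height, width = h.shape
        return self.proj(h).view(b, self.k, 3, height, width)

def select_closest(candidates: torch.Tensor, target: torch.Tensor):
    """Pick, per batch element, the candidate nearest the ground truth."""
    diffs = candidates - target.unsqueeze(1)          # (B, K, 3, H, W)
    dists = diffs.pow(2).flatten(2).mean(dim=2)       # (B, K)
    idx = dists.argmin(dim=1)                         # discrete choice
    chosen = candidates[torch.arange(len(idx)), idx]  # (B, 3, H, W)
    return chosen, idx
```

At sampling time you'd pick `idx` at random instead, so a single pass already hands you K different outputs.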

The zero-shot conditional generation part is wild. Most methods rely on gradients or fine-tuning, so I wonder what makes DDN tick there. Maybe the tree structure of the latent space helps navigate to specific conditions without needing retraining? Also, I'm intrigued by the 1D discrete representation—how does that even work in practice? Does it make the model more interpretable?
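My guess on the zero-shot part is exactly that tree navigation: every level exposes K discrete options, so you can score them with whatever loss the condition defines (masked L2 for inpainting, say) and greedily descend, with no gradients through the network and no retraining. The chosen index at each level would then be the 1D discrete representation. Speculative sketch; the `block` interface returning per-candidate next states is invented:

```python
import torch

@torch.no_grad()
def guided_generate(blocks, h, observed, mask):
    """Greedy guided descent through the candidate tree (illustrative only).

    blocks: hypothetical modules mapping state h -> (candidates, next_states),
    with candidates of shape (1, K, 3, H, W) and one next state per candidate.
    """
    latent = []   # the 1D discrete code: one small integer per level
    sample = None
    for block in blocks:
        candidates, next_states = block(h)
        # Score each candidate by agreement with the observed pixels only.
        err = ((candidates - observed.unsqueeze(1)) * mask).pow(2)
        scores = err.flatten(2).mean(dim=2)   # (1, K)
        idx = int(scores.argmin())
        sample = candidates[0, idx]
        h = next_states[idx]
        latent.append(idx)
    return sample, latent
```

And that `latent` list is maybe where the interpretability comes from: two samples can be compared level by level as short integer sequences.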

The Split-and-Prune optimizer sounds new; I'd love to see how it stacks up against, or combines with, Adam and SGD on similar tasks. And if the whole thing really trains end-to-end with plain gradients, that's a big plus for training stability.
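My bet is that Split-and-Prune is less a drop-in replacement for Adam than a bookkeeping step layered on top of gradient descent, something like K-means cluster maintenance: count how often each candidate wins the selection, split the ones that win far too often, recycle the dead ones. Pure speculation from the name, in toy form:

```python
import numpy as np

def split_and_prune(params, counts, lo=0.25, hi=4.0):
    """Toy guess at the mechanics, not the paper's algorithm.

    params: (K, D) per-candidate parameters; counts: (K,) selection counts.
    A node picked far below its fair share is reseeded as a jittered copy
    of an over-picked node, splitting that node's mass in two.
    """
    counts = counts.astype(float)
    fair = counts.sum() / len(counts)
    for dead in np.where(counts < lo * fair)[0]:
        hot = int(np.argmax(counts))
        if counts[hot] <= hi * fair:
            break  # nothing over-used enough to be worth splitting
        params[dead] = params[hot] + 0.01 * np.random.randn(params.shape[1])
        counts[hot] /= 2.0
        counts[dead] = counts[hot]
    return params, counts
```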

I also wonder about scalability—can this handle high-res images without blowing up computationally? The hierarchical approach seems promising, but I'm not sure how it holds up when moving from simple distributions to something complex like natural images.
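One reassuring back-of-envelope: with a coarse-to-fine hierarchy, pixel counts at dyadic resolutions form a geometric series, so the candidate overhead stays bounded. All numbers below are made up just to feel out the scaling:

```python
# Cost of emitting K candidates per level, coarse to fine (made-up numbers).
K = 64
levels = [(8, 8), (16, 16), (32, 32), (64, 64), (128, 128), (256, 256)]

total = sum(K * 3 * h * w for h, w in levels)  # candidate pixels per pass
final = 3 * 256 * 256                          # pixels in one full-res image
print(f"candidate pixels / final image = {total / final:.1f}x")
# ~85x: the geometric series caps this near K * 4/3, rather than the
# K * num_levels you'd pay emitting every level at full resolution.
```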

Overall though, this feels like one of those papers that could really shift the direction of generative models. Excited to dig into the code and see what kind of results people get with it!