Comment by eyvindn

5 years ago

1) This is often the first question people familiar with NNs ask, and rightly so. Compression was not one of our goals with this article, and it would look like a terrible compression algorithm if that were its purpose. In fact, the model displayed here has about 8.3k parameters (although the WebGL model is quantized; more on this in the last section), and each model learns to encode an image consisting of 44x44x3 = 5808 integers. We made no attempt to minimize this number. The key thing to bear in mind is that all the cells share the exact same rule and the image generation starts from a single one of them, meaning they have to learn to communicate locally with their neighbours to self-organize into the correct pattern. This is a very non-trivial task, and the majority of the model's parameters are likely devoted to this communication protocol and growth behavior.
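To make the "one shared rule" point concrete, here is a minimal NumPy sketch with made-up layer sizes (16 channels, 32 hidden units — not the article's architecture). The parameter count depends only on the rule, not on the grid size, because the same tiny network is applied at every cell:

```python
import numpy as np

def shared_rule(perception, w1, b1, w2, b2):
    """Tiny two-layer update rule, shared by every cell."""
    h = np.maximum(perception @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2                          # per-cell state update

rng = np.random.default_rng(0)
channels, hidden = 16, 32
# The parameters exist once, regardless of grid size:
w1 = rng.normal(size=(channels * 3, hidden)) * 0.1
b1 = np.zeros(hidden)
w2 = rng.normal(size=(hidden, channels)) * 0.1
b2 = np.zeros(channels)
n_params = w1.size + b1.size + w2.size + b2.size  # 2096 here

def ca_step(grid):
    """Apply the same rule at every cell of an HxWxC grid."""
    # Perception: the cell's own state plus finite-difference
    # gradients of its neighbourhood (a stand-in for the article's
    # Sobel-filter perception).
    gx = np.roll(grid, -1, axis=1) - np.roll(grid, 1, axis=1)
    gy = np.roll(grid, -1, axis=0) - np.roll(grid, 1, axis=0)
    perception = np.concatenate([grid, gx, gy], axis=-1)
    return grid + shared_rule(perception, w1, b1, w2, b2)

# Growth starts from a single seeded cell:
grid = np.zeros((44, 44, channels))
grid[22, 22, :] = 1.0
grid = ca_step(grid)
```

Note that a 44x44 and a 440x440 grid would use exactly the same 2096 parameters; only the activations grow with the grid.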

2) We have not tried 3D. As for animations, some of our earliest experiments suggested one could achieve "animations" by applying the loss at key points, so that the model learns to iterate through these points across several time steps.
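The keyframe idea can be sketched as a loss applied only at chosen time steps. This is an illustration of the concept with toy data, not the article's training code:

```python
import numpy as np

def keyframe_loss(states, keyframes):
    """Sum of per-keyframe MSEs between CA states at selected time
    steps and target frames (a sketch of the idea only)."""
    return sum(float(np.mean((states[t] - target) ** 2))
               for t, target in keyframes.items())

# Toy example: states indexed by time step, two target keyframes.
states = {t: np.full((4, 4, 3), t / 10.0) for t in range(11)}
keyframes = {3: np.full((4, 4, 3), 0.3),   # at step 3, match frame A
             8: np.full((4, 4, 3), 0.8)}   # at step 8, match frame B
loss = keyframe_loss(states, keyframes)
```

Intermediate steps carry no loss, so the model is free to choose how it transitions between the keyframes.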

3) One could argue the WebGL implementation does this to some extent by quantizing the learned weights we take from the Tensorflow training code. The model remains very resilient and worked out of the box in almost all cases. Moreover, if one tried to inject explicit noise into the CA at a given location, some models would have no problem adapting to it, while others would fail miserably. Some early experiments yielded remarkably resistant models, able to recover while subject to continuous, globally occurring noise. We suspect explicitly training them with injected noise would drive the models towards more consistently resistant behaviors.
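The kind of weight quantization involved can be illustrated with a generic uniform 8-bit scheme (an illustration of the general technique, not the demo's exact scheme):

```python
import numpy as np

def quantize(w, bits=8):
    """Uniformly quantize weights to 2**bits levels over their range;
    returns integer codes plus (scale, lo) needed to invert."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / (2 ** bits - 1)
    codes = np.round((w - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Map integer codes back to approximate float weights."""
    return codes.astype(np.float64) * scale + lo

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 16))
codes, scale, lo = quantize(w)
w_hat = dequantize(codes, scale, lo)
# Rounding error is bounded by half a quantization step:
max_err = float(np.abs(w - w_hat).max())
```

The quantization error acts as a small, fixed perturbation of every weight, which is one reason a model's out-of-the-box resilience to it is a weak form of robustness.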

4) One of the main obstacles to larger patterns at the moment is memory usage during a forward/backward pass. There are optimizations and tricks we plan to employ to generate larger and more complex patterns, which may be discussed in a follow-up thread.
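To see why memory is the bottleneck: training backpropagates through many CA steps, so the activations of every step must be kept. A back-of-the-envelope estimate with assumed numbers (16 channels, 96 steps, float32 — not the authors' exact figures):

```python
# Activation memory for backprop through time over a CA rollout:
# one HxWxC float32 state retained per step, per training sample.
def activation_bytes(h, w, channels, steps, bytes_per_float=4):
    return h * w * channels * steps * bytes_per_float

small = activation_bytes(44, 44, 16, 96)    # ~11.9 MB per sample
large = activation_bytes(256, 256, 16, 96)  # ~402.7 MB per sample
```

Because the cost scales linearly with pixel count and step count, even a modest increase in pattern size multiplies memory use; techniques like gradient checkpointing (recomputing intermediate states instead of storing them) trade compute for memory here.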

Dave Ackley, who developed the Moveable Feast Machine, had some interesting thoughts about moving from 2D to 3D grids of cells:

https://news.ycombinator.com/item?id=21131468

DonHopkins, 4 months ago, on: Wolfram Rule 30 Prizes

Very beautiful and artistically rendered! Those would make great fireworks and weapons in Minecraft! From a different engineering perspective, Dave Ackley had some interesting things to say about the difficulties of going from 2D to 3D, which I quoted in an earlier discussion about visual programming:

https://www.youtube.com/channel/UC1M91QuLZfCzHjBMEKvIc-A