Comment by CGamesPlay
2 years ago
Well, yeah, but the training process means that the compression is both lossy and much less efficient than a standard compression method like gzip. You could even train your NN until it can losslessly recall the data, but we generally call that "overfitting" in the lingo.
The way you'd do compression with a NN is to use it to predict the probability of the next symbol, and feed that into an arithmetic coder to produce the compressed representation. This process is lossless, and better prediction quality translates directly into better compression.
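To make that concrete, here's a minimal sketch of the scheme in Python. The `CountModel` class (an adaptive frequency counter I made up for illustration) stands in for the NN's next-symbol probability estimates; a real system would query the network instead. It uses exact `Fraction` arithmetic, which sidesteps the interval-renormalization tricks a production arithmetic coder needs, so it's only practical for short inputs:

```python
from fractions import Fraction

class CountModel:
    """Adaptive symbol model; stands in for the NN's next-symbol predictions."""
    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.counts = {s: 1 for s in self.alphabet}  # Laplace-smoothed counts

    def interval(self, symbol):
        """Cumulative-probability interval [low, high) assigned to `symbol`."""
        total = sum(self.counts.values())
        low = Fraction(0)
        for s in self.alphabet:
            high = low + Fraction(self.counts[s], total)
            if s == symbol:
                return low, high
            low = high
        raise KeyError(symbol)

    def symbol_at(self, point):
        """Inverse lookup: which symbol's interval contains `point`?"""
        total = sum(self.counts.values())
        low = Fraction(0)
        for s in self.alphabet:
            high = low + Fraction(self.counts[s], total)
            if low <= point < high:
                return s, low, high
            low = high

    def update(self, symbol):
        self.counts[symbol] += 1

def encode(message, alphabet):
    """Narrow [0, 1) by each symbol's predicted interval; any point in the
    final interval identifies the whole message."""
    model = CountModel(alphabet)
    lo, hi = Fraction(0), Fraction(1)
    for sym in message:
        s_lo, s_hi = model.interval(sym)
        span = hi - lo
        lo, hi = lo + span * s_lo, lo + span * s_hi
        model.update(sym)  # both sides update identically -> lossless
    return (lo + hi) / 2

def decode(code, length, alphabet):
    """Replay the same model updates to recover the message exactly."""
    model = CountModel(alphabet)
    lo, hi = Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        point = (code - lo) / (hi - lo)  # rescale code into current interval
        sym, s_lo, s_hi = model.symbol_at(point)
        out.append(sym)
        span = hi - lo
        lo, hi = lo + span * s_lo, lo + span * s_hi
        model.update(sym)
    return "".join(out)

msg = "abracadabra"
code = encode(msg, "abcdr")
assert decode(code, len(msg), "abcdr") == msg
```

The key point the example shows: as long as encoder and decoder run the same model and update it with the same symbols, the round trip is exact, and a sharper predictor shrinks the final interval, which is what shorter output means. (Decoding here needs the message length up front; a real coder would use an end-of-stream symbol instead.)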