Comment by funnymunny
2 years ago
Very cool and intuitive: if x0 and x1 are in category A while x2 is in category B, then the concatenation of x0 and x1 is more compressible than the concatenation of x0 and x2. It makes sense that this could do better than a NN on out-of-distribution data.
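A minimal sketch of that intuition using normalized compression distance (NCD) with Python's stdlib gzip; the strings x0, x1, x2 here are hypothetical examples, not from the paper:

```python
import gzip

def clen(s: str) -> int:
    """Compressed length of a string under gzip."""
    return len(gzip.compress(s.encode()))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: lower means more similar."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Hypothetical examples: x0 and x1 share a topic; x2 does not.
x0 = "the striker scored twice and the match ended two nil"
x1 = "the keeper saved a penalty late in the second half of the match"
x2 = "quarterly revenue beat analyst estimates on strong cloud sales"

# If the intuition holds, x0+x1 compresses relatively better than
# x0+x2, so ncd(x0, x1) < ncd(x0, x2).
print(ncd(x0, x1), ncd(x0, x2))
```

The shared substrings ("the ", "match") give gzip's LZ77 stage back-references to exploit in the same-category pair, which is exactly the effect the comment describes.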
Simplicity aside, practicality is unclear. It seems you can't escape the need to perform at least one compression operation per class per inference. E.g., classifying X into one of 10 categories requires at least 10 string compressions, and probably more if you want better accuracy.