No, it is a hypothesis I formulated here after reading the article. I did a quick check on Google Scholar but didn't hit any results. The more interesting question is, if it's true, what can you do with this information. Maybe it could be a way to evaluate a complete program or a specific heap allocator, as in "how fast does this program reach universality". Maybe this is something very obvious that has been done before, dunno, heap algos are not my area of expertise.
Today I thought a lot about this topic and was also trying to find connections to computation. It seems like "computational entropy" could be a useful bridge, in the sense that to derive a low-entropy output from a high-entropy input, it seems intuitively necessary to make use of the information in the high-entropy input. In this case you would need to compute the eigenvalues, which requires a certain wrestling with the information in the matrices. So even though the entries of the matrices themselves are random, the process of observing their eigenvalues/eigenvectors has a certain computational complexity involved with processing and "aggregating" that information, in a sense.
I realize what I'm saying is very gestural. The analogous context I'm imagining is deriving blue-noise-distributed points from randomly distributed points: intuitively speaking, it's necessary to inspect the actual positions of the points in order to move them toward the lower-entropy blue-noise distribution, which means "consuming" information about where the points actually are.
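To make that a bit less hand-wavy, here's a rough Python/numpy sketch of what I have in mind (my own toy illustration, not from the article; I'm picking Mitchell's best-candidate algorithm as the blue-noise method, and the names best_candidate/min_gap are just mine). The point is that each new blue-noise point is chosen by looking at where every already-placed point is, which is exactly the "consuming information about positions" step that plain white noise never does:

    import numpy as np

    rng = np.random.default_rng(1)

    def best_candidate(n_points, n_candidates=20):
        # Mitchell's best-candidate algorithm: a cheap way to get an
        # approximately blue-noise point set in the unit square.
        points = [rng.random(2)]
        for _ in range(n_points - 1):
            cands = rng.random((n_candidates, 2))
            existing = np.array(points)
            # Distance from every candidate to its nearest existing point.
            d = np.linalg.norm(cands[:, None, :] - existing[None, :, :], axis=2)
            nearest = d.min(axis=1)
            # Keep the candidate farthest from everything placed so far:
            # this is the step that "consumes" the positions of the points.
            points.append(cands[np.argmax(nearest)])
        return np.array(points)

    def min_gap(p):
        d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        return d.min()

    blue = best_candidate(200)
    white = rng.random((200, 2))   # plain i.i.d. uniform points, for comparison

    print("smallest gap, white noise:", min_gap(white))
    print("smallest gap, blue noise: ", min_gap(blue))

The white-noise set will have some nearly-coincident points; the blue-noise set won't, and the only reason it can avoid them is that the algorithm reads the existing point positions at every step.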
The "random song" thing is similar: in order to make a shuffle algorithm that doesn't repeat, you need to consume information about the history of the songs that have been played. This requirement for memory allows the shuffle algorithm to produce a lower entropy output than a purely random process would ever be able to produce.
So hearing that a "purely random matrix" can have these nicely distributed eigenvalues threw me off for a bit, until I realized that observing the eigenvalues has some intrinsic computational complexity, and that it requires consuming the information in the matrix.
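Here's roughly what I mean in numpy terms (my own sketch, nothing from the article; I'm using a GOE-style symmetric Gaussian matrix as the "purely random matrix"). The entries are i.i.d. random, but the eigenvalue spacing histogram comes out very non-random (small spacings are suppressed), and the only way to see that is to actually diagonalize the matrix, i.e. do real work on its information:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Random symmetric matrix with i.i.d. Gaussian entries (GOE-style).
    a = rng.standard_normal((n, n))
    H = (a + a.T) / np.sqrt(2 * n)

    # "Consuming" the information in the matrix: a full diagonalization,
    # roughly O(n^3) work.
    eigvals = np.linalg.eigvalsh(H)

    # Nearest-neighbour spacings in the bulk, normalized to mean 1.
    bulk = eigvals[n // 4 : 3 * n // 4]
    spacings = np.diff(bulk)
    spacings /= spacings.mean()

    # i.i.d. random numbers would give an exponential (Poisson) spacing law,
    # peaked at 0; eigenvalues of a random matrix repel each other instead,
    # so tiny spacings are rare and the histogram looks anything but flat.
    hist, edges = np.histogram(spacings, bins=30, range=(0.0, 3.0), density=True)
    for lo, h in zip(edges, hist):
        print(f"{lo:4.1f} {'#' * int(40 * h)}")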
Again, this is all very hunchy, I hope you see what I'm getting at.
Interesting, I did not know that colors-of-noise was related to this; what you say sounds conceptually very similar to how Maxwell's demon connects thermodynamics to information theory.