Comment by movpasd
1 year ago
The important thing about matrices is their linear properties. Most proofs in linear algebra over real vector spaces don't rely on the particularities of the real number system, just the fact that it is a field [0].
To make an analogy with programming, you can think of a matrix as a generic container type. Normally we think of the contained type as the real numbers, but you could replace it with anything that supports the field operations -- addition, multiplication, and their inverses -- that is, anything that implements the "field interface/trait/typeclass" [1].
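To make the analogy concrete, here is a minimal sketch of that interface as a Haskell typeclass, with the reals (approximated by Double) as the usual instance. The names `Field`, `add`, `mul`, etc. are made up for illustration; real Haskell code would reach for the standard `Num` and `Fractional` classes instead.

    -- A hypothetical "field interface" as a typeclass.
    class Field a where
      zero, one :: a
      add, mul  :: a -> a -> a
      neg, inv  :: a -> a   -- additive / multiplicative inverse
                            -- (inv is partial: undefined at zero)

    -- The usual choice of contained type: the real numbers.
    instance Field Double where
      zero = 0
      one  = 1
      add  = (+)
      mul  = (*)
      neg  = negate
      inv  = recip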
So you can define linear algebra over any field: the real numbers, of course, but also the rational numbers or the complex numbers. Fields can also be finite; in particular, the integers modulo n form a field if n is prime. My suggestion is to consider the case n = 2, usually written GF(2). [2]
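Continuing the sketch above, GF(2) fits the same interface: addition mod 2 is XOR and multiplication mod 2 is AND.

    -- GF(2): the two-element field.
    newtype GF2 = GF2 Bool deriving (Eq, Show)

    instance Field GF2 where
      zero                = GF2 False
      one                 = GF2 True
      add (GF2 x) (GF2 y) = GF2 (x /= y)  -- addition mod 2 is XOR
      mul (GF2 x) (GF2 y) = GF2 (x && y)  -- multiplication mod 2 is AND
      neg x               = x             -- -x = x, since 1 + 1 = 0
      inv (GF2 True)      = GF2 True
      inv (GF2 False)     = error "zero has no multiplicative inverse"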
It's a bit hard to get geometric intuition about what matrices over finite fields even mean, but a lot of the theorems we can prove in the real case generalize, because they depend only on the field properties of the reals. That's why I suggested it -- it would make things like matrix inversion straightforward. And that natural relationship between linear subspaces and holding certain bits constant might make search possible.
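For instance, here is a rough, unoptimized sketch of what inversion could look like over GF(2): Gauss-Jordan elimination where XOR is the only row operation needed. (The names `invertGF2` and `xorRow` are made up for illustration, and plain `Bool` lists are used instead of the `GF2` type above, to keep it short.)

    import Data.List (partition)

    type Row    = [Bool]   -- a row vector over GF(2)
    type Matrix = [Row]

    -- XOR two rows elementwise; over GF(2), "subtracting a multiple
    -- of one row from another" is always just XOR.
    xorRow :: Row -> Row -> Row
    xorRow = zipWith (/=)

    -- Invert an n x n matrix over GF(2) by Gauss-Jordan elimination
    -- on the matrix augmented with the identity. Returns Nothing if
    -- the matrix is singular.
    invertGF2 :: Matrix -> Maybe Matrix
    invertGF2 m = go 0 (zipWith (++) m (identity n))
      where
        n = length m
        identity k = [[i == j | j <- [0 .. k-1]] | i <- [0 .. k-1]]
        go col rows
          | col == n  = Just (map (drop n) rows)  -- right half is the inverse
          | otherwise =
              case partition (!! col) (drop col rows) of
                ([], _)    -> Nothing             -- no pivot: singular
                (p:ps, qs) ->
                  -- XOR the pivot row into every other row that still
                  -- has a 1 in this column, then move to the next column.
                  let clear r = if r !! col then xorRow p r else r
                  in go (col + 1)
                        (map clear (take col rows)
                           ++ [p] ++ map (xorRow p) ps ++ qs)

Over the reals the same algorithm needs pivot scaling and care about numerical stability; over GF(2) every nonzero pivot is already 1, which is what makes inversion so clean.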
The issue (which others pointed out) is that linearity imposes so much structure that the result might not seem pseudo-random anymore. But you could consider something like XORing all the numbers with a randomly generated bit string -- it won't hold up to close scrutiny, but it might fool a cursory human eye.
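A sketch of that masking idea (the mask value here is just a placeholder; in practice it would be freshly sampled):

    import Data.Bits (xor)
    import Data.Word (Word64)

    -- A fixed, randomly chosen bit string (placeholder value).
    mask :: Word64
    mask = 0xDEADBEEFCAFEBABE

    -- XOR is an involution: masking twice gives back the original,
    -- so this extra scrambling step is trivially invertible.
    obscure, reveal :: Word64 -> Word64
    obscure x = x `xor` mask
    reveal    = obscure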
---
[0] https://en.wikipedia.org/wiki/Field_(mathematics)
[1] The analogy breaks down slightly because not only must the operations be defined, they must also obey laws, which can't be expressed in (most) programming language type systems. But if you've used Haskell, you'll have seen this before: a Monad instance has to obey the Monad laws.