Comment by lambdaone
6 days ago
I find this fascinating, as it raises the possibility of a single framework that can unify neural and symbolic computation by "defuzzing" activations into what are effectively symbols. Has anyone looked at the possibility of going the other way, by fuzzifying logical computation?
Yes, you can relax logic gates into continuous versions, which makes the system differentiable. An AND gate can be constructed with the function x*y and NOT with 1-x (on inputs in the range [0,1]). From there you can construct a NAND gate, which is universal and can be used to build every other gate. A sigmoid can be used to squash the inputs into [0,1] if necessary.
This paper lists all 16 possible two-input logic gates and their real-valued relaxations in Table 1, if you're interested in this sort of thing: https://arxiv.org/abs/2210.08277
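For concreteness, here's a minimal sketch of that relaxation (product for AND, complement for NOT; the function names are mine):

```python
# "Fuzzified" logic gates on inputs in [0, 1]; smooth, so gradients flow.

def soft_not(x):
    return 1.0 - x

def soft_and(x, y):
    return x * y

def soft_nand(x, y):
    # NAND = NOT(AND); universal, so every other gate can be built from it.
    return soft_not(soft_and(x, y))

def soft_or(x, y):
    # OR from NANDs via De Morgan: x OR y = NOT(NOT x AND NOT y)
    return soft_nand(soft_not(x), soft_not(y))

# At the Boolean corners these agree with the crisp gates:
assert soft_nand(1.0, 1.0) == 0.0
assert soft_or(0.0, 1.0) == 1.0
# In between, outputs are graded rather than all-or-nothing:
print(soft_and(0.9, 0.8))  # 0.72 -- "mostly true"
```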
There's been some work (e.g. RASP, https://arxiv.org/abs/2106.06981) on taking logical computations and compiling them into transformer weights.
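As a rough illustration (not the actual RASP language or a real compiler), RASP programs are built from two primitives, select and aggregate, which correspond to forming an attention pattern and averaging values under it. A toy version in plain numpy:

```python
import numpy as np

# Toy select/aggregate, suggesting how a logical spec maps onto attention.
# Illustrative only -- not the real RASP toolchain.

def select(keys, queries, predicate):
    # Boolean "attention pattern": S[q, k] = predicate(key, query)
    return np.array([[predicate(k, q) for k in keys] for q in queries])

def aggregate(selection, values):
    # Uniform attention: average the selected values at each query position.
    sel = selection.astype(float)
    counts = np.maximum(sel.sum(axis=1), 1.0)  # avoid division by zero
    return (sel @ np.asarray(values, dtype=float)) / counts

tokens = list("abcab")
positions = range(len(tokens))

# Per position: fraction of positions so far (inclusive) holding an 'a'.
prefix = select(positions, positions, lambda k, q: k <= q)
is_a = [1.0 if t == "a" else 0.0 for t in tokens]
print(aggregate(prefix, is_a))  # [1.0, 0.5, 0.333..., 0.5, 0.4]
```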
Sakana AI is also working on merging different transformer models to combine their skills.
https://sakana.ai/evolutionary-model-merge/
https://en.wikipedia.org/wiki/Probabilistic_logic
More generally, machine learning is all about dealing with imprecision, including logic.
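To make the link concrete: in probabilistic logic, sentences carry probabilities rather than truth values, and if you only know P(A) and P(B) but not how A and B are correlated, their conjunction is merely bounded (the Fréchet inequalities). A quick sketch:

```python
# With only P(A) and P(B) known, P(A and B) is bounded, not determined;
# probabilistic logic reasons with bounds like these.

def and_bounds(p_a, p_b):
    lower = max(0.0, p_a + p_b - 1.0)  # A and B overlap as little as possible
    upper = min(p_a, p_b)              # one event contains the other
    return lower, upper

print(and_bounds(0.8, 0.7))  # (0.5, 0.7) -- anything in between is consistent
```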
That was a bust: https://www.reddit.com/r/engineering/comments/pwht4f/whateve...
Do you mean fuzzy logic [1]? It was all the rage in the 1990s.
[1] https://en.wikipedia.org/wiki/Fuzzy_logic
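For anyone who missed that era: the classic Zadeh operators replace the Boolean connectives with min/max over degrees of truth in [0, 1]. A tiny sketch (the membership function and its breakpoints are made up):

```python
# Classic (Zadeh) fuzzy-logic connectives over truth degrees in [0, 1].

def fuzzy_and(x, y):
    return min(x, y)   # a conjunction is only as true as its weakest part

def fuzzy_or(x, y):
    return max(x, y)

def fuzzy_not(x):
    return 1.0 - x

# Hypothetical membership function: "the room is warm",
# ramping from 0 at 15 C up to 1 at 25 C.
def warm(temp_c):
    return min(max((temp_c - 15.0) / 10.0, 0.0), 1.0)

print(fuzzy_and(warm(22.0), 0.4))  # 0.4, limited by the second operand
```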
> fuzzifying logical computation?
Isn't that basically what the sigmoid operator does? Or, going more in the direction of averaging many logical computations, there are random forests.
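In that spirit, a sigmoid with a sharpness parameter interpolates between a fuzzy threshold and a crisp one (the parameter k here is made up for illustration):

```python
import math

# Sigmoid as a soft threshold at 0.5; larger k -> closer to a hard 0/1 step.

def soft_step(x, k):
    return 1.0 / (1.0 + math.exp(-k * (x - 0.5)))

for k in (1, 10, 100):
    print(k, round(soft_step(0.6, k), 4))
# 1   -> 0.525   (barely leans true)
# 10  -> 0.7311
# 100 -> 1.0     (effectively Boolean)
```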