Comment by alganet
1 month ago
> finding the right (minimal) set of biases is hard
I'm not familiar with this concept of a minimal set of biases.
The way I see it, it's a series of loosely community-defined thresholds. If there's something like a theory that defines those biases in a formal way, I would very much like to read it.
I'm just using (inductive) "bias" to refer to assumptions about the data that are built into the model and the way it learns, such as the way a CNN is built to assume spatial locality of patterns.
My point is that the more biases you build in, especially with a poorly understood goal like AGI, the greater the chance of either getting them wrong or simply over-constraining the model. In general less is more: you want the minimal necessary set of biases to learn effectively, without adding ones the model could learn better for itself (yielding greater generality).
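The CNN locality point can be made concrete by counting parameters. The toy code below (my own illustration, not from the thread) compares a fully connected layer, which assumes nothing about the data, with a 1-D convolution, which bakes in locality and translation invariance via weight sharing:

```python
# Sketch of how an inductive bias shows up as a constraint on parameters.
# A dense layer mapping n inputs to n outputs needs n*n weights; a 1-D
# convolution with kernel size k assumes local, translation-invariant
# patterns, so it needs only k weights regardless of n.

def dense_param_count(n):
    """Weights in a fully connected n -> n layer (no built-in assumptions)."""
    return n * n

def conv1d_param_count(k):
    """Weights in a 1-D conv layer with kernel size k (locality + sharing)."""
    return k

def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation, as CNNs actually compute)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

n = 8
print(dense_param_count(n))              # 64 free parameters
print(conv1d_param_count(3))             # 3 free parameters
print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

If the data really is spatially local, the conv layer's constraint is a huge win; if it isn't, the same constraint prevents the model from ever representing the right function, which is the over-constraining risk described above.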