
Comment by HarHarVeryFunny

1 month ago

I'm just using (inductive) "bias" to refer to assumptions about the data that are built into the model and the way it learns, such as the way a CNN is built to assume spatial locality of patterns.
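(For anyone who wants to see what "built in" means concretely, here's a minimal PyTorch sketch - not from the comment, just an illustration: the conv layer's small shared kernel *is* the locality assumption, while the dense layer makes no such assumption and would have to learn any spatial structure from data.)

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # a single 32x32 RGB image

# Conv layer: each output value only sees a 3x3 neighbourhood, and the same
# weights are reused at every position -- spatial locality and translation
# equivariance are baked into the architecture.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

# Fully-connected layer on the flattened image: every output can depend on
# every pixel, so no locality bias is built in.
dense = nn.Linear(3 * 32 * 32, 16)

print(conv(x).shape)              # torch.Size([1, 16, 32, 32])
print(dense(x.flatten(1)).shape)  # torch.Size([1, 16])
```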

My point is that the more biases you build in, especially with a poorly understood goal like AGI, the more chance you have of either getting them wrong or simply over-constraining the model. In general less is more: you want the minimal set of biases needed to learn effectively, without adding others that the model could learn better for itself (giving it greater generality).