
Comment by colordrops

6 years ago

I don't think what you're saying contradicts the text. His point is that we should put our effort into designing and using the tools that tackle the problem space (e.g. neural nets, Monte Carlo search) rather than into reasoning about the problem space itself. That doesn't mean we just throw a for-loop at the data.

But this doesn't work either: convolutional layers in neural networks have a very specific structure, which encodes strong prior knowledge that we have about the problem space (translation invariance). If we just had multilayer perceptrons, we wouldn't be talking about this right now.
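
A toy sketch of that structural prior in code (hypothetical, plain numpy, 1-D for brevity): a convolution reuses one small kernel at every position, which makes it translation-equivariant by construction, while a generic fully connected layer gives no such guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """'Valid' 1-D convolution: one small kernel w is reused at every
    position of x. Weight sharing is the structural prior."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

x = rng.standard_normal(16)   # toy input signal
w = rng.standard_normal(3)    # a single 3-tap kernel: 3 parameters total

# Shifting the input shifts the output by the same amount (away from the
# boundary): convolution is translation-equivariant by construction.
y = conv1d(x, w)
y_shift = conv1d(np.roll(x, 2), w)
print(np.allclose(y_shift[2:], np.roll(y, 2)[2:]))        # True

# A fully connected layer has an independent weight for every
# (input, output) pair, so nothing ties position i to position i+2;
# the same property does not hold for a generic dense map.
W = rng.standard_normal((14, 16))
print(np.allclose(W @ np.roll(x, 2), np.roll(W @ x, 2)))  # False in general
```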

  • >convolutional layers in neural networks have a very specific structure, which encodes strong prior knowledge that we have about the problem space

    Yes. The author's point is that it doesn't do this symbolically.

    Don't get confused with the terms "brute force", "neural net", etc.

    The author's main idea is that AI built on brute force, simple statistical methods, neural nets, etc., wins over AI that tries to implement the kind of deeper reasoning humans apply when thinking about the problem domain consciously.

    • Hmm, I'm not sure I see the difference. Why is it not "symbolic"? The symbols that construct the neural network are what encode translation invariance -- not some vector of reals (see the sketch below).

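A minimal follow-up sketch of that question's premise (hypothetical, plain numpy, same toy conv1d as above): the shift property holds for any kernel values whatsoever, so it is carried by the structure of the operation, i.e. the program text, rather than by the learned vector of reals.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, w):
    # Same 'valid' 1-D convolution as in the sketch above.
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

# The equivariance does not depend on what the weights are: it holds for
# every random kernel, because it comes from weight sharing, not from the
# particular reals that training happens to find.
x = rng.standard_normal(16)
for _ in range(100):
    w = rng.standard_normal(3)
    assert np.allclose(conv1d(np.roll(x, 2), w)[2:],
                       np.roll(conv1d(x, w), 2)[2:])
print("shift-equivariance held for all 100 random kernels")
```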