Comment by emilfihlman
6 years ago
This is a horrible post. It advocates just throwing out research and replacing it with black boxes. Sure, they approximate (or even fully capture) the actual behaviour, but they are opaque.
I'd like to remind everyone that science is in the business of understanding, making things less opaque, less magic and engineering benefits from both.
I think you missed the point. It is saying that when we build AI systems and bake our own understanding of a problem space into them, we inhibit the development of a system that can form its own understanding of that problem space. He gives three very good examples of this. He also explains why people are tempted to do it: it's satisfying and it initially improves the results.