Comment by m11a

15 days ago

Most ML is disjoint from the current NN trends, IMO. Compare Bishop's PRML to his Deep Learning textbook. The first couple of chapters are copy-paste preliminaries (probability, statistics, Gaussians, other math background), and then they completely diverge. I'm not sure how useful classical ML is for understanding NNs.

That's fair. My understanding is that NNs and classical ML are similar insofar as they are both about minimizing a loss value (like negative log likelihood). The methods for doing that are very different, though, and once you get more advanced, NN concepts feel like a completely different universe.
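To make the shared-objective point concrete, here's a minimal sketch (a hypothetical toy example, with made-up data and learning rate) of logistic regression fit by gradient descent on the negative log likelihood. A one-layer NN with a sigmoid output trained on cross-entropy minimizes exactly the same objective; the divergence comes later, in architectures and optimization tricks, not in the loss itself.

```python
import numpy as np

# Toy data: labels are a deterministic function of a linear score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])
y = (X @ true_w > 0).astype(float)

def nll(w):
    """Negative log likelihood of a Bernoulli model with sigmoid link."""
    p = 1 / (1 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Plain gradient descent on the NLL, the same loop a simple NN uses.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    grad = X.T @ (p - y) / len(y)  # gradient of the NLL w.r.t. w
    w -= 0.5 * grad                # fixed step size, chosen arbitrarily

print(nll(np.zeros(2)))  # loss at initialization
print(nll(w))            # loss after training: much lower
```

The same code with `p` computed by a multi-layer network (and gradients via backprop) is "deep learning"; the objective being minimized hasn't changed.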