That's also the year they released on-chip acceleration for certain workloads, so they probably started working on that tech a year or two earlier. Not as accidental as assumed.
Apple's Neural Engine from 2017 is an NPU that's entirely obsolete today in light of Metal Compute Shaders. It was accidental, and Apple is now pivoting their GPUs away from raster efficiency in acknowledgement that it was the wrong bet.
CUDA, on the other hand, continues to be relevant, and the compute capabilities from 2014 are still instrumental for accelerating training and inference workloads.
Nvidia has research papers on accelerating Machine Learning as far back as 2014: https://research.nvidia.com/publications?f%5B0%5D=research_a...
Apple's website from 2017 https://machinelearning.apple.com/research?page=1&sort=oldes...