Comment by blobbers
3 hours ago
Can someone help me understand when these neural engines kick in in open source software?
I typically use Python ML libraries like lightgbm, sklearn, xgboost, etc.
I also use numpy for large correlation matrices, covariance etc.
Are these operations accelerated? Is there a simple way to benchmark?
I see a lot of benchmarks on what look like C functions, but in my day job I rely on higher-level libraries. I don't know whether they perform any better on Apple hardware, and unless they expose a flag like use_ane I'm inclined to think they don't use the ANE at all.
Of course ChatGPT suggested I benchmark an Intel Mac vs. newer Apple silicon. Thanks ChatGPT, there's a reason people still hate AI.
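For the "is there a simple way to benchmark?" part: a rough sketch below times a NumPy correlation-matrix computation with `time.perf_counter`. This only measures wall-clock time on whatever backend NumPy dispatches to (typically a BLAS library, not the ANE); the matrix sizes are arbitrary assumptions, adjust them to match your workload.

```python
# Rough timing sketch for a NumPy correlation matrix.
# Assumption: 10k samples x 500 features is representative of your data.
import time

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 500))  # 10k samples, 500 features

t0 = time.perf_counter()
C = np.corrcoef(X, rowvar=False)  # 500x500 correlation matrix
elapsed = time.perf_counter() - t0

print(f"corrcoef on {X.shape}: {elapsed:.4f} s")
```

Run the same script on an Intel Mac and an Apple silicon one (or just different NumPy builds on the same machine) and compare; repeat a few times and take the minimum to reduce noise.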
> when these neural engines kick in in open source software?
It mostly doesn't, because NPUs are bespoke and vendor-specific (which incentivizes neglect by software devs working on open-source numerics and ML/AI infrastructure), and the Apple ANE is no exception. Part of this effort is most likely about fixing that for the specific case of the Apple ANE.
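To see what your NumPy is actually accelerated by, you can dump its build configuration; on Apple silicon wheels this typically reports Accelerate or OpenBLAS as the BLAS/LAPACK backend, never the ANE. A minimal check, assuming a reasonably recent NumPy:

```python
import numpy as np

# Prints build/runtime configuration, including which BLAS/LAPACK
# library NumPy was linked against (e.g. Accelerate, OpenBLAS, MKL).
np.show_config()
```

Whatever shows up there is where your correlation/covariance matmuls run; the ANE is only reachable through Core ML-style model compilation paths, not through BLAS.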
Part of which effort? The "reverse engineering it so it can be used" blog article?
I just think: great, it seems like I'm paying for a hardware accelerator that makes Siri go faster. And I've used Siri on my laptop exactly 0 times in the last infinite years.