esafak (12 hours ago):
You can convert your own ML models to MLX to use them; Apple Intelligence is not the only application.

    nullstyle (12 hours ago):
    MLX does not run on NPUs, AFAIK; just GPU and CPU. You have to use Core ML to officially run code on the Neural Engine.

        mirsadm (12 hours ago):
        Even then, there is no transparency about how it decides what runs on the ANE, GPU, etc.

            sroussey (10 hours ago):
            Correct. OS-level stuff gets first priority, so you can't count on using it.
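To illustrate the point about Core ML being the official route to the Neural Engine: at model-conversion time you can only *request* a set of compute units, not pin work to the ANE. A minimal sketch using coremltools, assuming a traced PyTorch model (the `convert_for_ane` helper name and the input shape are placeholders for illustration):

```python
import coremltools as ct  # Apple's Core ML conversion toolkit (macOS only)

def convert_for_ane(traced_model, input_shape=(1, 3, 224, 224)):
    """Convert a traced PyTorch model, requesting CPU + Neural Engine.

    compute_units is a request, not a guarantee: the Core ML runtime
    still decides per layer whether ops actually run on the ANE, GPU,
    or CPU, and OS-level workloads take priority on the ANE.
    """
    return ct.convert(
        traced_model,                              # e.g. a torch.jit.trace output
        inputs=[ct.TensorType(shape=input_shape)],
        compute_units=ct.ComputeUnit.CPU_AND_NE,   # request CPU + Neural Engine
    )
```

Other `ct.ComputeUnit` options (`ALL`, `CPU_ONLY`, `CPU_AND_GPU`) express the same kind of request; none of them let you force or observe the actual per-op placement, which is the opacity the thread is describing.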