esafak · 14 hours ago
You can convert your own ML models to MLX to use them; Apple Intelligence is not the only application.

  nullstyle · 14 hours ago
  MLX does not run on NPUs, as far as I know; just the GPU and CPU. You have to use Core ML to officially run code on the Neural Engine.

    mirsadm · 14 hours ago
    Even then, there is no transparency about how it decides what runs on the ANE, the GPU, etc.

      sroussey · 12 hours ago
      Correct. OS-level work gets first priority, so you can't count on using it.
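As the thread notes, targeting the Neural Engine goes through Core ML, and an app can only state a compute-unit preference rather than force placement. A minimal Swift sketch of that preference (here `MyModel` is a hypothetical Xcode-generated model class, not something from the thread):

```swift
import CoreML

// Express a compute-unit preference via Core ML. This is a request,
// not a guarantee: Core ML decides per-layer where the model actually
// runs, and (per the thread) OS-level workloads get first priority on
// the ANE.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // alternatives: .all, .cpuAndGPU, .cpuOnly

// "MyModel" stands in for any Xcode-generated Core ML model wrapper.
let model = try MyModel(configuration: config)
```

`.cpuAndNeuralEngine` requires macOS 13 / iOS 16 or later; on older systems `.all` is the closest option, which lets Core ML use the ANE when it chooses to.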