Comment by esafak 10 hours ago

You can convert your own ML models to MLX to use them; Apple Intelligence is not the only application.

nullstyle 10 hours ago

MLX does not run on NPUs, AFAIK; just GPU and CPU. You have to use CoreML to officially run code on the Neural Engine.

mirsadm 10 hours ago

Even then, there is no transparency into how it decides what runs on the ANE, GPU, etc.

sroussey 8 hours ago

Correct. OS-level stuff gets first priority, so you can't count on using it.