
Comment by yousifa

3 days ago

On the Core ML side this is likely because the Neural Engine only supports FP16, and offloading some or all layers to the ANE significantly reduces inference time and power usage when running models. You can inspect it in the Xcode profiler to see what is running on which part of the device at what precision.
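If reproducibility matters more than speed, you can also pin this down at conversion time: coremltools lets you force FP32 and keep the model off the ANE entirely. A rough sketch (the model path and input shape here are just placeholders):

```python
import coremltools as ct

# Convert with precision and compute units pinned explicitly, instead of the
# FP16 / "use everything" defaults. "model.pt" is a placeholder TorchScript model.
mlmodel = ct.convert(
    "model.pt",
    convert_to="mlprogram",
    inputs=[ct.TensorType(shape=(1, 3, 224, 224))],
    compute_precision=ct.precision.FLOAT32,    # keep weights/activations in FP32
    compute_units=ct.ComputeUnit.CPU_AND_GPU,  # skip the FP16-only ANE
)
mlmodel.save("model_fp32.mlpackage")
```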

Yeah, I can see why they let it be that way, but the fact that it's pretty undefined is what bugged me. I suppose it depends on what your goals are: efficiency vs. reproducibility.

Also, I ran a test of FP16 vs FP32 for a large matmul on the Apple GPU, and the FP16 calculation was 1.28x faster, so it makes sense that they'd go with FP16 as the default.
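A benchmark along those lines can be sketched with PyTorch's MPS backend; the matrix size and iteration count here are arbitrary, and the measured speedup will vary with the device and shapes:

```python
import time
import torch

def bench(dtype, n=4096, iters=20):
    # Time an n x n matmul on the Apple GPU via the MPS backend.
    a = torch.randn(n, n, device="mps", dtype=dtype)
    b = torch.randn(n, n, device="mps", dtype=dtype)
    torch.mps.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        c = a @ b
    torch.mps.synchronize()  # wait for the GPU before reading the clock
    return (time.perf_counter() - start) / iters

fp32 = bench(torch.float32)
fp16 = bench(torch.float16)
print(f"fp32: {fp32*1e3:.2f} ms  fp16: {fp16*1e3:.2f} ms  speedup: {fp32/fp16:.2f}x")
```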