martinald 2 days ago: Yes, but you could use the space on die for GPU cores.

heavyset_go 1 day ago:
At least on the embedded platforms I'm familiar with, silicon dedicated to an NPU is both faster and more power-efficient than offloading to GPU cores.
If you're going to be doing ML at the edge, NPUs still seem like the most efficient use of die space to me.