When building the MLModel, it is configured to use the NPU. During inference the GPU appears to be used instead, and the model crashes during Compile.
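For context, the compute-unit preference was set roughly like this (a minimal sketch assuming Core ML's `MLModelConfiguration` and `MLModel.compileModel(at:)` APIs; the model path is a placeholder, and note Core ML may still dispatch unsupported layers to the GPU/CPU even when the Neural Engine is requested):

```swift
import CoreML

// Placeholder path to the source model.
let modelURL = URL(fileURLWithPath: "MyModel.mlmodel")

// Request the Neural Engine (NPU). Core ML treats this as a
// preference, not a guarantee: layers the ANE cannot run are
// scheduled on the GPU or CPU instead.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // or .all

do {
    // The crash reportedly happens at the compile step.
    let compiledURL = try MLModel.compileModel(at: modelURL)
    let model = try MLModel(contentsOf: compiledURL, configuration: config)
    _ = model
} catch {
    print("Compile/load failed: \(error)")
}
```

If this matches the setup, seeing GPU activity during inference is expected fallback behavior; the crash during compile would be a separate issue.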
The stack is as follows:
+1