Slow inference speed after my Core ML model was encrypted

Hi friends,

I have just found that the encrypted model's inference speed dropped to only about 1/10 of the original model's.
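For reference, this is roughly how the comparison could be measured (a minimal sketch; `model` and `input` are placeholders for a compiled MLModel and its MLFeatureProvider input):

```swift
import CoreML
import Foundation

// Average latency of repeated predictions; `model` and `input` are placeholders.
func averagePredictionTime(model: MLModel,
                           input: MLFeatureProvider,
                           iterations: Int = 50) throws -> TimeInterval {
    // Warm up once so one-time setup cost doesn't skew the result.
    _ = try model.prediction(from: input)

    let start = CFAbsoluteTimeGetCurrent()
    for _ in 0..<iterations {
        _ = try model.prediction(from: input)
    }
    return (CFAbsoluteTimeGetCurrent() - start) / Double(iterations)
}
```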

Has anyone encountered this?

Thank you.

Answered by DTS Engineer in 835046022

Hello @wild-bee,

Please file a bug report for this issue using Feedback Assistant. It is unexpected that model encryption would affect inference time.

-- Greg

I profiled the app in Instruments and found that the encrypted model was running inference on the CPU only, which is why it is much slower than the original model.
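In case it's useful to others, here is a minimal sketch of explicitly requesting all compute units when loading the encrypted model (`MyEncryptedModel` is a placeholder for your Xcode-generated model class):

```swift
import CoreML

let config = MLModelConfiguration()
// Request CPU, GPU, and Neural Engine rather than relying on the default selection.
config.computeUnits = .all

// Encrypted models are loaded asynchronously so the decryption key can be fetched.
MyEncryptedModel.load(configuration: config) { result in
    switch result {
    case .success(let model):
        // Run a prediction here and compare timing with the unencrypted model.
        print("Encrypted model loaded: \(model)")
    case .failure(let error):
        print("Failed to load encrypted model: \(error)")
    }
}
```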


Filed the bug with Feedback Assistant.
