[Bug]: The inference result using CPU on macOS M2 is abnormal, but the result using the TEMPLATE device is normal #24481
Comments
Hi @wanglxchina, do you still need help with this? If so, please let us know what model you use and share a link to it if it's publicly available.
Thank you for your reply. Unfortunately, the model cannot be provided at the moment. This seems to be a bug in OpenVINO: there is an error executing the BatchNorm2d operator on the arm64 platform. When the BatchNorm2d execution result is exported as a model output, the result is correct, but if it is not exported, the result is incorrect. There is no such issue on the x86_64 platform; everything is normal there.
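Since the report centers on BatchNorm2d producing wrong numbers only when its output is not exported, a reference computation is useful for checking what the correct values should be. Below is a minimal NumPy sketch of the inference-mode BatchNorm2d formula (y = gamma * (x - mean) / sqrt(var + eps) + beta) over NCHW input; the function name and toy values are illustrative, not taken from the reporter's model.

```python
import numpy as np

def batchnorm2d_ref(x, gamma, beta, mean, var, eps=1e-5):
    """Inference-mode BatchNorm2d over NCHW input with per-channel stats."""
    c = x.shape[1]
    shape = (1, c, 1, 1)  # broadcast per-channel params over (N, C, H, W)
    scale = gamma.reshape(shape) / np.sqrt(var.reshape(shape) + eps)
    return scale * (x - mean.reshape(shape)) + beta.reshape(shape)

# Toy check: with gamma=1, beta=0, mean=0, var=1 the op is (near) identity.
x = np.arange(8, dtype=np.float32).reshape(1, 2, 2, 2)
y = batchnorm2d_ref(x, np.ones(2, np.float32), np.zeros(2, np.float32),
                    np.zeros(2, np.float32), np.ones(2, np.float32))
```

Comparing a plugin's output for an isolated BatchNorm2d node against a reference like this can confirm whether the operator itself, rather than a fusion applied around it, is at fault.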
@wanglxchina Can you do a quick check of the result using the device directly as CPU (not AUTO:CPU)? What is the result on arm64 for CPU without exporting the BatchNorm2d execution result?
@songbell The results with both CPU and AUTO:CPU are incorrect; I have already tested them. The first thing I tried was using the CPU directly. If the execution results of BatchNorm2d are exported, both CPU and AUTO:CPU results are correct.
@wanglxchina The difference is between the TEMPLATE and CPU devices. Will ask a CPU engineer to take a look.
@allnes Please take a look at the issue. @wanglxchina We would appreciate it if you could provide part of the model (or an equivalent subgraph) that produces incorrect results. It will significantly simplify reproduction work on our side.
@dmitry-gorokhov This is a simple ONNX model with a similar structure for testing.
Hi!
@allnes Hello, has there been any progress on this issue?
@wanglxchina Hello, I am still debugging your issue. Could you please tell me whether you built OpenVINO yourself or used a specific package?
I tried both OpenVINO 2024.0.0 built by myself and OpenVINO 2024.1.0 downloaded from the official website. Both have the same problem.
If possible, could you provide dumped blobs? You can get them by following this instruction with a debug-mode build. I will try to compare your blobs with the ones I get.
@wanglxchina Hello, I apologize for the misinformation. Dumping blobs is not required; you don't have to do it. I am in the process of debugging this bug. Thank you for your patience.
@wanglxchina Hello! According to the latest data, on the current master branch the results for your networks (toy_model.zip) match. Please check whether it works for you on the latest master branch of OpenVINO.
OpenVINO Version
2024.0.0
Operating System
macOS Systems for Apple Silicon
Device used for inference
CPU
Framework
ONNX
Model used
No response
Issue description
Platform: Mac M2
Why does using AUTO:CPU as the inference device produce abnormal results, while using AUTO is normal?
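One way to localize whether the discrepancy comes from the CPU plugin is to run the same model on the CPU device and on the TEMPLATE reference device and compare outputs, as the thread above does. A sketch of that check, assuming the OpenVINO 2024 Python API; the model path is supplied by the user, and the random input and helper names are illustrative:

```python
import sys
import numpy as np

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two result tensors."""
    return float(np.max(np.abs(np.asarray(a) - np.asarray(b))))

def compare_devices(model_path, devices=("CPU", "TEMPLATE")):
    # Deferred import so the helper above is usable without OpenVINO installed.
    import openvino as ov
    core = ov.Core()
    model = core.read_model(model_path)
    # Assumes a single static-shaped input; feed the same data to both devices.
    shape = [d.get_length() for d in model.input(0).get_partial_shape()]
    x = np.random.rand(*shape).astype(np.float32)
    results = {}
    for dev in devices:
        compiled = core.compile_model(model, dev)
        results[dev] = compiled(x)[compiled.output(0)]
    return max_abs_diff(results[devices[0]], results[devices[1]])

if __name__ == "__main__" and len(sys.argv) > 1:
    print("max |CPU - TEMPLATE| =", compare_devices(sys.argv[1]))
```

A large difference here points at the CPU plugin's kernels or graph transformations, since TEMPLATE is a slow but straightforward reference implementation.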