[Bug]: Model infers on Intel CPU, but crashes on ARM CPU (both systems using Ubuntu 22.04) #24542
Comments
@kmn1024 Hi, thank you for the issue.
Thanks for looking, @allnes! The .bin file: https://mega.nz/file/FalykSAS#IgHmpV_LGO56U1Cdeh2ko9Ggkj7hp9uiw9oyQI9ZAtM The .xml is attached in the original post (as decoder2-openvino-xml.txt)
Thanks for the model. I will get back to you when I have some results.
@kmn1024 Hi! Could you provide your script for the model conversion? Your model (.xml and .bin) appears to have an internal defect.
@allnes Can you please let me know if the above is what you need to help you debug, or do you need something else? I can also do a bit of debugging on my end, if you can guide me on where to look.
@kmn1024 Unfortunately, I could not reproduce this case, so I would like to ask you to build the OpenVINO library with the DEBUG build type and send us a stack trace of the inference crash.
OpenVINO Version
2024.1.0
Operating System
Other (Please specify in description)
Device used for inference
CPU
Framework
None
Model used
Custom (a version of Hifi-GAN)
Issue description
This model is a version of Hifi-GAN with some customizations. I converted it from Pytorch on my desktop, which runs an Intel CPU on Ubuntu 22.04, using openvino-2024.1.0-15008-cp310-cp310-manylinux2014_x86_64.whl, using these instructions: https://github.com/openvinotoolkit/openvino/blob/74829b1ad22fdc5cd915bd0ec1bba5a4c20cfe08/docs/articles_en/openvino-workflow/model-preparation.rst#convert-a-model-with-python-convert_model
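For context, the conversion step followed the linked instructions; the sketch below shows the essential flow. The function name, model object, and example input are placeholders for illustration, not the exact values from my setup, and the imports are guarded so the snippet loads even without the dependencies installed.

```python
# Minimal sketch of the PyTorch -> OpenVINO IR conversion flow (openvino 2024.1).
# The model, example input, and output path are illustrative placeholders.
try:
    import openvino as ov
    HAVE_DEPS = True
except ImportError:  # lets this sketch load without openvino installed
    HAVE_DEPS = False


def export_to_ir(torch_model, example_input, xml_path):
    """Convert a PyTorch module to OpenVINO IR (.xml + .bin)."""
    ov_model = ov.convert_model(torch_model, example_input=example_input)
    ov.save_model(ov_model, xml_path)  # writes xml_path plus the matching .bin
    return xml_path
```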
On the desktop, the model loads (ov.compile_model) and infers perfectly fine. However, if I move the model to an ARM-based edge computer (Orange Pi 5, which has an A76+A55 CPU) with openvino-2024.1.0-15008-cp312-cp312-manylinux_2_31_aarch64.whl installed, the model loads but inference crashes. Stack trace of the crash:
I have attached the xml portion of the saved model:
decoder2-openvino-xml.txt
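For reference, the load-and-infer step that crashes on the ARM device is essentially the following sketch (the input tensor is a placeholder; the import is guarded so the snippet loads without openvino installed):

```python
# Minimal sketch of compiling the IR for CPU and running one inference.
# The actual input tensor and .xml path are placeholders.
try:
    import openvino as ov
except ImportError:  # lets this sketch load without openvino installed
    ov = None


def infer_ir(xml_path, input_tensor):
    """Compile the IR model for CPU and run a single inference."""
    core = ov.Core()
    compiled = core.compile_model(xml_path, "CPU")
    return compiled([input_tensor])  # this call crashes on the Orange Pi 5
```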
Step-by-step reproduction
Difficult. The .bin portion of the saved model is about 90MB, so I cannot upload it. Please let me know if this is absolutely required.
Relevant log output
No response