[Bug]: when loading model, 'Floating point exception (core dumped)' happened #24559
Comments
Hi @zhulei2017, please validate on the recent 2022.3.2 release: https://github.com/openvinotoolkit/openvino/releases/tag/2022.3.2 Best regards,
Hi @rkazants, thanks for the reply. This issue can still be reproduced on the 2022.3.2 version:
benchmark_app -m net512.onnx
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2022.3.2-9279-e2c7e4d7b4d-releases/2022/3
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2022.3.2-9279-e2c7e4d7b4d-releases/2022/3
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 304.08 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Model inputs:
[ INFO ] input (node: input) : f32 / [...] / [1,4,32,512,512]
[ INFO ] Model outputs:
[ INFO ] output (node: output) : f32 / [...] / [1,2,32,512,512]
[Step 5/11] Resizing model to match image sizes and given batch
[ INFO ] Model batch size: 1
[Step 6/11] Configuring input of the model
[ INFO ] Model inputs:
[ INFO ] input (node: input) : f32 / [...] / [1,4,32,512,512]
[ INFO ] Model outputs:
[ INFO ] output (node: output) : f32 / [...] / [1,2,32,512,512]
[Step 7/11] Loading the model to the device
Floating point exception (core dumped)
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 52
On-line CPU(s) list: 0-51
Thread(s) per core: 1
Core(s) per socket: 26
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 5320 CPU @ 2.20GHz
Stepping: 6
CPU MHz: 2200.000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 39936K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Best regards
Hello @zhulei2017, I don't see the issue on 24.1, while I do see it on 22.3. Is it possible for you to upgrade to 24.1 while we are figuring out which commit needs to be backported?
Hello @andrei-kochin, thanks for the reply. I tested it on 24.1 and it works fine. On 2022.3.1, sliding-window inference was used to avoid this input shape; this avoids the crash at the cost of longer inference time, which is acceptable. It would be very nice if this issue could be fixed in the 2022 branch. Thanks for your work. Best regards
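For reference, the sliding-window workaround mentioned above can be sketched roughly as follows. This is a minimal NumPy-only sketch under assumptions not stated in the thread: `run_model` is a hypothetical stand-in for the compiled OpenVINO inference call, the 256x256 window follows the spatial size that was reported to load successfully, and the spatial dimensions are assumed divisible by the window size (real sliding-window inference, e.g. in nnUNet, also overlaps and blends tiles).

```python
import numpy as np

def sliding_window_infer(volume, run_model, win=256):
    """Run inference on win x win spatial tiles and stitch the results.

    volume: array of shape (1, C, D, H, W); run_model is assumed to map a
    (1, C, D, win, win) tile to a (1, C_out, D, win, win) tile.
    H and W are assumed to be multiples of win (no overlap/blending here).
    """
    _, _, d, h, w = volume.shape
    # Probe one tile to learn the output channel count and dtype.
    probe = run_model(volume[:, :, :, :win, :win])
    out = np.zeros((1, probe.shape[1], d, h, w), dtype=probe.dtype)
    for y in range(0, h, win):
        for x in range(0, w, win):
            tile = volume[:, :, :, y:y + win, x:x + win]
            out[:, :, :, y:y + win, x:x + win] = run_model(tile)
    return out
```

With this, a 1x4x32x512x512 input is processed as four 1x4x32x256x256 tiles, which is the shape the reporter found to work.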
OpenVINO Version
2022.3.1 LTS
Operating System
Other (Please specify in description)
Device used for inference
CPU
Framework
ONNX
Model used
nnUNet
Issue description
Detailed description
Hi, I am using version 2022.3.1 to deploy a model. Loading the model fails with a specific CPU and input size (Xeon(R) Gold 5320, input 4x32x512x512) with 'Floating point exception (core dumped)':
When I switched to a Core i5-8500, the model loaded successfully.
When I set the input shape to 4x32x256x256, the model loaded successfully.
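For context on the two shapes, a quick calculation of the raw f32 input-tensor sizes shows the scale difference (size alone does not explain the crash, since the same model loads on a Core i5-8500; this is only an illustration):

```python
def f32_mib(shape):
    """Size of an f32 tensor with the given shape, in MiB (4 bytes/element)."""
    n = 1
    for dim in shape:
        n *= dim
    return n * 4 / (1024 ** 2)

print(f32_mib((1, 4, 32, 512, 512)))  # failing input: 128.0 MiB
print(f32_mib((1, 4, 32, 256, 256)))  # working input: 32.0 MiB
```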
System information (version)
To reproduce the problem, I provide an ONNX model with randomized weights, net512.onnx. To avoid attachment size limitations, the model was split into eight files:
cat net_00.tar.gz net_01.tar.gz net_02.tar.gz net_03.tar.gz net_04.tar.gz net_05.tar.gz net_06.tar.gz net_07.tar.gz > net.tar.gz
tar -zxvf net.tar.gz
net_00.tar.gz
net_01.tar.gz
net_02.tar.gz
net_03.tar.gz
net_04.tar.gz
net_05.tar.gz
net_06.tar.gz
net_07.tar.gz
Step-by-step reproduction
conda create -n pyov22 python=3.8
pip install openvino==2022.3.1
pip install openvino-dev==2022.3.1
benchmark_app -m net512.onnx
Relevant log output