Testing issue: vtk-m (+cuda) #44215

Open · 4 tasks done
wspear opened this issue May 15, 2024 · 6 comments

Comments


wspear commented May 15, 2024

Steps to reproduce the failure(s) or link(s) to test output(s)

Singularity> spack find -dvl vtk-m+cuda
-- linux-ubuntu22.04-x86_64 / gcc@11.4.0 ------------------------
wmxwujt vtk-m@2.1.0~64bitids+cuda+cuda_native+doubleprecision+examples~fpic~ipo~kokkos~logging+mpi+openmp+rendering~rocm+shared~tbb~testlib build_system=cmake build_type=Release cuda_arch=90 generator=make patches=64177d0
3gq4owh     cmake@3.27.9~doc+ncurses+ownlibs build_system=generic build_type=Release
bqjvf6e         curl@8.6.0~gssapi~ldap~libidn2~librtmp~libssh~libssh2+nghttp2 build_system=autotools libs=shared,static tls=openssl
pz4ulyn             nghttp2@1.57.0 build_system=autotools
ms4sbsh             openssl@3.2.1~docs+shared build_system=generic certs=mozilla
5bui7u7                 ca-certificates-mozilla@2023-05-30 build_system=generic
fqkbmyz                 perl@5.38.0+cpanm+opcode+open+shared+threads build_system=generic patches=714e4d1
y2dkgvn                     berkeley-db@18.1.40+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc
mvv5tmf                     bzip2@1.0.8~debug~pic+shared build_system=generic
ppgdy35                         diffutils@3.10 build_system=autotools
vn2i3ot                             libiconv@1.17 build_system=autotools libs=shared,static
xc5bypn                     gdbm@1.23 build_system=autotools
fl4uzzk                         readline@8.2 build_system=autotools patches=bbf97f1
xbzeeoo             pkgconf@1.9.5 build_system=autotools
7a4mbz3         ncurses@6.4~symlinks+termlib abi=none build_system=autotools patches=7a351bc
nsjxqfd         zlib-ng@2.1.6+compat+new_strategies+opt+pic+shared build_system=autotools
x7rtpp3     cuda@12.2.0~allow-unsupported-compilers~dev build_system=generic
6cqtdnc     gcc-runtime@11.4.0 build_system=generic
kunl7zm     gmake@4.4.1~guile build_system=generic
w2qwypv     mpich@4.1.2~argobots~cuda+fortran~hwloc+hydra+libxml2+pci~rocm+romio~slurm~vci~verbs~wrapperrpath~xpmem build_system=autotools datatype-engine=auto device=ch4 netmod=ofi pmi=pmi

==> 1 installed package

Error message
==> Error: TestFailure: 1 test failed.


Command exited with status 8:
    '/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/cmake-3.27.9-3gq4owhidx3exqfpga3a465lrn25wvbd/bin/ctest' '--verbose'


1 error found in test log:
     56    
     57    The following tests FAILED:
     58           1 - SmokeTestInternal (Subprocess aborted)
     59    Errors while running CTest
     60    Output from these tests are in: /home/users/wspear/.spack/test/azy74y34mqyn6rsjvcyhdjaxwlcsxnvd/vtk-m-2.1.0-wmxwujt/smoke_test_build/Testing/Temporary/LastTest.log
     61    Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
  >> 62    FAILED: VtkM::test: Command exited with status 8:
     63        '/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/cmake-3.27.9-3gq4owhidx3exqfpga3a465lrn25wvbd/bin/ctest' '--verbose'
     64      File "/spack/bin/spack", line 52, in 
     65        sys.exit(main())
     66      File "/spack/lib/spack/spack_installable/main.py", line 42, in main
     67        sys.exit(spack.main.main(argv))
     68      File "/spack/lib/spack/spack/main.py", line 1068, in main

Information on your system or the test runner

  • Spack: 0.22.0.dev0
  • Python: 3.10.12
  • Platform: linux-ubuntu22.04-zen3
  • Concretizer: clingo

Additional information

@kmorel @vicentebolea

The non-CUDA build of vtk-m passes its test in this same environment.

General information

  • I have reported the version of Spack/Python/Platform/Runner
  • I have run spack maintainers <name-of-the-package> and @mentioned any maintainers
  • I have uploaded any available logs
  • I have searched the issues of this repo and believe this is not a duplicate

kmorel commented May 15, 2024

Is there a platform we can access to replicate this issue?


wspear commented May 15, 2024

Currently we're testing in a Singularity image that provides the Spack-installed software. I've pasted the contents of the test log file below. We're running other CUDA-based tests built with this same CUDA version, and they pass without the driver-version error.

I'm looking into replication options. Please let me know if there are any other diagnostics I can provide in the meantime.

Start testing: May 15 20:54 America
----------------------------------------------------------
1/1 Testing: SmokeTestInternal
1/1 Test: SmokeTestInternal
Command: "/home/users/wspear/.spack/test/azy74y34mqyn6rsjvcyhdjaxwlcsxnvd/vtk-m-2.1.0-wmxwujt/smoke_test_build/smoke_test"
Directory: /home/users/wspear/.spack/test/azy74y34mqyn6rsjvcyhdjaxwlcsxnvd/vtk-m-2.1.0-wmxwujt/smoke_test_build
"SmokeTestInternal" start time: May 15 20:54 America
Output:
----------------------------------------------------------
terminate called after throwing an instance of 'vtkm::cont::cuda::ErrorCuda'
  what():  CUDA Error: CUDA driver version is insufficient for CUDA runtime version
Unchecked asynchronous error @ /tmp/root/spack-stage/spack-stage-vtk-m-2.1.0-wmxwujtmduiulmxcalaznuz4cz3pymzv/spack-src/vtkm/cont/cuda/internal/RuntimeDeviceConfigurationCuda.h:40
(Stack trace unavailable)
<end of output>
Test time =   0.19 sec
----------------------------------------------------------
Test Failed.
"SmokeTestInternal" end time: May 15 20:54 America
"SmokeTestInternal" time elapsed: 00:00:00
----------------------------------------------------------

End testing: May 15 20:54 America


wspear commented May 15, 2024

If you have access to a system with a Tesla-generation or newer GPU where you can run Singularity, this is the image we used: https://oaciss.nic.uoregon.edu/e4s/images/24.05/e4s-cuda80-x86_64-24.05.sif (running it with the -e and --nv flags should work). It's over 45 GB in size, so I understand if that's not an option for you.

This is the procedure that reproduces the issue on our side:

wget https://oaciss.nic.uoregon.edu/e4s/images/24.05/e4s-cuda80-x86_64-24.05.sif
singularity run -e --nv e4s-cuda80-x86_64-24.05.sif
git clone https://github.com/E4S-Project/testsuite.git
cd testsuite/validation_tests/vtk-m-cuda
./run.sh
# To sanity check that CUDA works in the container you can do:
cd ../cuda
./compile.sh
./run.sh

vicentebolea commented

I think the error is that vtk-m builds device code only for the specified cuda_arch, whereas the other apps might embed code for older cuda_archs as well in the same binaries. Can you share the nvidia-smi output from the target system?
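One way to probe that theory (a sketch, not a confirmed fix): Spack's cuda_arch variant can take multiple values, so vtk-m could be rebuilt with device code for several architectures at once. The helper below is hypothetical; it just maps a compute-capability string, as printed by `nvidia-smi --query-gpu=compute_cap` on recent drivers, to the corresponding cuda_arch value:

```shell
# Hypothetical helper: convert a compute capability ("9.0" for H100,
# "8.0" for A100, ...) into the matching Spack cuda_arch value by
# dropping the dot.
cc_to_cuda_arch() {
  printf '%s' "$1" | tr -d '.'
}

cc_to_cuda_arch 9.0   # prints 90
```

With that value in hand, a multi-architecture rebuild might look like `spack install vtk-m +cuda cuda_arch=80,90` (assuming the package accepts multiple cuda_arch values).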

@wspear
Copy link
Contributor Author

wspear commented May 23, 2024

Here it is:

Singularity> nvidia-smi 
Thu May 23 05:14:14 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA H100 PCIe    On   | 00000000:25:00.0 Off |                    0 |
| N/A   45C    P0    80W / 310W |    123MiB / 81559MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      5613      G   /usr/libexec/Xorg                  63MiB |
|    0   N/A  N/A      9552      G   /usr/bin/gnome-shell               59MiB |
+-----------------------------------------------------------------------------+
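A quick sanity check on these numbers (a sketch with values hardcoded from this thread, not a diagnosis): the driver advertises CUDA 12.0 support, while the Spack spec installs cuda@12.2.0. A strict comparison with `sort -V` flags that gap, though CUDA 12.x minor-version compatibility may still make the driver acceptable in practice:

```shell
# Returns success (0) if the driver's advertised CUDA version ($1) is
# >= the installed toolkit/runtime version ($2): under version ordering,
# the smaller of the two must then be the runtime version.
driver_supports_runtime() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Values from this thread: nvidia-smi reports "CUDA Version: 12.0",
# while the spec installs cuda@12.2.0.
if driver_supports_runtime "12.0" "12.2"; then
  echo "driver >= runtime"
else
  echo "driver < runtime (strictly); relies on minor-version compatibility"
fi
```

This prints the "driver < runtime" branch for the versions above; whether that actually matters here depends on the minor-version compatibility rules in NVIDIA's documentation.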

vicentebolea commented

The NVIDIA driver seems to be sufficient: https://docs.nvidia.com/deploy/cuda-compatibility/index.html

Another possibility is that the process is picking up an integrated GPU. Is this a Jetson-type machine?
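If device selection is the suspect, one illustrative way to pin the process to a specific GPU is sketched below. The listing string is hardcoded from the nvidia-smi output earlier in this thread (a live `nvidia-smi -L` would print something similar); the index extraction and the CUDA_VISIBLE_DEVICES pinning are assumptions for illustration, not a confirmed fix:

```shell
# One line per device, in `nvidia-smi -L` style; hardcoded here since we
# only have the output pasted in this thread (UUID is a placeholder).
listing='GPU 0: NVIDIA H100 PCIe (UUID: GPU-hypothetical)'

# Extract the index of the first listed NVIDIA GPU.
idx=$(printf '%s\n' "$listing" | grep -m1 'NVIDIA' | sed 's/^GPU \([0-9]*\):.*/\1/')

# Pin CUDA to that device so the smoke test cannot pick a different GPU.
export CUDA_VISIBLE_DEVICES="$idx"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"   # prints CUDA_VISIBLE_DEVICES=0
```

Running the smoke test with CUDA_VISIBLE_DEVICES set this way would rule device selection in or out as the cause.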
