📚 todo: Provide compiled Cython/C/C++ binaries #596
Comments
I'm curious about how it is done and implemented in other packages. Do you know of any GPU-accelerated Python package that doesn't require the C++ compiler & CUDA Toolkit? Their devs could help us make the right choices in terms of deployment.
It seems at least for the C++ part, … Edit: looks like PyTorch has CUDA pre-compiled.
That's interesting… It seems it makes one wheel for all platforms and versions. I'm thinking we could use …
Said today: we'll release next version 0.15 with …
We will have to decide: what is the lowest Python version we would want to support?
To partially answer your question @dcmvdbekerom: Python 3.7 is no longer supported, and Python 3.8 will no longer be supported as of November …
💬 Description
Over the past years RADIS has been heavily optimized. Some of these optimizations rely on Cython/C++ code which is compiled during installation of RADIS. However, we shouldn't expect all users to have a compiler installed, and we also don't want them to miss out on optimizations.
The situation is even worse for GPU code, as it currently requires `cupy`, which requires the `CUDA Toolkit` (3 GB), which requires `Microsoft Visual Studio` (10+ GB). To prevent users from having to compile code, we should do it for them.

The pre-built libraries should be "staged", because we also don't want developers to have to rebuild all the code if they're only working on one thing. For example, if a dev works on a Cython optimization, we shouldn't require them to also be able to compile GPU code (which requires the CUDA Toolkit and MSVC).

Cython/C++ code could be compiled into pre-built wheels. The problem with wheels is that we need a separate version for each OS (Windows/Linux/macOS), architecture (32/64-bit), and Python version. However, there is a package called `cibuildwheel` that lets the CI build all these different flavors. I propose we build these whenever a version is released to PyPI.

Currently all Cython functions have a pure-Python backup in case compilation fails. It would be better if we provide the backup in the form of wheels, because then everyone will be running the same code. The disadvantage is that the barrier for development is slightly higher, because it would require more code to be re-compiled during development.
Curious to hear thoughts & suggestions!
Edit: CUDA can be handled entirely without C/C++ compilation, so I made a separate thread here: #603