Support multiple backends (NumPy, PyTorch, JAX, CuPy, ...) by decoupling the implementation of each DMD variant from the provider of linear algebra operations. This would enable running on distributed architectures and GPUs, and possibly even automatic differentiation.
This could be achieved by inferring the backend from the type of the vectors/matrices provided by the user, and writing a mapping class that dispatches functions and methods to the appropriate provider.
Example (high-level idea)
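A minimal sketch of the dispatch idea, assuming a registry that maps array types to linear algebra modules; the names `Backend` and `compute_svd` are hypothetical and not part of PyDMD:

```python
import numpy as np


class Backend:
    """Map array types to the linear algebra module that handles them."""

    _registry = {}

    @classmethod
    def register(cls, array_type, linalg_module):
        cls._registry[array_type] = linalg_module

    @classmethod
    def from_array(cls, X):
        # Infer the backend from the type of the user-provided array.
        for array_type, linalg_module in cls._registry.items():
            if isinstance(X, array_type):
                return linalg_module
        raise TypeError(f"No backend registered for {type(X)}")


# NumPy is always available; other backends are registered if installed.
Backend.register(np.ndarray, np.linalg)

try:
    import torch

    Backend.register(torch.Tensor, torch.linalg)
except ImportError:
    pass


def compute_svd(X, rank):
    # The same DMD code path works for NumPy arrays and torch tensors:
    # the truncated SVD is delegated to the inferred backend.
    la = Backend.from_array(X)
    U, s, Vh = la.svd(X, full_matrices=False)
    return U[:, :rank], s[:rank], Vh[:rank]
```

With this pattern, a call like `compute_svd(torch.randn(100, 50), rank=10)` would run on PyTorch (and keep gradients), while `compute_svd(np.random.rand(100, 50), rank=10)` would run on NumPy, without any change to the DMD implementation itself.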
Fixes #297