Adapt matrix factorizations from nalgebra? #191

Open
regexident opened this issue Aug 16, 2017 · 5 comments

Comments

@regexident
Contributor

With the latest release (v0.13), nalgebra now ships Rust-native implementations of several matrix factorizations. From the announcement:

In particular, this release includes pure-rust implementations of the following factorizations for real matrices ("general matrices" designates real-valued matrices that may be rectangular):

  • Cholesky decomposition of symmetric positive-definite matrices (+ inverse, square linear system resolution).
  • Hessenberg decomposition of square matrices.
  • LU decomposition of general matrices with partial pivoting (+ inversion, determinant, square linear system resolution).
  • LU decomposition of general matrices with full pivoting (+ inversion, determinant, square linear system resolution).
  • QR decomposition of general matrices (+ inverse, square linear system resolution).
  • Real Schur decomposition of general matrices (+ eigenvalues, complex eigenvalues).
  • Eigendecomposition of symmetric matrices.
  • Singular Value Decomposition (SVD) of general matrices (+ pseudo-inverse, linear system resolution, rank).
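
For a concrete sense of what this looks like on the nalgebra side, here is a minimal sketch based on my reading of the v0.13 announcement (exact method names and signatures may differ between releases; it assumes a `nalgebra` dependency in `Cargo.toml`):

```rust
// Minimal sketch of the nalgebra factorization API (v0.13-era method names
// assumed; they may shift in later releases).
use nalgebra::{Matrix3, Vector3};

fn main() {
    // A symmetric positive-definite 3x3 matrix and a right-hand side.
    let a = Matrix3::new(
        4.0, 1.0, 0.0,
        1.0, 3.0, 1.0,
        0.0, 1.0, 2.0,
    );
    let b = Vector3::new(1.0, 2.0, 3.0);

    // LU with partial pivoting: determinant and linear-system solve.
    let lu = a.lu();
    println!("det(a) = {}", lu.determinant());
    if let Some(x) = lu.solve(&b) {
        println!("LU solve: {}", x);
    }

    // Cholesky, applicable here because `a` is symmetric positive-definite.
    if let Some(chol) = a.cholesky() {
        println!("Cholesky solve: {}", chol.solve(&b));
    }
}
```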

This made me wonder: instead of reinventing the wheel in rulinalg, should rulinalg and nalgebra join forces, or at least make use of each other's efforts on these operations?

cc @sebcrozet

Related issues (to varying degrees), including but not limited to:

@Andlon
Collaborator

Andlon commented Aug 17, 2017

That's certainly a good question!

I'm really happy to see more work on pure rust linear algebra algorithms. Rulinalg has been rather dormant for some time now, so it's good to see things are happening in the ecosystem.

As for making use of each other's efforts, and assuming the continued co-existence of the two libraries, I'm sure we can learn a lot from each other. Beyond that, did you have anything particular in mind?

@c410-f3r
Contributor

c410-f3r commented Aug 18, 2017

Both projects could benefit from each other, as GCC and LLVM do, but different projects have different objectives and different leaderships.
As for myself, I think optimized linear algebra is so huge and complex that a better, faster, stronger, and unified project would be an awesome standard reference for the Rust ecosystem.

@regexident
Contributor Author

> Beyond that, did you have anything particular in mind?

I basically stumbled upon the announcement and thought "wait a second, lots of these have open issues on rulinalg, maybe there is a chance for symbiosis here". 😉

@AtheMathmo
Owner

I have only had time to loosely follow what has been happening with nalgebra but I agree that it is good to see things moving in the ecosystem.

Unfortunately I just haven't found the time to pick up my own slack and get things moving with rulinalg again. I would be more than happy to see if there is a way we can work together towards some greater good. I am also a little unsure about exactly how this relationship would work - especially given the lack of activity on my end. But I'm very open to any ideas about how we could make a meaningful proposal.

@regexident
Contributor Author

FYI:

Implementing matrix decompositions (Cholesky, LQ, sym eigen) as differentiable operators: https://arxiv.org/abs/1710.08717

Abstract:

Development systems for deep learning, such as Theano, Torch, TensorFlow, or MXNet, are easy-to-use tools for creating complex neural network models. Since gradient computations are automatically baked in, and execution is mapped to high performance hardware, these models can be trained end-to-end on large amounts of data. However, it is currently not easy to implement many basic machine learning primitives in these systems (such as Gaussian processes, least squares estimation, principal components analysis, Kalman smoothing), mainly because they lack efficient support of linear algebra primitives as differentiable operators. We detail how a number of matrix decompositions (Cholesky, LQ, symmetric eigen) can be implemented as differentiable operators. We have implemented these primitives in MXNet, running on CPU and GPU in single and double precision. We sketch use cases of these new operators, learning Gaussian process and Bayesian linear regression models. Our implementation is based on BLAS/LAPACK APIs, for which highly tuned implementations are available on all major CPUs and GPUs.
