scipy.opt.approx_fprime not good enough #3155
I've turned off the test in #3170 since I've already verified the gradient. We can change this later if required. One issue is that the location (in parameter space) at which the gradient is evaluated is the minimum of the cost function (sum of squares), which means that a forward-difference approximation at this point is going to be fairly poor. If the gradient were tested at a location that is not the minimum of the cost function, this might work better... I will have a look.
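A minimal standalone illustration (not DIPY code) of why this matters: at a minimum the true gradient is zero, so the O(h) truncation error of a forward difference dominates the result, while a central difference is accurate to O(h²).

```python
import numpy as np
from scipy.optimize import approx_fprime


def f(x):
    # toy sum-of-squares cost with its minimum at x = 1
    return np.sum((x - 1.0) ** 2)


x_min = np.ones(7)                 # 7 parameters, as in the DTI problem
h = np.sqrt(np.finfo(float).eps)   # typical default step size

forward = approx_fprime(x_min, f, h)
central = np.array([(f(x_min + h * e) - f(x_min - h * e)) / (2 * h)
                    for e in np.eye(7)])

print(forward)   # every component ~1.5e-8, not 0
print(central)   # essentially 0, matching the analytical gradient
```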
Hi @samcoveney, can I look at this issue? Line 584: I am stuck with a very silly doubt about how to compare and test it 😅. Could you please guide me on this and on how to proceed further with this approach?
I have tested different epsilons - I am now quite sure that the problem is testing with a first-order finite-difference approximation at the minimum, where it is not a good approximation (I used to do many CUDA simulations using finite differences). The fact that autograd gave an exact match seems to prove to me that the first-order finite difference is the issue, since a different numerical calculation of the gradient matched exactly - remember, we are not dealing with just 1 variable here, but with 7. If you have time, what you could try is changing D so that it is not the same as the "ground truth" D, so that we are not approximating the derivative at the minimum. Perhaps pair this with a different epsilon, if needed. Other than that, I am a bit stumped as to what else to do. To compare and test, just change epsilon and D and run the test, something like:
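A rough sketch of that experiment, assuming (hypothetically) that `nlls.err_func(p, design_matrix, sig)` returns the residual vector and `nlls.jacobian_func` its analytical Jacobian - the real test code differs in detail:

```python
import numpy as np
import scipy.optimize as opt

eps = 1e-6                  # try several step sizes, e.g. 1e-4 down to 1e-8
D_test = D * 1.05           # move off the "ground truth" minimum of the cost

# analytical Jacobian at the perturbed parameters (signature assumed)
analytical = nlls.jacobian_func(D_test, design_matrix, sig)

# approx_fprime differentiates a scalar function, so build the numerical
# Jacobian one residual at a time
numerical = np.array([
    opt.approx_fprime(D_test,
                      lambda p, i=i: nlls.err_func(p, design_matrix, sig)[i],
                      eps)
    for i in range(len(sig))
])

print(np.abs(analytical - numerical).max())
```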
I would also suggest trying to implement the autograd option as per my first comment (note the last line of that comment), just to convince yourself that this is what is going on.
@tanishka321 any progress?
I am checking through some of the tests for reconst/dti, and I am finding (after some changes made for an upcoming PR) that the checks on the non-linear least-squares error function derivative using `scipy.opt.approx_fprime` are failing. The failing assertion reports that the analytical Jacobian and the numerical Jacobian are not equal.
This is not the first time I have encountered this issue (I came across it while working on #2730, and have found the same problem in other projects too).
I decided to quickly check whether the approximation of the derivatives was the problem, so I installed some autograd code and got to work in the tests in `dipy\reconst\tests\test_dti.py` (look for `# === SEE HERE ===`). The tests now pass because, of course, the function `nlls.jacobian_func` really is the correct Jacobian of `nlls.err_func`... I am printing the difference just to check, and autograd performs the differentiation perfectly.

I suspect the problem is that `scipy.opt.approx_fprime` uses only forward differences.

Suggestions welcome please; I am happy to attempt a fix as part of the PR I am finishing up. We could switch these tests off, but given that my next PR involves making changes to this function, it would be good to have a working test.
(edit: I had to do `import autograd.numpy as np` in `dipy\reconst\dti.py` for this autograd to work)
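For reference, a sketch of the autograd check described above (hypothetical argument names; the real edits in `test_dti.py` differ in detail, and this assumes `dipy\reconst\dti.py` is using `autograd.numpy` internally as per the edit note):

```python
import numpy as np
from autograd import jacobian

# autograd builds the exact Jacobian of the residual vector -
# no finite-difference step is involved
auto_jacobian = jacobian(lambda p: nlls.err_func(p, design_matrix, sig))

analytical = nlls.jacobian_func(D, design_matrix, sig)
numerical = auto_jacobian(D)

print(np.abs(analytical - numerical).max())   # matches to machine precision
```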