TST, MAINT: run optimize array API tests and fix chandrupatla #20737
Conversation
* Addresses some of my points at scipy#20085 (review) and seems to fix about 55 GPU-based array API test failures. [skip cirrus] [skip circle]

Matt will probably find a few more fixes that I missed, and/or slightly adjust these ones. Feel free to push over, close, etc.
```diff
@@ -655,11 +655,11 @@ def test_convergence(self, xp):
     x1, x2 = bracket
     f0 = xp_minimum(xp.abs(self.f(x1, *args)), xp.abs(self.f(x2, *args)))
     res1 = _chandrupatla_root(self.f, *bracket, **kwargs)
-    xp_assert_less(np.abs(res1.fun), 1e-3*f0)
+    xp_assert_less(xp.abs(res1.fun), 1e-3*f0)
```
I was confused about why this was passing with torch CPU, but ufuncs seem to preserve type:

```python
import numpy as np
import torch

x = torch.asarray([-1., 1.])
np.abs(x)  # tensor([1., 1.])
```

Same with `exp`, etc.
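Torch isn't always installed, but the same type preservation is easy to see with a plain `ndarray` subclass: NumPy ufuncs return results of the subclass type. A minimal sketch (`MyArray` is a made-up name for illustration only):

```python
import numpy as np

class MyArray(np.ndarray):
    """Trivial ndarray subclass, hypothetical, for illustration only."""
    pass

# View a regular array as the subclass, then apply a ufunc.
x = np.asarray([-1.0, 1.0]).view(MyArray)
y = np.abs(x)
print(type(y).__name__)  # MyArray: the ufunc preserves the subclass type
```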
> I was confused about why this was passing with torch CPU, but ufuncs seem to preserve type.

Only when the input array type implements `__array_wrap__` or some other such method. `torch.Tensor.__array_wrap__` exists, so the return value remains a `torch.Tensor`.
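To sketch that mechanism: a class that is *not* an `ndarray` subclass can still get its own type back from a NumPy ufunc by implementing `__array__` (for the conversion in) and `__array_wrap__` (for the re-wrap out). This is roughly what `torch.Tensor` relies on; the `Wrapped` class below is hypothetical and only for illustration:

```python
import numpy as np

class Wrapped:
    """Hypothetical container type; not an ndarray subclass."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array__(self, dtype=None):
        # Lets the ufunc convert us to an ndarray for the computation.
        return np.asarray(self.data, dtype=dtype)

    def __array_wrap__(self, result, context=None, return_scalar=False):
        # The ufunc machinery hands the raw ndarray result back here.
        return Wrapped(result)

w = np.abs(Wrapped([-1.0, 1.0]))
print(type(w).__name__)  # the result is re-wrapped, not a bare ndarray
```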
This all looks good. I'll fix the remaining failures then merge.
Force-pushed from ba7f43d to dce9777 (commit: chandrupatla)
LGTM too, thanks @tylerjereddy and @mdhaber.
I found a few more issues in gh-20085 that are new with CuPy. So I'll go ahead and merge this, that makes it easier to finalize gh-20085.