
JP-3614: relaxed tolerance for coron unit test #8490

Merged: 1 commit merged into spacetelescope:master on May 18, 2024

Conversation

@emolter (Collaborator) commented May 17, 2024

Resolves JP-3614

Closes #8465

This PR addresses a unit test failure in jwst/coron/tests/test_coron.py::test_align_array that occurs on Mac only with the latest scipy version. As in a previously linked PR, the problem was caused by changes in scipy's leastsq function, which is used here. The change does not appear to cause any qualitative problems, so simply increasing the tolerance seems to be an acceptable fix.
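
Purely for illustration, here is a minimal sketch of what relaxing an absolute tolerance with numpy.testing.assert_allclose looks like. This is not the actual jwst test code; the arrays, values, and tolerances below are invented.

```python
import numpy as np

# Hypothetical example, not the jwst test itself: compare computed
# subpixel alignment offsets against reference values, allowing a
# slightly looser absolute tolerance to absorb platform/scipy differences.
computed_shifts = np.array([0.123456, -0.654321])
reference_shifts = np.array([0.123450, -0.654327])

# A strict atol of 1e-7 would fail here (difference ~6e-6); relaxing to
# 1e-5 passes while remaining far tighter than any scientifically
# meaningful difference in the alignment.
np.testing.assert_allclose(computed_shifts, reference_shifts, atol=1e-5, rtol=0)
```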

I don't know what the rules are for needing a changelog entry, but I felt this deserved the no-changelog-entry-needed tag: there are no changes to the way the pipeline functions, and it didn't seem appropriate to put this under a "testing" header because it doesn't really change anything about the test infrastructure. Let me know if you do want a changelog entry for cases like this.

Checklist for PR authors (skip items if you don't have permissions or they are not applicable)

  • added entry in CHANGES.rst within the relevant release section
  • updated or added relevant tests
  • updated relevant documentation
  • added relevant milestone
  • added relevant label(s)
  • ran regression tests, post a link to the Jenkins job below.
    How to run regression tests on a PR
  • All comments are resolved
  • Make sure the JIRA ticket is resolved properly

@emolter emolter marked this pull request as ready for review May 17, 2024 13:48
@emolter emolter requested a review from a team as a code owner May 17, 2024 13:48
@emolter emolter requested a review from braingram May 17, 2024 13:49
@braingram (Collaborator) commented May 17, 2024

Thanks for making this PR. Unfortunately I don't have a good enough understanding of this test or the algorithm to review these changes and determine whether this change in tolerance is acceptable. I am surprised that 1e-5 is working, since on my machine I had to increase these to 1e-4 and 1e-3 to pass (as in the linked PR). EDIT: oops, I see that you've changed the absolute tolerance instead of adding a relative tolerance, as I did in the other PR.
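
For readers less familiar with the distinction raised here, this is a small illustrative snippet (values invented, not taken from the jwst test) showing how an absolute tolerance and a relative tolerance behave differently for near-zero offsets:

```python
import numpy as np

# Illustrative sketch of the absolute-vs-relative tolerance distinction.
# For offsets close to zero, a relative tolerance alone is much harsher
# than an absolute one.
a, b = 2.0e-6, 1.0e-6   # hypothetical computed vs. reference offset, in pixels

print(np.isclose(a, b, atol=1e-5, rtol=0))   # True:  |a - b| = 1e-6 <= 1e-5
print(np.isclose(a, b, rtol=1e-3, atol=0))   # False: |a - b| = 1e-6 > 1e-3 * |b|
```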

codecov bot commented May 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 57.93%. Comparing base (781e0e0) to head (67047d6).
Report is 3 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #8490   +/-   ##
=======================================
  Coverage   57.93%   57.93%           
=======================================
  Files         387      387           
  Lines       38839    38852   +13     
=======================================
+ Hits        22502    22510    +8     
- Misses      16337    16342    +5     


@emolter emolter added this to the Build 11.0 milestone May 17, 2024
@emolter (Collaborator, Author) commented May 17, 2024

@braingram these are in units of pixels, I believe, so I don't think those absolute tolerances will make much practical difference. Let me know if you agree, or if you have other comments or concerns about this PR.

@braingram braingram requested review from mcara and hbushouse May 17, 2024 15:36
@braingram (Collaborator) commented:

I requested reviews from the codeowners listed for jwst/coron:

jwst/coron @hbushouse @mcara

My main concern is that I'm not familiar with the code, and at the moment I don't feel knowledgeable enough to make a judgement on the change. I think the root cause is a small change in the leastsq result (I'm not sure whether this is a scipy bug) that leads to a larger difference in the test; that might suggest the algorithm isn't stable to small numerical differences, but it might also be completely innocuous. Hopefully the other reviewers can clear up the uncertainty.
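
As background on where this kind of platform sensitivity can enter, here is a hypothetical sketch of a subpixel fit with scipy.optimize.leastsq. It is not the actual jwst/coron implementation; the model, data, and starting values are invented. Different scipy versions or platforms may converge to parameters that differ at roughly the 1e-6 level, which can then propagate into the test comparison.

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical illustration: fit the center and width of a 1-D Gaussian,
# a stand-in for the kind of subpixel fit where optimizer-level changes
# can shift the result by a tiny amount.
x = np.arange(20.0)
true_center = 9.3
data = np.exp(-0.5 * ((x - true_center) / 2.0) ** 2)

def residuals(params):
    center, width = params
    model = np.exp(-0.5 * ((x - center) / width) ** 2)
    return model - data

best_fit, ier = leastsq(residuals, x0=[8.0, 1.5])
print(best_fit)  # ~[9.3, 2.0], to within the optimizer's convergence tolerance
```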

@hbushouse (Collaborator) left a comment

I agree that requiring alignment results to agree at the 1e-5 level is still quite stringent. Really good image registration/alignment results are generally considered to be on the order of 1e-2 pixels at best, so there's still a lot of breathing room here.

@hbushouse hbushouse merged commit e9d45e2 into spacetelescope:master May 18, 2024
28 of 29 checks passed
@emolter emolter deleted the JP-3614 branch May 20, 2024 12:51
Linked issue that may be closed by merging this pull request: coronography unit test failures on Mac only (#8465)