
Issue #435 :- solved test predict #507

Open · wants to merge 1 commit into base: master
Conversation

@ayeankit ayeankit commented Mar 7, 2024

Hey @sichinaga and @mtezzele, I have raised the PR for issue #435. Can you review it?

ayeankit (Author) commented Mar 7, 2024

Hey @mtezzele, can you please help me here? I couldn't figure out why the GitHub Actions run is failing.

fandreuz (Contributor) commented Mar 7, 2024

Hi @ayeankit, could you please have a look at the tests close to the one you refactored? We use a different test style:

  • We don't usually use print statements to report the status of a test, since pytest already provides enough detail
  • We don't need to catch AssertionError

Note that if the test does not raise an AssertionError, pytest will assume it was successful even if it wasn't.
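For illustration, a hypothetical minimal example (not code from this PR) of the two styles side by side:

```python
def test_swallowed_assertion():
    # BAD: the except block swallows the AssertionError, so pytest sees
    # no exception and marks the test as passed even though 0 != 1.
    try:
        assert 0 == 1
        print("test passed")
    except AssertionError:
        print("test failed")


def test_plain_assertion():
    # GOOD: a bare assert lets pytest surface any failure, and pytest's
    # assertion rewriting already reports the compared values.
    assert 0 == 0
```

Running pytest on the first test reports it as passing even though the check fails, which is exactly the pitfall described above.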


try:
    expected = np.load("tests/test_datasets/input_sample_predict_exact.npy")
except Exception as e:

We don't need to adopt defensive programming in tests; letting the snippet fail if we can't load the test dataset is just fine.
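Applied to the snippet under review, that means calling np.load directly, with no try/except. A runnable sketch (the temporary file with hypothetical contents stands in for the repository's real test dataset):

```python
import os
import tempfile

import numpy as np

# Stand-in for tests/test_datasets/input_sample_predict_exact.npy: we
# save hypothetical data to a temporary directory so the sketch runs.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "input_sample_predict_exact.npy")
np.save(path, np.arange(6.0).reshape(2, 3))

# No try/except: if the file were missing, np.load would raise and
# pytest would report the error with a full traceback on its own.
expected = np.load(path)
assert expected.shape == (2, 3)
```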

sichinaga (Contributor) commented Mar 7, 2024

Hi @ayeankit! It's nice to meet you. :)

One comment I want to make about this issue: the main problem is not the test itself, but rather the data that the test operates on. The original test should work just fine as long as the input to the predict method and the expected matrix make sense. At the moment both are empty arrays, which is why the test technically passes, but it doesn't add much meaning to the testing suite.

Personally, my suggestion for fixing this test is to redefine the predict input and the expected matrix. If you notice what the predict function does [source], it's basically computing $\mathbf{A}\mathbf{x}$ for some input $\mathbf{x}$. So some expected behavior is that

$$ \mathbf{A}\mathbf{x}(t_k) \approx \mathbf{x}(t_{k+1}), $$

where $\mathbf{x}(t_k)$ and $\mathbf{x}(t_{k+1})$ are snapshots that we used to fit the DMD model; $\mathbf{x}(t_k)$ specifically denotes the snapshot at the $k$-th time step. We expect this because of how the DMD problem is formulated.
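This expectation can be sketched with plain NumPy (a hypothetical stand-in, not PyDMD's own implementation): fit the least-squares operator on snapshots of an exactly linear system, then check that applying it advances each snapshot by one step.

```python
import numpy as np

# Build snapshots of an exactly linear system x(t_{k+1}) = A_true @ x(t_k).
rng = np.random.default_rng(0)
A_true = rng.standard_normal((3, 3)) * 0.5

snapshots = [rng.standard_normal(3)]
for _ in range(9):
    snapshots.append(A_true @ snapshots[-1])
X = np.column_stack(snapshots)  # shape (3, 10)

# DMD-style least-squares fit: A = Y @ pinv(X1),
# with X1 = X[:, :-1] (snapshots 0..8) and Y = X[:, 1:] (snapshots 1..9).
A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])

# The expected behavior described above: applying A to each snapshot
# reproduces the next one (up to numerical precision), so a meaningful
# expected matrix for the test is simply the shifted snapshot matrix.
np.testing.assert_allclose(A @ X[:, :-1], X[:, 1:], atol=1e-8)
```

A test built this way exercises predict against non-empty, physically meaningful data instead of empty arrays.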
