Make compliance tests fail and skip #2018

Draft · wants to merge 1 commit into main

Conversation

sneakers-the-rat (Collaborator)

I was trying to understand how the compliance tests work, and, as in linkml/linkml-runtime#312, I noticed a number of places that should be test failures or skips.

I went through and tried to change as little as possible: turning log messages that say 'skipping testing' into actual skips, and turning places that looked like they should be test failures into failures. Let me know where I got that wrong.
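Roughly the kind of change, as a minimal sketch with illustrative names (not the actual compliance helper code):

```python
import pytest

def _check_framework_result(framework: str, supported: bool, matched: bool):
    # Before: a log line plus a hard-coded assert, so the case always "passed":
    #   logger.info(f"Skipping testing for {framework}")
    #   assert True
    # After: unsupported cases show up as skips, real mismatches as failures.
    if not supported:
        pytest.skip(f"{framework} not yet supported for this case")
    if not matched:
        pytest.fail(f"unexpected result for {framework}")
```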

It looks like there are a number of masked failures, and marking tests as skips helps us keep track of which ones we still need to implement.

|         | main | this PR |
|---------|------|---------|
| passed  | 5533 | 1981    |
| skipped | 936  | 4379    |
| failed  | 0    | 110     |

This PR is mostly to raise the question of whether this is expected or not, and to ask more generally what the plan is for the compliance tests and how they're supposed to work. In the longer term, a lot of why I'm pitching in heavily on testing and CI right now is that I'm trying to lay a base for future work I want to build on top of linkml, and I'm still trying to scope out how much work that would take versus building on my own and meeting up later. I see the git blames for the compliance tests are more recent than elsewhere, so is this a new test suite that has a lot of skips because it's new? I think this kind of testing is ambitious and cool, and if so I want to help make it robust and implementable. I'm trying to understand this thing, so let me know where I'm wrong!

I also see a reporting system started here, but I can't really tell how to read the outputs I'm getting. Is that ongoing work on the system for reporting capabilities? If so, I think this change lets us fail/skip the tests faster and gives better visibility; we can hook into pytest's collection/teardown features to do that as well, while making the report visible in the actual test output.

sneakers-the-rat added the devops (poetry, setuptools, actions, etc. related changes) label on Mar 23, 2024
sneakers-the-rat marked this pull request as draft on Mar 23, 2024, 08:11
cmungall (Member) commented Mar 24, 2024 via email

sneakers-the-rat (Collaborator, Author)

Ah, thank you! https://github.com/orgs/linkml/discussions/1549 does make this clearer :)

> The priority is transparency for linkml users >> transparency for core devs (though the latter is obviously important). This is also part of the plan for linkml ports to other languages, extending validation strategies, etc.

Hell yes! I think we can do both. I see the pieces are in place for reporting, and a note in 1549 says we want to generate markdown summaries from it; I would love to help with that because I think it's a great idea! The summary could be dumped to JSON during test runs and versioned, with a test that checks whether the compliance results have changed, and then the docs could read that JSON and emit a markdown version when they are built (sketched below).
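For instance (file names here are made up, just to illustrate the shape of it), the docs build could turn a versioned JSON summary into a markdown table:

```python
# docs/gen_compliance_summary.py -- hypothetical script run at docs build time
import json
from pathlib import Path

# compliance_results.json would be written during the test run and committed,
# so changes to it show up in review like any other diff.
results = json.loads(Path("compliance_results.json").read_text())

lines = ["| outcome | count |", "|---------|-------|"]
for outcome, count in sorted(results.items()):
    lines.append(f"| {outcome} | {count} |")

Path("docs/compliance_summary.md").write_text("\n".join(lines) + "\n")
```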

We could use pytest hooks to gather results so we don't have to rely on reaching the end of a helper function. That way we get both: clarity for devs via pytest.skip and real test failures, and human-readable markdown output of the results.

The pytest_terminal_summary hook could print the results after a test run, pytest_report_teststatus (or a related run-report hook) could handle the actual collection, and pytest_sessionfinish could write the results out; a rough conftest.py sketch follows.
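Something along these lines (I'm tallying via pytest_runtest_logreport here, which is simpler than pytest_report_teststatus for plain counting, and the output path is arbitrary):

```python
# conftest.py -- sketch of the reporting hooks, not the real implementation
import json
from collections import Counter
from pathlib import Path

RESULTS_PATH = Path("compliance_results.json")
_outcomes = Counter()

def pytest_runtest_logreport(report):
    # Tally each test's outcome; marker-based skips are reported in the setup phase.
    if report.when == "call" or (report.when == "setup" and report.skipped):
        _outcomes[report.outcome] += 1

def pytest_sessionfinish(session, exitstatus):
    # Write the tallies out so they can be versioned and diffed between runs.
    RESULTS_PATH.write_text(json.dumps(dict(_outcomes), indent=2, sort_keys=True))

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Surface the same numbers at the end of the normal pytest output.
    terminalreporter.write_sep("=", "compliance summary")
    for outcome, count in sorted(_outcomes.items()):
        terminalreporter.write_line(f"{outcome}: {count}")
```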

> There are likely many things that can be improved here but best to discuss first before diving in with PRs, to make sure goals are aligned!

Yes! The reason I opened this one was to start that conversation, and to have a CI run on the books demonstrating what I'm talking about re: results. We can close this if we don't like it, but I raised it without prior discussion only because it seemed like we probably don't want asserts hardcoded to True, and log messages indicating a skip should probably just be a pytest.skip(). And if my read on what's supposed to happen here is right, I could then demo how to pull in those skips/fails/etc. using the pytest hooks to report summaries of the compliance tests.

Some of the test failures seem substantive, like dropping a value on a roundtrip or URI problems, so I also figured it would be useful to document them since I noticed them in local testing.

Let me know if there's anything else we want here, because I would love to help finish up the reporting work and document how these tests work!
