
pytest --sw doesn't remember failures when using subtests with @parametrize #51

kalekundert opened this issue Oct 9, 2021 · 1 comment


@kalekundert
Contributor

Here's a short example:

# test_foo.py
import pytest

@pytest.mark.parametrize(
        'xs', [
            [True],
            [False],
        ],
)
def test_foo(xs, subtests):
    for x in xs:
        with subtests.test(x=x):
            assert x

When I run pytest --sw the first time, it runs both tests (as expected). As a side note, though, it does seem to miscount the tests: the summary at the end claims that 1 test failed and 2 passed, while the verbose output at the beginning shows 1 failed and 3 passed (each parameter seems to be reported twice).

============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.2.5, py-1.9.0, pluggy-0.13.1 -- /home/kale/.pyenv/versions/3.8.2/bin/python3.8
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/kale/sandbox/pytest_subtest/.hypothesis/examples')
rootdir: /home/kale/sandbox/pytest_subtest
plugins: forked-1.1.3, anyio-3.1.0, xonsh-0.9.27, typeguard-2.12.1, unordered-0.4.1, subtests-0.5.0, cov-2.8.1, xdist-1.32.0, hypothesis-5.8.3, mock-2.0.0, profiling-1.7.0
collected 2 items                                                              
stepwise: no previously failed tests, not skipping.

test_foo.py::test_foo[xs0] PASSED                                        [ 50%]
test_foo.py::test_foo[xs0] PASSED                                        [ 50%]
test_foo.py::test_foo[xs1] FAILED                                        [100%]
test_foo.py::test_foo[xs1] PASSED                                        [100%]

=================================== FAILURES ===================================
___________________________ test_foo[xs1] (x=False) ____________________________

xs = [False]
subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7fce6eaa8580>, suspend_capture_ctx=<bound method CaptureManager.gl...e='started' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_foo[xs1]>>)

    @pytest.mark.parametrize(
            'xs', [
                [True],
                [False],
            ],
    )
    def test_foo(xs, subtests):
        for x in xs:
            with subtests.test(x=x):
>               assert x
E               assert False

test_foo.py:14: AssertionError
=============================== warnings summary ===============================
test_foo.py::test_foo[xs0]
test_foo.py::test_foo[xs1]
  /home/kale/.pyenv/versions/3.8.2/lib/python3.8/site-packages/pytest_subtests.py:143: PytestDeprecationWarning: A private pytest class or function was used.
    fixture = CaptureFixture(FDCapture, self.request)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED test_foo.py::test_foo[xs1] - assert False
!!!!!!!! Interrupted: Test failed, continuing from this test next run. !!!!!!!!!
=================== 1 failed, 2 passed, 2 warnings in 0.64s ====================

When I run pytest --sw for a second time, I expect it to skip the first parameter, which passed the first time. Instead, it reruns both parameters:

$ pytest --sw -v
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.2.5, py-1.9.0, pluggy-0.13.1 -- /home/kale/.pyenv/versions/3.8.2/bin/python3.8
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/kale/sandbox/pytest_subtest/.hypothesis/examples')
rootdir: /home/kale/sandbox/pytest_subtest
plugins: forked-1.1.3, anyio-3.1.0, xonsh-0.9.27, typeguard-2.12.1, unordered-0.4.1, subtests-0.5.0, cov-2.8.1, xdist-1.32.0, hypothesis-5.8.3, mock-2.0.0, profiling-1.7.0
collected 2 items                                                              
stepwise: no previously failed tests, not skipping.

test_foo.py::test_foo[xs0] PASSED                                        [ 50%]
test_foo.py::test_foo[xs0] PASSED                                        [ 50%]
test_foo.py::test_foo[xs1] FAILED                                        [100%]
test_foo.py::test_foo[xs1] PASSED                                        [100%]

=================================== FAILURES ===================================
___________________________ test_foo[xs1] (x=False) ____________________________

xs = [False]
subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7ffb9ee0a580>, suspend_capture_ctx=<bound method CaptureManager.gl...e='started' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_foo[xs1]>>)

    @pytest.mark.parametrize(
            'xs', [
                [True],
                [False],
            ],
    )
    def test_foo(xs, subtests):
        for x in xs:
            with subtests.test(x=x):
>               assert x
E               assert False

test_foo.py:14: AssertionError
=============================== warnings summary ===============================
test_foo.py::test_foo[xs0]
test_foo.py::test_foo[xs1]
  /home/kale/.pyenv/versions/3.8.2/lib/python3.8/site-packages/pytest_subtests.py:143: PytestDeprecationWarning: A private pytest class or function was used.
    fixture = CaptureFixture(FDCapture, self.request)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED test_foo.py::test_foo[xs1] - assert False
!!!!!!!! Interrupted: Test failed, continuing from this test next run. !!!!!!!!!
=================== 1 failed, 2 passed, 2 warnings in 0.64s ====================

If I had to guess what was going on, I'd say that pytest thinks the test as a whole is passing even though some of the subtests are failing.
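For what it's worth, a tiny conftest.py that just prints every call-phase report seems consistent with that guess (this is only a diagnostic sketch using the standard pytest_runtest_logreport hook; run pytest with -s to see the prints). It should show the same ordering as the verbose output above: a failed report for test_foo[xs1] (the subtest) immediately followed by a passed report for the same node ID (the test function itself).

# conftest.py -- diagnostic sketch only: print the outcome of every
# "call"-phase report (run pytest with -s so the prints are visible).
# pytest-subtests emits an extra report per subtest under the same node ID,
# so test_foo[xs1] appears once as failed (the subtest) and once as passed
# (the test function itself).
def pytest_runtest_logreport(report):
    if report.when == "call":
        print(f"{report.nodeid}: {report.outcome}")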

@nicoddemus
Member

If I had to guess what was going on, I'd say that pytest thinks the test as a whole is passing even though some of the subtests are failing.

Yeah, that's probably the reason.

The more I think about it, the more I believe subtests needs to be integrated into the core, because other plugins (like --sw) need to be aware of its functionality in order to work properly.
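Just to illustrate the kind of awareness I mean (a rough conftest.py sketch, not how stepwise is actually implemented): a plugin that tracks failures by node ID would have to treat a node as failed if any of its reports failed, otherwise the passing report for the test function hides the failing subtest report.

# conftest.py -- hypothetical sketch, not the real --sw implementation:
# remember a node ID as failed if *any* of its call-phase reports failed,
# so the test function's own passing report can't mask a failing subtest.
failed_node_ids = set()

def pytest_runtest_logreport(report):
    if report.when == "call" and report.failed:
        failed_node_ids.add(report.nodeid)

def pytest_sessionfinish(session, exitstatus):
    if failed_node_ids:
        print("node IDs with at least one failing (sub)test:", sorted(failed_node_ids))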
