self.skipTest does not produce the same results as unittest does #5

Open

nicoddemus opened this issue Apr 1, 2019 · 1 comment

@nicoddemus (Member)
Given this test case:

from unittest import TestCase, main

class T(TestCase):

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
                if i % 2 == 0:
                    # skips the subtests for i = 0, 2 and 4
                    self.skipTest('even number')


if __name__ == '__main__':
    main()

Running with python:

λ python .tmp\test-ut-skip.py
sss
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK (skipped=3)

Running with pytest:

 λ pytest .tmp\test-ut-skip.py -q
ss                                                             [100%]
2 skipped in 0.01 seconds

The problem is that TestCaseFunction.addSkip appends the exception info to a list, but pytest_runtest_makereport pops only one item off that list. We need to find a way to hook into TestCase.subTest so we can issue pytest_runtest_logreport from within the with statement.
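
To make the mismatch concrete, here is a small toy model of the mechanism described above (this is not pytest source; skip_infos, add_skip and make_report are hypothetical stand-ins for TestCaseFunction's internal list, TestCaseFunction.addSkip and the pytest_runtest_makereport hook):

# Toy model of the mismatch -- not pytest source; add_skip and make_report are
# hypothetical stand-ins for TestCaseFunction.addSkip and pytest_runtest_makereport.

skip_infos = []

def add_skip(reason):
    # the unittest result protocol calls this once per skipped subtest
    skip_infos.append(reason)

def make_report():
    # report creation pops only a single entry per test item
    return skip_infos.pop(0) if skip_infos else None

for i in range(5):
    if i % 2 == 0:
        add_skip(f'{i}: even number')

print(make_report())  # '0: even number' -- the only skip that reaches a report
print(skip_infos)     # ['2: even number', '4: even number'] are left behind

Three skips get appended but only one is ever popped, so the remaining entries never turn into reports and the counts drift away from what unittest reports.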

@eli-schwartz

I think I rediscovered this... using:

import unittest


class T(unittest.TestCase):
    def test_foo(self):
        for i in range(6):
            with self.subTest("custom message", i=i):
                if i == 5:
                    assert True == False  # deliberately failing subtest
                if i % 2 != 0:
                    self.skipTest(f'{i}: not equal')
                else:
                    self.skipTest(f'{i}: equal')


if __name__ == "__main__":
    unittest.main()

I was wondering why pytest did not report a failure for the i=5 step at all. It does report the failure if both skipTest calls are changed to pass (see the variant below), but the presence of skipTest seemingly throws this off.
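
For reference, the variant being described (both skipTest calls replaced by pass) would look roughly like this:

import unittest


class T(unittest.TestCase):
    def test_foo(self):
        for i in range(6):
            with self.subTest("custom message", i=i):
                if i == 5:
                    assert True == False  # pytest does report this failure
                if i % 2 != 0:
                    pass  # was: self.skipTest(f'{i}: not equal')
                else:
                    pass  # was: self.skipTest(f'{i}: equal')


if __name__ == "__main__":
    unittest.main()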

This is significantly worse than just failing to report the right number of skipped tests. A failing run is erroneously handled as a passing run.
