pytest-rerunfailures is a plugin for pytest that re-runs tests to eliminate intermittent failures.
You will need the following prerequisites in order to use pytest-rerunfailures:
- Python 3.6 through 3.9, or PyPy3
- pytest 5.3 or newer
This package is currently tested against the last five minor pytest releases. If you work with an older version of pytest, consider updating pytest or using one of the earlier versions of this package.
To install pytest-rerunfailures:
$ pip install pytest-rerunfailures
To re-run all test failures, use the --reruns command line option with the maximum number of times you'd like the tests to run:
$ pytest --reruns 5
A failed fixture or setup_class will also be re-executed.
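For example, here is a minimal sketch (the fixture name and failure mode are made up for illustration) of a test whose fixture fails intermittently during setup; running pytest with --reruns re-executes the fixture on each retry:

import random
import pytest

@pytest.fixture
def unstable_resource():
    # Illustrative fixture that sometimes fails during setup;
    # with --reruns, a setup failure triggers a rerun that
    # executes this fixture again.
    if random.choice([True, False]):
        raise RuntimeError("resource not ready")
    return "resource"

def test_uses_unstable_resource(unstable_resource):
    assert unstable_resource == "resource"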
To add a delay time between re-runs, use the --reruns-delay command line option with the number of seconds you would like to wait before the next test re-run is launched:
$ pytest --reruns 5 --reruns-delay 1
To re-run only those failures that match a certain list of expressions, use the --only-rerun flag and pass it a regular expression. For example, the following would only rerun those errors that match AssertionError:
$ pytest --reruns 5 --only-rerun AssertionError
Passing the flag multiple times accumulates the arguments, so the following would only rerun those errors that match AssertionError or ValueError:
$ pytest --reruns 5 --only-rerun AssertionError --only-rerun ValueError
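As a rough illustration (the test names below are invented), with the command above only failures whose error output matches one of the given patterns are retried; anything else fails immediately:

def test_assertion():
    # Fails with AssertionError -- matches --only-rerun AssertionError, so it is retried.
    assert False

def test_value():
    # Fails with ValueError -- matches --only-rerun ValueError, so it is retried.
    raise ValueError("flaky value")

def test_key():
    # Fails with KeyError -- matches neither pattern, so it fails without any reruns.
    raise KeyError("missing")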
To mark individual tests as flaky, and have them automatically re-run when they fail, add the flaky mark with the maximum number of times you'd like the test to run:
@pytest.mark.flaky(reruns=5)
def test_example():
    import random
    assert random.choice([True, False])
Note that when teardown fails, two reports are generated for the case, one for the test case and the other for the teardown error.
You can also specify the re-run delay time in the marker:
@pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_example():
    import random
    assert random.choice([True, False])
You can also specify an optional condition in the re-run marker:
import sys

@pytest.mark.flaky(reruns=5, condition=sys.platform.startswith("win32"))
def test_example():
    import random
    assert random.choice([True, False])
You can use @pytest.mark.flaky(condition) similarly to @pytest.mark.skipif(condition); see pytest-mark-skipif:
@pytest.mark.flaky(reruns=2,condition="sys.platform.startswith('win32')")
def test_example():
import random
assert random.choice([True, False])
# totally same as the above
@pytest.mark.flaky(reruns=2,condition=sys.platform.startswith("win32"))
def test_example():
import random
assert random.choice([True, False])
Note that the test will re-run for any condition that is truthy.
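For instance, the condition does not have to be platform-based; any truthy expression enables the reruns. The following sketch assumes a CI environment variable is set on your build machines, which is an illustrative convention rather than anything provided by the plugin:

import os
import random

import pytest

# Illustrative: only retry this test when the (assumed) CI environment
# variable equals "true"; locally the condition is falsy and the test
# is not re-run.
@pytest.mark.flaky(reruns=3, condition=os.getenv("CI") == "true")
def test_example():
    assert random.choice([True, False])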
Here's an example of the output provided by the plugin when run with --reruns 2 and -r aR:
test_report.py RRF
================================== FAILURES ==================================
__________________________________ test_fail _________________________________

    def test_fail():
>       assert False
E       assert False

test_report.py:9: AssertionError
============================ rerun test summary info =========================
RERUN test_report.py::test_fail
RERUN test_report.py::test_fail
============================ short test summary info =========================
FAIL test_report.py::test_fail
======================= 1 failed, 2 rerun in 0.02 seconds ====================
Note that the output shows all re-runs. Tests that fail on all the re-runs will be marked as failed.
- This plugin may not be used with class-, module-, or package-level fixtures.
- This plugin is not compatible with pytest-xdist's --looponfail flag.
- This plugin is not compatible with the core --pdb flag.
The test execution count can be retrieved from the execution_count attribute of the test item object. Example:

from pytest import hookimpl

@hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
    print(item.execution_count)
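As a hedged sketch, the same attribute can also be read from inside a test through the standard request fixture, since request.node is the test item; this usage is an assumption rather than documented API, so the getattr default covers runs where the plugin has not set the attribute:

def test_report_attempt(request):
    # request.node is the test item; execution_count is set by the plugin
    # before each attempt. Fall back to 1 if the attribute is absent.
    attempt = getattr(request.node, "execution_count", 1)
    print(f"attempt number: {attempt}")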