pytest-rerunfailures

pytest plugin to re-run tests to eliminate flaky failures


Keywords: failures, flaky, pytest, rerun
License: MPL-2.0
Install: pip install pytest-rerunfailures==14.0

Documentation

pytest-rerunfailures

pytest-rerunfailures is a plugin for pytest that re-runs tests to eliminate intermittent failures.

Requirements

You will need the following prerequisites in order to use pytest-rerunfailures:

  • Python 3.8+ or PyPy3
  • pytest 7.2 or newer

This plugin can recover from a hard crash with the following optional prerequisites:

  • pytest-xdist 2.3.0 or newer

This package is currently tested against the last five minor pytest releases. If you are working with an older version of pytest, consider updating it, or use an earlier version of this package.

Installation

To install pytest-rerunfailures:

$ pip install pytest-rerunfailures

Recover from hard crashes

If one or more tests trigger a hard crash (for example, a segmentation fault), this plugin is normally unable to rerun them. However, if a compatible version of pytest-xdist is installed and the tests are run through it with the -n flag, this plugin can rerun crashed tests, provided the workers and the controller are on the same LAN. That assumption holds in almost all cases, since the workers and the controller usually run on the same machine; when it does not hold, crash recovery may not work.
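
For example, to run the suite with two pytest-xdist workers and up to three reruns (the worker count here is arbitrary):

$ pytest -n 2 --reruns 3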

Re-run all failures

To re-run all test failures, use the --reruns command line option with the maximum number of times you'd like the tests to run:

$ pytest --reruns 5

A failed fixture or setup_class will also be re-executed.
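
For instance, a function-scoped fixture that fails on its first setup is executed again on each rerun. A minimal sketch (the fixture name and the module-level counter are invented to simulate a transient failure); with --reruns 1 the second attempt passes:

import pytest

_attempts = {"count": 0}  # invented counter simulating a transient failure

@pytest.fixture
def unstable_resource():
    _attempts["count"] += 1
    if _attempts["count"] < 2:
        raise RuntimeError("transient setup failure")  # first setup fails
    return "ready"

def test_uses_fixture(unstable_resource):
    assert unstable_resource == "ready"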

To add a delay between re-runs, use the --reruns-delay command line option with the number of seconds you would like to wait before the next test re-run is launched:

$ pytest --reruns 5 --reruns-delay 1

Re-run all failures matching certain expressions

To re-run only those failures that match a certain list of expressions, use the --only-rerun flag and pass it a regular expression. For example, the following would only rerun those errors that match AssertionError:

$ pytest --reruns 5 --only-rerun AssertionError

Passing the flag multiple times accumulates the arguments, so the following would only rerun those errors that match AssertionError or ValueError:

$ pytest --reruns 5 --only-rerun AssertionError --only-rerun ValueError

Re-run all failures other than matching certain expressions

To re-run only those failures that do not match a certain list of expressions, use the --rerun-except flag and pass it a regular expression. For example, the following would only rerun errors that do not match AssertionError:

$ pytest --reruns 5 --rerun-except AssertionError

Passing the flag multiple times accumulates the arguments, so the following would only rerun those errors that match neither AssertionError nor OSError:

$ pytest --reruns 5 --rerun-except AssertionError --rerun-except OSError

Note

When the AssertionError comes from the use of the assert keyword, use --rerun-except assert instead:

$ pytest --reruns 5 --rerun-except assert

Re-run individual failures

To mark individual tests as flaky, and have them automatically re-run when they fail, add the flaky mark with the maximum number of times you'd like the test to run:

import pytest

@pytest.mark.flaky(reruns=5)
def test_example():
    import random
    assert random.choice([True, False])

Note that when teardown fails, two reports are generated for the case, one for the test case and the other for the teardown error.
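
For example, with a yield fixture whose teardown raises, each attempt produces a report for the test call plus a separate report for the teardown error (a minimal sketch with an invented fixture and error):

import pytest

@pytest.fixture
def resource():
    yield "value"
    raise RuntimeError("teardown failed")  # invented teardown error

@pytest.mark.flaky(reruns=2)
def test_uses_resource(resource):
    assert resource == "value"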

You can also specify the re-run delay time in the marker:

import pytest

@pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_example():
    import random
    assert random.choice([True, False])

You can also specify an optional condition in the re-run marker:

import sys

import pytest

@pytest.mark.flaky(reruns=5, condition=sys.platform.startswith("win32"))
def test_example():
    import random
    assert random.choice([True, False])

Exception filtering can be accomplished by specifying regular expressions for only_rerun and rerun_except. They override the --only-rerun and --rerun-except command line arguments, respectively.

Arguments can be a single string:

import pytest

@pytest.mark.flaky(rerun_except="AssertionError")
def test_example():
    raise AssertionError()

Or a list of strings:

import pytest

@pytest.mark.flaky(only_rerun=["AssertionError", "ValueError"])
def test_example():
    raise AssertionError()

You can use @pytest.mark.flaky(condition) in the same way as @pytest.mark.skipif(condition); see the pytest documentation for pytest.mark.skipif:

import sys

import pytest

@pytest.mark.flaky(reruns=2, condition="sys.platform.startswith('win32')")
def test_example():
    import random
    assert random.choice([True, False])

# Equivalent to the above
@pytest.mark.flaky(reruns=2, condition=sys.platform.startswith("win32"))
def test_example():
    import random
    assert random.choice([True, False])

Note that the test will re-run for any condition that is truthy.
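
For example, the condition does not have to be a boolean. Note that strings are special-cased and evaluated as expressions (as shown above), so use a non-string value for a plain truthiness check. A sketch assuming a CI environment variable that is set only on CI machines; the length is an int, truthy exactly when the variable is non-empty:

import os

import pytest

# len(...) is 0 (falsy) locally and positive (truthy) where CI is set,
# so reruns happen only on those machines.
@pytest.mark.flaky(reruns=3, condition=len(os.environ.get("CI", "")))
def test_only_retried_on_ci():
    import random
    assert random.choice([True, False])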

Output

Here's an example of the output provided by the plugin when run with --reruns 2 and -r aR:

test_report.py RRF

================================== FAILURES ==================================
__________________________________ test_fail _________________________________

    def test_fail():
>       assert False
E       assert False

test_report.py:9: AssertionError
============================ rerun test summary info =========================
RERUN test_report.py::test_fail
RERUN test_report.py::test_fail
============================ short test summary info =========================
FAIL test_report.py::test_fail
======================= 1 failed, 2 rerun in 0.02 seconds ====================

Note that the output shows all re-runs. Tests that fail on all re-runs are marked as failed.
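
For reference, output like the one above could be produced by a file such as this (a reconstruction; the original test_report.py is not shown in this README):

def test_fail():
    assert False

$ pytest --reruns 2 -r aR test_report.py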

Compatibility

  • This plugin cannot be used with class-, module-, or package-level fixtures.
  • This plugin is not compatible with pytest-xdist's --looponfail flag.
  • This plugin is not compatible with the core --pdb flag.
  • This plugin is not compatible with the flaky plugin; you can use pytest-rerunfailures or flaky, but not both.

Development

  • Test execution count can be retrieved from the execution_count attribute on the test item object. Example:

    from pytest import hookimpl

    # In conftest.py; execution_count starts at 1 and increments per rerun
    @hookimpl(tryfirst=True)
    def pytest_runtest_makereport(item, call):
        print(item.execution_count)