| Field | Value |
| --- | --- |
| Name | pytest-retry |
| Version | 1.6.3 |
| Summary | Adds the ability to retry flaky tests in CI environments |
| Author | str0zzapreti |
| Homepage | https://github.com/str0zzapreti/pytest-retry |
| License | MIT License, Copyright (c) 2022 Silas |
| Requires Python | >=3.9 |
| Keywords | rerun, pytest, flaky |
| Requirements | pytest (>=7.0.0), black (>=23.3.0), mypy (>=1.3.0) |
| CI | GitHub Actions (no Travis CI, no Coveralls) |
| Upload time | 2024-05-14 03:58:47 |
![Tests](https://github.com/str0zzapreti/pytest-retry/actions/workflows/tests.yaml/badge.svg)
# pytest-retry
pytest-retry is a plugin for Pytest which adds the ability to retry flaky tests,
thereby improving the consistency of the test suite results.
## Requirements
pytest-retry is designed for the latest versions of Python and Pytest. Python 3.9+
and pytest 7.0.0+ are required.
## Installation
Use pip to install pytest-retry:
```bash
$ pip install pytest-retry
```
## Usage
There are two main ways to use pytest-retry:
### 1. Global settings
Once installed, pytest-retry adds new command line and ini config options for pytest.
Run Pytest with the command line argument `--retries` in order to retry every test in
the event of a failure. The following example will retry each failed test up to two
times before proceeding to the next test:
```bash
$ python -m pytest --retries 2
```
An optional delay can be specified using the `--retry-delay` argument. This will insert
a fixed delay (in seconds) between attempts when a test fails. This can be useful
if the test failures are due to intermittent environment issues which clear up after
a few seconds:
```bash
$ python -m pytest --retries 2 --retry-delay 5
```
#### Advanced Options
Two custom hooks are provided for setting global exception filters for your entire
Pytest suite: `pytest_set_filtered_exceptions` and `pytest_set_excluded_exceptions`.
You can define either of them in your conftest.py file and return a list of exception
types. Note: these hooks are mutually exclusive and cannot both be defined at the
same time.
Example:
```py
def pytest_set_excluded_exceptions():
    """
    All tests will be retried unless they fail due to an AssertionError
    or CustomError (a stand-in for any exception type your project defines).
    """
    return [AssertionError, CustomError]
```
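For the filtered counterpart, a minimal sketch (the hook name is from the plugin as
described above; `TimeoutError` and `ConnectionError` are just illustrative exception
types, not plugin defaults):
```py
def pytest_set_filtered_exceptions():
    """
    Tests will be retried only if they fail due to one of these exceptions.
    TimeoutError and ConnectionError are illustrative choices.
    """
    return [TimeoutError, ConnectionError]
```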
There is a command line option to specify the test timing method, which can either
be `overwrite` (default) or `cumulative`. With cumulative timing, the durations of
all test attempts are summed for the reported overall test duration, while the default
behavior simply reports the timing of the final attempt. For example, if a test fails
in 6 seconds and then passes in 2 seconds on the retry, `overwrite` reports 2 seconds
and `cumulative` reports 8 seconds.
```bash
$ python -m pytest --retries 2 --cumulative-timing 1
```
If you're not sure which to use, stick with the default `overwrite` method. This
generally plays nicer with time-based test splitting algorithms and will result in
more even splits.
Instead of command line arguments, you can set any of these config options in your
pytest.ini, tox.ini, or pyproject.toml file. Any command line arguments will take
precedence over options specified in one of these config files. Here are some
sample configs that you can copy into your project to get started:
_pyproject.toml_
```toml
[tool.pytest.ini_options]
retries = 2
retry_delay = 0.5
cumulative_timing = false
```
_pytest.ini/tox.ini_
```ini
[pytest]
retries = 2
retry_delay = 0.5
cumulative_timing = false
```
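A quick illustration of the precedence rule above (the values are arbitrary): with
`retries = 2` set in pyproject.toml, the command line still wins:
```bash
# pyproject.toml sets retries = 2; this run retries each failed test 5 times
$ python -m pytest --retries 5
```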
### 2. Pytest flaky mark
Mark individual tests as 'flaky' to retry them when they fail. If no command line
arguments are passed, only the marked tests will be retried. The default values
are 1 retry attempt with a 0-second delay:
```py
import pytest

@pytest.mark.flaky
def test_unreliable_service():
    ...
```
The number of times each test will be retried and/or the delay can be manually
specified as well:
```py
@pytest.mark.flaky(retries=3, delay=1)
def test_unreliable_service():
    # This test will be retried up to 3 times (4 attempts total) with a
    # one-second delay between each attempt
    ...
```
If you want to control filtered or excluded exceptions per-test, the flaky mark
provides the `only_on` and `exclude` arguments which both take a list of exception
types, including any custom types you may have defined for your project. Note that
only one of these arguments may be used at a time.
A test with a list of `only_on` exceptions will only be retried if it fails with
one of the listed exceptions. A test with a list of `exclude` exceptions will
only be retried if it fails with an exception which does not match any of the
listed exceptions.
If the exception for a subsequent attempt changes and no longer matches the filter,
no further attempts will be made and the test will immediately fail.
```py
@pytest.mark.flaky(retries=2, only_on=[ValueError, IndexError])
def test_unreliable_service():
    # This test will only be retried if it fails due to raising a ValueError
    # or an IndexError; e.g., an AssertionError will fail without retrying
    ...
```
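Conversely, a minimal sketch using `exclude` (same semantics as described above, with
`AssertionError` chosen purely for illustration):
```py
@pytest.mark.flaky(retries=2, exclude=[AssertionError])
def test_unreliable_service():
    # This test will be retried on any failure except an AssertionError,
    # which will fail immediately without further attempts
    ...
```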
If you want some other generalized condition to control whether a test is retried, use the
`condition` argument. Any expression which evaluates to a bool can be used here to add
granularity to your retries. The test will only be retried if `condition` is `True`. Note
that there is no matching command line option for `condition`, but if you need to globally
apply this type of logic to all of your tests, consider the `pytest_collection_modifyitems`
hook (see the sketch below).
```py
import sys

@pytest.mark.flaky(retries=2, condition=sys.platform.startswith('win32'))
def test_only_flaky_on_some_systems():
    # This test will only be retried if sys.platform.startswith('win32')
    # evaluates to True
    ...
```
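To apply this kind of logic globally, a rough sketch using the standard
`pytest_collection_modifyitems` hook (the hook itself is stock pytest; marking every
collected test flaky like this is an assumption about what you want):
```py
# conftest.py -- sketch: mark every collected test flaky, retried only on Windows
import sys

import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        # Equivalent to decorating each test with
        # @pytest.mark.flaky(retries=2, condition=...)
        item.add_marker(
            pytest.mark.flaky(retries=2, condition=sys.platform.startswith("win32"))
        )
```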
Finally, there is a flaky mark argument for the test timing method, which can either
be `overwrite` (default) or `cumulative`. See **Global settings** > **Advanced Options**
for more information:
```py
@pytest.mark.flaky(timing='overwrite')
def test_unreliable_service():
    ...
```
A flaky mark will override any command line options and exception filter hooks
specified when running Pytest.
### Things to consider
- **Currently, failing test fixtures are not retried.** In the future, flaky test setup
may be retried, although given the undesirability of flaky tests in general, flaky setup
should be avoided at all costs. Any failure during teardown halts further attempts
so that it can be addressed immediately. When using this plugin, make sure your
teardowns work reliably regardless of the number of retries.
- When a flaky test is retried, the plugin runs teardown steps for the test as if it
had passed. This is to ensure that any partial state created by the test is cleaned up
before the next attempt so that subsequent attempts do not conflict with one another.
Class and module fixtures are included in this teardown with the assumption that false
test failures should be a rare occurrence and the performance hit from re-running
these potentially expensive fixtures is worth it to ensure clean initial test state.
With feedback, the option to not re-run class and module fixtures may be added, but
in general, these types of fixtures should be avoided for known flaky tests.
- Flaky tests are not sustainable. This plugin is designed as an easy short-term
solution while a permanent fix is implemented. Use the reports generated by this plugin
to identify issues with the tests or testing environment and resolve them.
## Reporting
pytest-retry intercepts the standard Pytest report flow in order to retry tests and
update the reports as required. When a test is retried at least once, an R is printed
to the live test output and the counter of retried tests is incremented by 1. After
the test session has completed, an additional report is generated below the standard
output which lists all of the tests which were retried, along with the exceptions
that occurred during each failed attempt.
```
plugins: retry-1.1.0
collected 1 item
test_retry_passes_after_temporary_test_failure.py R. [100%]
======================= the following tests were retried =======================
    test_eventually_passes failed on attempt 1! Retrying!
    Traceback (most recent call last):
      File "tests/test_example.py", line 4, in test_eventually_passes
        assert len(a) > 1
    AssertionError: assert 1 > 1
     + where 1 = len([1])
=========================== end of test retry report ===========================
========================= 1 passed, 1 retried in 0.01s =========================
```
Tests which have been retried but eventually pass are counted as both retried and
passed, and tests which have been retried but eventually fail are counted as both
retried and failed. Skipped, xfailed, and xpassed tests are never retried.
Three pytest stash keys are available to import from the pytest_retry plugin:
`attempts_key`, `outcome_key`, and `duration_key`. These keys are used by the plugin
to store the number of attempts each item has undergone, whether the test passed or
failed, and the total duration from setup to teardown, respectively. (If any stage of
setup, call, or teardown fails, a test is considered failed overall). These stash keys
can be used to retrieve these reports for use in your own hooks or plugins.
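A minimal sketch of reading these keys (the key names are from the plugin as documented
above; the choice of `pytest_sessionfinish` and the print-based reporting are this
example's assumptions):
```py
# conftest.py -- summarize pytest-retry's stashed data after the session
from pytest_retry import attempts_key, duration_key, outcome_key

def pytest_sessionfinish(session, exitstatus):
    for item in session.items:
        attempts = item.stash.get(attempts_key, 1)
        if attempts > 1:  # only report tests that were actually retried
            outcome = item.stash.get(outcome_key, "unknown")
            duration = item.stash.get(duration_key, 0.0)
            print(f"{item.nodeid}: {attempts} attempts, {outcome}, {duration:.2f}s")
```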