[![build status](https://github.com/asottile/detect-test-pollution/actions/workflows/main.yml/badge.svg)](https://github.com/asottile/detect-test-pollution/actions/workflows/main.yml)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/asottile/detect-test-pollution/main.svg)](https://results.pre-commit.ci/latest/github/asottile/detect-test-pollution/main)
detect-test-pollution
=====================
a tool to detect test pollution
## installation
```bash
pip install detect-test-pollution
```
## what is test pollution?
[![video about test pollution](https://camo.githubusercontent.com/e72348a4fa8369247e9e2f1441de4424065fc42d6d53aad6ef703e264b820c3d/68747470733a2f2f696d672e796f75747562652e636f6d2f76692f4652746569616e61504d6f2f6d7164656661756c742e6a7067)](https://youtu.be/FRteianaPMo)
test pollution is where a test fails due to the side-effects of some other test
in the test suite.
it usually appears as a "test flake": a test which fails
mysteriously but passes when run by itself.
a simple example of this is the following python code:
```python
k = 1

def test_k():
    assert k == 1

def test_k2():
    global k

    k = 2
    assert k == 2
```
now this example is a little contrived -- you probably wouldn't write code
like this -- but it demonstrates the problem.
when run normally -- these tests pass:
```console
$ pytest -q t.py
.. [100%]
2 passed in 0.00s
```
but, if the tests were run in some other order (due to something like
[pytest-randomly] or [pytest-xdist]) then the pollution would be apparent:
```console
$ pytest -q t.py::test_k2 t.py::test_k
.F [100%]
=================================== FAILURES ===================================
____________________________________ test_k ____________________________________
    def test_k():
>       assert k == 1
E       assert 2 == 1

t.py:4: AssertionError
=========================== short test summary info ============================
FAILED t.py::test_k - assert 2 == 1
1 failed, 1 passed in 0.03s
```
often this flake happens in a codebase with hundreds or thousands of tests
and it's difficult to track down which test is causing the global side-effects.
that's where this tool comes in handy! it helps you find the pair of tests
which error when run in order.
[pytest-randomly]: https://github.com/pytest-dev/pytest-randomly
[pytest-xdist]: https://github.com/pytest-dev/pytest-xdist
## usage
[![video about using detect-test-pollution](https://user-images.githubusercontent.com/857609/162450980-1e45db95-b6dc-4783-9bcb-7a3dc02bb1e0.jpg)](https://www.youtube.com/watch?v=w5O4zTusyJ0)
once you have identified a failing test, you'll be able to feed it into
`detect-test-pollution` to find the causal test.
the basic mode is to run:
```bash
detect-test-pollution \
    --failing-test test.py::test_id_here \
    --tests ./tests
```
where `test.py::test_id_here` is the identifier of the failing test and
`./tests` is the directory where your testsuite lives.
if you've already narrowed down the list of testids further than that, you
can specify a `--testids-file` instead of `--tests` to speed up discovery:
```bash
detect-test-pollution \
    --failing-test test.py::test_id_here \
    --testids-file ./testids
```
you can usually get a list of testids via `pytest --collect-only -q` (though
you'll need to strip some unrelated lines at the end, such as timing and
warning info).
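for instance, a small filter like this can produce a clean testids file (a sketch, not part of the tool; it assumes every test id contains `::` while the trailing summary and warning lines do not):

```python
# sketch: reduce `pytest --collect-only -q` output to bare test ids
# (assumes every test id contains "::", which summary/warning lines don't)
def extract_test_ids(collect_output):
    return [line for line in collect_output.splitlines() if "::" in line]

raw = """t.py::test_k
t.py::test_k2

2 tests collected in 0.01s
"""
print("\n".join(extract_test_ids(raw)))
```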
then `detect-test-pollution` will bisect the list of tests to find the
polluting one. here's an example bisection from a [bug in pytest]:
```console
$ detect-test-pollution --tests ./testing --failing-test testing/io/test_terminalwriter.py::test_should_do_markup_FORCE_COLOR
discovering all tests...
-> discovered 3140 tests!
ensuring test passes by itself...
-> OK!
ensuring test fails with test group...
-> OK!
running step 1:
- 3139 tests remaining (about 12 steps)
running step 2:
- 1570 tests remaining (about 11 steps)
running step 3:
- 785 tests remaining (about 10 steps)
running step 4:
- 393 tests remaining (about 9 steps)
running step 5:
- 197 tests remaining (about 8 steps)
running step 6:
- 99 tests remaining (about 7 steps)
running step 7:
- 50 tests remaining (about 6 steps)
running step 8:
- 25 tests remaining (about 5 steps)
running step 9:
- 12 tests remaining (about 4 steps)
running step 10:
- 6 tests remaining (about 3 steps)
running step 11:
- 3 tests remaining (about 2 steps)
double checking we found it...
-> the polluting test is: testing/test_terminal.py::TestTerminal::test_report_teststatus_explicit_markup
```
[bug in pytest]: https://github.com/pytest-dev/pytest/issues/9708
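the bisection above can be sketched roughly like this (a simplified illustration, not the tool's actual implementation):

```python
# sketch of the bisection idea: keep halving the candidate tests,
# retaining whichever half still makes the known-failing test fail
# when run before it.
def bisect_polluter(candidates, fails_after):
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        # fails_after(group) models "does the failing test fail after
        # running this group of tests?"
        candidates = half if fails_after(half) else candidates[len(half):]
    return candidates[0]

# toy example: running "c" before the failing test triggers the failure
polluter = bisect_polluter(list("abcdef"), lambda group: "c" in group)
print(polluter)
```

each step halves the candidate list, which is why the log above counts down "about N steps" -- roughly log2 of the number of tests.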
## fuzzing
`detect-test-pollution` can also be used to "fuzz" out failing tests.
it does this by shuffling the test ids and running the testsuite until it
fails.
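the shuffle-until-failure loop can be sketched like this (a hypothetical helper with a toy suite, not the tool's code):

```python
import random

# sketch of the fuzzing idea: repeatedly shuffle the test ids and
# "run" the suite until some ordering fails.
def fuzz(test_ids, run_suite, max_runs=50, seed=0):
    rng = random.Random(seed)
    for run in range(1, max_runs + 1):
        order = rng.sample(test_ids, len(test_ids))
        failed = run_suite(order)  # returns a failing test id, or None
        if failed is not None:
            return run, failed
    return None

# toy suite: test_k fails whenever test_k2 ran before it
def toy_suite(order):
    if order.index("t.py::test_k2") < order.index("t.py::test_k"):
        return "t.py::test_k"
    return None

result = fuzz(["t.py::test_k", "t.py::test_k2"], toy_suite)
print(result)
```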
here's an example execution on a silly testsuite:
```console
$ detect-test-pollution --fuzz --tests t.py
discovering all tests...
-> discovered 1002 tests!
run 1...
-> OK!
run 2...
-> found failing test!
try `detect-test-pollution --failing-test t.py::test_k --tests t.py`!
```
afterwards you can use the normal mode of `detect-test-pollution` to find the
failing pair.
## supported test runners
at the moment only `pytest` is supported -- though in theory the tool could
be adapted to support other python test runners, or even other languages.