:Name: pytest-replay
:Version: 1.5.1
:Summary: Saves previous test runs and allows re-executing previous pytest runs to reproduce crashes or flaky tests
:Home page: https://github.com/ESSS/pytest-replay
:Author: ESSS
:Requires Python: >=3.8
:License: MIT
:Upload time: 2024-01-11 17:08:50
=============
pytest-replay
=============
.. image:: http://img.shields.io/pypi/v/pytest-replay.svg
   :target: https://pypi.python.org/pypi/pytest-replay

.. image:: https://anaconda.org/conda-forge/pytest-replay/badges/version.svg
   :target: https://anaconda.org/conda-forge/pytest-replay

.. image:: https://github.com/ESSS/pytest-replay/workflows/test/badge.svg
   :target: https://github.com/ESSS/pytest-replay/actions?query=workflow%3Atest

.. image:: https://img.shields.io/pypi/pyversions/pytest-replay.svg
   :target: https://pypi.python.org/pypi/pytest-replay

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
   :target: https://github.com/psf/black
Saves previous test runs and allows re-executing previous pytest runs to reproduce crashes or flaky tests.
----
This `pytest`_ plugin was generated with `Cookiecutter`_ along with `@hackebrot`_'s `Cookiecutter-pytest-plugin`_ template.
Features
--------
This plugin helps reproduce random or flaky behavior when running tests with xdist. ``pytest-xdist`` executes tests
in an unpredictable order, which makes it hard to reproduce locally a behavior seen in CI because there is no
convenient way to track which test executed in which worker.
This plugin records the node ids executed by each worker in the directory given by the ``--replay-record-dir=<dir>`` flag,
and ``--replay=<file>`` can be used to re-run the tests from a previous run. For example::

    $ pytest -n auto --replay-record-dir=build/tests/replay
This will generate one file per worker, with each line being a JSON object containing the
node id, start time, end time, and outcome. Note that each node id usually appears twice:
a line is written as soon as the test starts, so that if a test suddenly crashes there is
still a record that it started. After the test finishes, ``pytest-replay`` appends another
JSON line with the complete information.

This is also useful for analyzing concurrent tests which might have some kind of
race condition and interfere with each other.
For example, worker ``gw1`` will generate a file
``.pytest-replay-gw1.txt`` with contents like this::

    {"nodeid": "test_foo.py::test[1]", "start": 0.000}
    {"nodeid": "test_foo.py::test[1]", "start": 0.000, "finish": 1.5, "outcome": "passed"}
    {"nodeid": "test_foo.py::test[3]", "start": 1.5}
    {"nodeid": "test_foo.py::test[3]", "start": 1.5, "finish": 2.5, "outcome": "passed"}
    {"nodeid": "test_foo.py::test[5]", "start": 2.5}
    {"nodeid": "test_foo.py::test[5]", "start": 2.5, "finish": 3.5, "outcome": "passed"}
    {"nodeid": "test_foo.py::test[7]", "start": 3.5}
    {"nodeid": "test_foo.py::test[7]", "start": 3.5, "finish": 4.5, "outcome": "passed"}
    {"nodeid": "test_foo.py::test[8]", "start": 4.5}
    {"nodeid": "test_foo.py::test[8]", "start": 4.5, "finish": 5.5, "outcome": "passed"}
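Because each test contributes a start line and, normally, a finish line, a crashed test shows up as a node id with no matching finish record. As an illustrative sketch (not part of the plugin's API; the function name is our own), a few lines of Python can scan a replay file for such tests:

```python
import json

def find_unfinished(path):
    """Return node ids that have a start record but no finish record."""
    started, finished = set(), set()
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            # Lines with a "finish" key are the completed records;
            # lines without one mark only the start of a test.
            if "finish" in rec:
                finished.add(rec["nodeid"])
            else:
                started.add(rec["nodeid"])
    return sorted(started - finished)
```

Running this over a replay file from a crashed worker points directly at the test that was executing when the process died.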
If there is a crash or a flaky failure in the tests of worker ``gw1``, one can take that file from the CI server and
execute the tests in the same order with::

    $ pytest --replay=.pytest-replay-gw1.txt
Hopefully this will make it easier to reproduce the problem and fix it.
FAQ
~~~
1. ``pytest`` has its own `cache <https://docs.pytest.org/en/latest/cache.html>`_, why use a different mechanism?

   The internal cache saves its data using ``json``, which is not suitable in the event of a crash because the file
   will not be readable.
2. Shouldn't the ability to select tests from a file be part of the ``pytest`` core?

   Sure, but let's try this out for a while as a separate plugin before proposing
   its inclusion in the core.
Installation
------------
You can install ``pytest-replay`` via `pip`_ from `PyPI`_::

    $ pip install pytest-replay

Or with conda::

    $ conda install -c conda-forge pytest-replay
Contributing
------------
Contributions are very welcome.
Tests can be run with `tox`_ if you are using a native Python installation.
To run tests with `conda <https://conda.io/docs/>`_, first create a virtual environment and execute the tests from there
(conda with Python 3.5+ in the root environment)::

    $ python -m venv .env
    $ .env\scripts\activate
    $ pip install -e . pytest-xdist
    $ pytest tests
Releases
~~~~~~~~
Follow these steps to make a new release:
1. Create a new branch ``release-X.Y.Z`` from ``master``;
2. Update ``CHANGELOG.rst``;
3. Open a PR;
4. After it is **green** and **approved**, push a new tag in the format ``X.Y.Z``;
GitHub Actions will deploy to PyPI automatically.
Afterwards, update the recipe in `conda-forge/pytest-replay-feedstock <https://github.com/conda-forge/pytest-replay-feedstock>`_.
License
-------
Distributed under the terms of the `MIT`_ license.
Issues
------
If you encounter any problems, please `file an issue`_ along with a detailed description.
.. _`Cookiecutter`: https://github.com/audreyr/cookiecutter
.. _`@hackebrot`: https://github.com/hackebrot
.. _`MIT`: http://opensource.org/licenses/MIT
.. _`cookiecutter-pytest-plugin`: https://github.com/pytest-dev/cookiecutter-pytest-plugin
.. _`file an issue`: https://github.com/ESSS/pytest-replay/issues
.. _`pytest`: https://github.com/pytest-dev/pytest
.. _`tox`: https://tox.readthedocs.io/en/latest/
.. _`pip`: https://pypi.python.org/pypi/pip/
.. _`PyPI`: https://pypi.python.org/pypi