:Name: mutmut
:Version: 2.4.5
:Summary: mutation testing for Python 3
:Home page: https://github.com/boxed/mutmut
:Author: Anders Hovmöller
:License: BSD
:Requires-Python: >=3.7
:Keywords: mutmut, mutant, mutation, test, testing
:Requirements: glob2, parso, click, pony, junit-xml, toml
:Upload time: 2024-04-04 07:16:24

mutmut - python mutation tester
===============================

.. image:: https://travis-ci.org/boxed/mutmut.svg?branch=master
    :target: https://travis-ci.org/boxed/mutmut

.. image:: https://readthedocs.org/projects/mutmut/badge/?version=latest
    :target: https://mutmut.readthedocs.io/en/latest/?badge=latest
    :alt: Documentation Status

.. image:: https://codecov.io/gh/boxed/mutmut/branch/master/graph/badge.svg
  :target: https://codecov.io/gh/boxed/mutmut

.. image:: https://img.shields.io/discord/767914934016802818.svg
  :target: https://discord.gg/cwb9uNt

Mutmut is a mutation testing system for Python, with a strong focus on ease
of use. If you don't know what mutation testing is, try starting with
`this article <https://hackernoon.com/mutmut-a-python-mutation-testing-system-9b9639356c78>`_.

Some highlighted features:

- Found mutants can be applied on disk with a simple command, making it very
  easy to work with the results
- Remembers work that has been done, so you can work incrementally
- Supports all test runners (because mutmut only needs an exit code from the
  test command)
- If you use the `hammett <https://github.com/boxed/hammett>`_ test runner
  you can go extremely fast! There's special handling for this runner
  that has some pretty dramatic results.
- Can use coverage data to only do mutation testing on covered lines
- Battle tested on real libraries by multiple companies


If you need to run mutmut on a python 2 code base use mutmut ``1.5.0``. Mutmut
``1.9.0`` is the last version to support python ``3.4``, ``3.5`` and ``3.6``.


Install and run
---------------

You can get started with a simple:

.. code-block:: console

    pip install mutmut
    mutmut run

This will by default run pytest (or unittest if pytest is unavailable)
on tests in the "tests" or "test" folder and
it will try to figure out where the code to mutate lies.

NOTE that mutmut will apply the mutations directly, one at a time;
it is **highly** recommended to add all changes to source control
before running.

Enter

.. code-block:: console

    mutmut run --help

for the available flags, to use other runners, etc. If the defaults aren't
working for you, the recommended way to use mutmut is to add a configuration
block in ``setup.cfg`` or ``pyproject.toml``.
Then when you come back to mutmut weeks later you don't have to figure out the
flags again, just run ``mutmut run`` and it works.
Like this in ``setup.cfg``:

.. code-block:: ini

    [mutmut]
    paths_to_mutate=src/
    backup=False
    runner=python -m hammett -x
    tests_dir=tests/
    dict_synonyms=Struct, NamedStruct

or like this in ``pyproject.toml``:

.. code-block:: ini

    [tool.mutmut]
    paths_to_mutate="src"
    runner="python -m hammett -x"

To use multiple paths in either the ``paths_to_mutate`` or ``tests_dir`` option,
use a comma- or colon-separated list. For example:

.. code-block:: ini

    [mutmut]
    paths_to_mutate=src/,src2/
    tests_dir=tests/:tests2/

You can stop the mutation run at any time and mutmut will restart where you
left off. It's also smart enough to retest only the surviving mutants when the
test suite changes.

To print the results run ``mutmut show``. It will give you a list of the mutants
grouped by file. You can now look at a specific mutant diff with ``mutmut show 3``,
all mutants for a specific file with ``mutmut show path/to/file.py`` or all mutants
with ``mutmut show all``.

You can also write a mutant to disk with ``mutmut apply 3``. You should **REALLY**
have the file you mutate under source code control and committed before you apply
a mutant!

To generate an HTML report for viewing in a web browser: ``mutmut html``

Whitelisting
------------

You can mark lines like this:

.. code-block:: python

    some_code_here()  # pragma: no mutate

to stop mutation on those lines. Some cases we've found where you need to
whitelist lines are:

- The version string on your library. You really shouldn't have a test for this :P
- An optimization that uses ``break`` instead of ``continue``. The code still
  runs correctly when ``break`` is mutated to ``continue``, just more slowly.

See also `Advanced whitelisting and configuration`_


Example mutations
-----------------

- Integer literals are changed by adding 1. So 0 becomes 1, 5 becomes 6, etc.
- ``<`` is changed to ``<=``
- break is changed to continue and vice versa

In general the idea is that the mutations should be as subtle as possible.
See ``__init__.py`` for the full list.
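
As a hand-written illustration (not mutmut's actual output), here is a sketch
of a number-literal mutation and the kind of boundary test that kills it; the
``is_adult`` function is made up for the example:

.. code-block:: python

    def is_adult(age):
        return age >= 18

    # A hand-applied mutant of the kind mutmut generates:
    # the integer literal 18 is changed to 19.
    def is_adult_mutant(age):
        return age >= 19

    # A test at the exact boundary kills the mutant: it passes on the
    # original code but fails on the mutated code.
    assert is_adult(18) is True
    assert is_adult_mutant(18) is False

A test that only checks values far from the boundary (say, 30 and 5) would let
this mutant survive, which is exactly the kind of gap mutation testing exposes.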


Workflow
--------

This section describes how to work with mutmut to enhance your test suite.

1. Run mutmut with ``mutmut run``. A full run is preferred but if you're just
   getting started you can exit in the middle and start working with what you
   have found so far.
2. Show the mutants with ``mutmut results``
3. Apply a surviving mutant to disk running ``mutmut apply 3`` (replace 3 with
   the relevant mutant ID from ``mutmut results``)
4. Write a new test that fails
5. Revert the mutant on disk
6. Rerun the new test to see that it now passes
7. Go back to point 2.

Mutmut keeps a result cache in ``.mutmut-cache``; if you want to force a full
run, just delete this file.

If you want to re-run all survivors after changing a lot of code or the configuration,
you can use ``for ID in $(mutmut result-ids survived); do mutmut run $ID; done`` (in bash).

You can also tell mutmut to just check a single mutant:

.. code-block:: console

    mutmut run 3


Advanced whitelisting and configuration
---------------------------------------

mutmut has an advanced configuration system. You create a file called
``mutmut_config.py``. You can define two functions there: ``init()`` and
``pre_mutation(context)``. ``init`` gets called when mutmut starts and
``pre_mutation`` gets called before each mutant is applied and tested. You can
mutate the ``context`` object as you need. You can modify the test command like
this:

.. code-block:: python

    def pre_mutation(context):
        context.config.test_command = 'python -m pytest -x ' + something_else

or skip a mutant:

.. code-block:: python

    def pre_mutation(context):
        if context.filename == 'foo.py':
            context.skip = True

or skip logging:


.. code-block:: python

    def pre_mutation(context):
        line = context.current_source_line.strip()
        if line.startswith('log.'):
            context.skip = True

Look at the code of the ``Context`` class to see what you can modify. Please
open a GitHub issue if you need help.

It is also possible to disable mutation of specific node types by passing the
``--disable-mutation-types`` option. Multiple types can be specified by separating
them with commas:

.. code-block:: console

    mutmut run --disable-mutation-types=string,decorator

Conversely, you can run only specific mutation types with ``--enable-mutation-types``.
Note that ``--disable-mutation-types`` and ``--enable-mutation-types`` are mutually
exclusive and cannot be combined.


Selecting tests to run
----------------------

If you have a large test suite or long running tests, it can be beneficial to narrow the set of tests to
run for each mutant down to the tests that have a chance of killing it.
Determining the relevant subset of tests depends on your project, its structure, and the metadata that you
know about your tests.
``mutmut`` provides information like the file to mutate and `coverage contexts <https://coverage.readthedocs.io/en/coverage-5.5/contexts.html>`_
(if used with the ``--use-coverage`` switch).
You can set the ``context.config.test_command`` in the ``pre_mutation(context)`` hook of ``mutmut_config.py``.
The ``test_command`` is reset after each mutant, so you don't have to explicitly (re)set it for each mutant.

This section gives examples to show how this could be done for some concrete use cases.
All examples use the default test runner (``python -m pytest -x --assert=plain``).

Selection based on source and test layout
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the location of the test module correlates strictly with your source code layout,
you can simply construct the path to the corresponding test file from ``context.filename``.
Suppose your layout follows this structure, where the test file always sits right beside
the production code:

.. code-block:: console

    mypackage
    ├── production_module.py
    ├── test_production_module.py
    └── subpackage
        ├── submodule.py
        └── test_submodule.py

Your ``mutmut_config.py`` in this case would look like this:

.. code-block:: python

    import os.path

    def pre_mutation(context):
        dirname, filename = os.path.split(context.filename)
        testfile = "test_" + filename
        context.config.test_command += ' ' + os.path.join(dirname, testfile)
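
The path construction in that hook can be checked in isolation. This is a
minimal sketch; the helper name ``testfile_for`` and the example path are
invented for illustration and are not part of mutmut:

.. code-block:: python

    import os.path

    def testfile_for(mutated_filename):
        # Mirror the hook: the test file sits beside the mutated module,
        # named "test_" plus the module's file name.
        dirname, filename = os.path.split(mutated_filename)
        return os.path.join(dirname, "test_" + filename)

    print(testfile_for("mypackage/subpackage/submodule.py"))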

Selection based on imports
^^^^^^^^^^^^^^^^^^^^^^^^^^

If you can't rely on the directory structure or naming of the test files, you may assume that the tests most likely
to kill a mutant are located in the test files that directly import the mutated module.
Using the ``ast`` module from the Python standard library, you can use the ``init()`` hook to build a map of which
test file imports which module, then look up all test files importing the mutated module and run only those:

.. code-block:: python

    import ast
    from pathlib import Path

    test_imports = {}


    class ImportVisitor(ast.NodeVisitor):
        """Visitor which records which modules are imported."""
        def __init__(self) -> None:
            super().__init__()
            self.imports = []

        def visit_Import(self, node: ast.Import) -> None:
            for alias in node.names:
                self.imports.append(alias.name)

        def visit_ImportFrom(self, node: ast.ImportFrom) -> None:
            if node.module is not None:  # relative "from . import x" has module=None
                self.imports.append(node.module)


    def init():
        """Find all test files located under the 'tests' directory and create an abstract syntax tree for each.
        Let the ``ImportVisitor`` find out what modules they import and store the information in a global dictionary
        which can be accessed by ``pre_mutation(context)``."""
        test_files = (Path(__file__).parent / "tests").rglob("test*.py")
        for fpath in test_files:
            visitor = ImportVisitor()
            visitor.visit(ast.parse(fpath.read_bytes()))
            test_imports[str(fpath)] = visitor.imports


    def pre_mutation(context):
        """Construct the module name from the filename and run all test files which import that module."""
        # Slicing is used rather than str.rstrip(".py"), which strips a set
        # of characters, not the suffix.
        module_name = context.filename[:-len(".py")].replace("/", ".")
        tests_to_run = [
            testfile
            for testfile, imports in test_imports.items()
            if module_name in imports
        ]
        context.config.test_command += " " + " ".join(tests_to_run)
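
Deriving the module name from the file name has a pitfall worth noting:
``str.rstrip(".py")`` strips a *set of characters* from the end, not the
suffix, so plain slicing is the safe way to drop the extension. A standalone
sketch (the file name below is hypothetical):

.. code-block:: python

    def module_name_for(filename):
        # Drop the ".py" suffix by slicing; rstrip(".py") would also eat
        # trailing "p", "y", and "." characters of the module name itself.
        if filename.endswith(".py"):
            filename = filename[:-len(".py")]
        return filename.replace("/", ".")

    print(module_name_for("mypackage/copy.py"))  # → mypackage.copy
    print("mypackage/copy.py".rstrip(".py"))     # → mypackage/co  (wrong!)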

Selection based on coverage contexts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you recorded `coverage contexts <https://coverage.readthedocs.io/en/coverage-5.5/contexts.html>`_ and use
the ``--use-coverage`` switch, you can access this coverage data inside the ``pre_mutation(context)`` hook
via the ``context.config.coverage_data`` attribute. This attribute is a dictionary in the form
``{filename: {lineno: [contexts]}}``.
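
For illustration, the lookup for a single mutated line in a dictionary of that
shape looks like this; the file name, line numbers, and context names below
are invented:

.. code-block:: python

    # Hypothetical coverage_data of the form {filename: {lineno: [contexts]}}
    coverage_data = {
        "/abs/path/src/example.py": {
            3: ["tests.test_example.test_answer"],
            7: [""],  # an empty string is coverage.py's default (unnamed) context
        },
    }

    contexts = coverage_data.get("/abs/path/src/example.py", {}).get(3, [])
    print(contexts)  # → ['tests.test_example.test_answer']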

Let's say you have used the built-in dynamic context option of ``Coverage.py`` by adding the following to
your ``.coveragerc`` file:

.. code-block:: console

    [run]
    dynamic_context = test_function

``coverage`` will create a new context for each test function that you run in the form ``module_name.function_name``.
With ``pytest``, we can use the ``-k`` switch to filter tests that match a given expression.

.. code-block:: python

    import os.path

    def pre_mutation(context):
        """Extract the coverage contexts if possible and only run the tests matching this data."""
        if not context.config.coverage_data:
            # mutmut was run without ``--use-coverage``
            return
        fname = os.path.abspath(context.filename)
        contexts_for_file = context.config.coverage_data.get(fname, {})
        contexts_for_line = contexts_for_file.get(context.current_line_index, [])
        test_names = [
            ctx.rsplit(".", 1)[-1]  # extract only the final part after the last dot, which is the test function name
            for ctx in contexts_for_line
            if ctx  # skip empty strings
        ]
        if not test_names:
            return
        context.config.test_command += f' -k "{" or ".join(test_names)}"'

Note that the format of the context name varies depending on the tool you use to create the contexts.
For example, the ``pytest-cov`` plugin uses ``::`` as the separator between module and test function.
Furthermore, not all tools pick up the correct contexts: at the time of writing, ``coverage.py``
is unable to pick up tests that are inside a class when using ``pytest``.
You will have to inspect your ``.coverage`` database using the `Coverage.py API <https://coverage.readthedocs.io/en/coverage-5.5/api.html>`_
first to determine how to extract the correct information for your test runner.

Making things more robust
^^^^^^^^^^^^^^^^^^^^^^^^^

Despite your best efforts in picking the right subset of tests, it may happen that a mutant survives
because the test that could kill it was not included in the test set. You can tell ``mutmut`` to re-run
the full test suite in that case, to verify that the mutant indeed survives, by passing the
``--rerun-all`` option to ``mutmut run``. This option is disabled by default.


JUnit XML support
-----------------

In order to better integrate with CI/CD systems, ``mutmut`` supports generating
a JUnit XML report (using https://pypi.org/project/junit-xml/) via
``mutmut junitxml``. To define how suspicious and untested mutants are handled,
you can use

.. code-block:: console

    mutmut junitxml --suspicious-policy=ignore --untested-policy=ignore

The possible values for these policies are:

- ``ignore``: do not include the mutant in the report at all
- ``skipped``: include the mutant in the report as "skipped"
- ``error``: include the mutant in the report as "error"
- ``failure``: include the mutant in the report as "failure"

If a failed mutant is included in the report, then the unified diff of the
mutant will also be included for debugging purposes.
            
