pytest-check 2.4.1 (PyPI)

- Author: Brian Okken
- Requires Python: >=3.8
- Uploaded: 2024-08-28
# pytest-check

A pytest plugin that allows multiple failures per test.

----

Normally, a test function fails and stops running at the first failed `assert`.
That's fine for many kinds of software tests.
However, there are times when you'd like to check more than one thing, and you'd really like to know the result of each check, even if one of them fails.

`pytest-check` allows multiple failed "checks" per test function, so you can see the whole picture of what's going wrong.

## Installation

From PyPI:

```
$ pip install pytest-check
```

From conda (conda-forge):
```
$ conda install -c conda-forge pytest-check
```

## Example

Quick example of where you might want multiple checks:

```python
import httpx
from pytest_check import check

def test_httpx_get():
    r = httpx.get('https://www.example.org/')
    # bail if bad status code
    assert r.status_code == 200
    # but if we get to here
    # then check everything else without stopping
    with check:
        assert r.is_redirect is False
    with check:
        assert r.encoding == 'utf-8'
    with check:
        assert 'Example Domain' in r.text
```

## Import vs fixture

The example above used import: `from pytest_check import check`.

You can also grab `check` as a fixture with no import:

```python
def test_httpx_get(check):
    r = httpx.get('https://www.example.org/')
    ...
    with check:
        assert r.is_redirect is False
    ...
```

## Validation functions

`check` also provides helper functions for common checks.
These methods do NOT need to be inside of a `with check:` block.

| Function    | Meaning    | Notes    |
|------------------------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------------------|
| `equal(a, b, msg="")`    | `a == b`    |    |
| `not_equal(a, b, msg="")`    | `a != b`    |    |
| `is_(a, b, msg="")`    | `a is b`    |    |
| `is_not(a, b, msg="")`    | `a is not b`    |    |
| `is_true(x, msg="")`    | `bool(x) is True`    |    |
| `is_false(x, msg="")`    | `bool(x) is False`    |    |
| `is_none(x, msg="")`    | `x is None`    |    |
| `is_not_none(x, msg="")`    | `x is not None`    |    |
| `is_in(a, b, msg="")`    | `a in b`    |    |
| `is_not_in(a, b, msg="")`    | `a not in b`    |    |
| `is_instance(a, b, msg="")`    | `isinstance(a, b)`    |    |
| `is_not_instance(a, b, msg="")`    | `not isinstance(a, b)`    |    |
| `is_nan(x, msg="")`    | `math.isnan(x)`    | [math.isnan](https://docs.python.org/3/library/math.html#math.isnan)   |
| `is_not_nan(x, msg="")`    | `not math.isnan(x) `    | [math.isnan](https://docs.python.org/3/library/math.html#math.isnan)   | 
| `almost_equal(a, b, rel=None, abs=None, msg="")`    | `a == pytest.approx(b, rel, abs)` | [pytest.approx](https://docs.pytest.org/en/latest/reference.html#pytest-approx)    |
| `not_almost_equal(a, b, rel=None, abs=None, msg="")` | `a != pytest.approx(b, rel, abs)` | [pytest.approx](https://docs.pytest.org/en/latest/reference.html#pytest-approx)    | 
| `greater(a, b, msg="")`    | `a > b`    |    |
| `greater_equal(a, b, msg="")`    | `a >= b`    |    |
| `less(a, b, msg="")`    | `a < b`    |    |
| `less_equal(a, b, msg="")`    | `a <= b`    |    |
| `between(b, a, c, msg="", ge=False, le=False)`    | `a < b < c`    |    |
| `between_equal(b, a, c, msg="")`    | `a <= b <= c`    | same as `between(b, a, c, msg, ge=True, le=True)`    |
| `raises(expected_exception, *args, **kwargs)`    | *Raises given exception*    | similar to [pytest.raises](https://docs.pytest.org/en/latest/reference/reference.html#pytest-raises) | 
| `fail(msg)`    | *Log a failure*    |    |

**Note: This is a list of relatively common logic operators. I'm reluctant to add to the list too much, as it's easy to add your own.**


The httpx example can be rewritten with helper functions:

```python
import httpx
from pytest_check import check

def test_httpx_get_with_helpers():
    r = httpx.get('https://www.example.org/')
    assert r.status_code == 200
    check.is_false(r.is_redirect)
    check.equal(r.encoding, 'utf-8')
    check.is_in('Example Domain', r.text)
```

Which you use is personal preference.
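A couple of these signatures are easy to get backwards: `between()` takes the value first, then the bounds, and `almost_equal()` mirrors `pytest.approx`. A quick sketch using a few of the helpers from the table above (the values are only illustrative):

```python
import math
from pytest_check import check

def test_helper_signatures():
    check.between(5, 1, 10)                        # 1 < 5 < 10
    check.between(1, 1, 10, ge=True)               # 1 <= 1 < 10
    check.almost_equal(0.1 + 0.2, 0.3, abs=1e-9)   # within absolute tolerance
    check.is_nan(math.nan)
```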

## Defining your own check functions

### Using `@check.check_func`

The `@check.check_func` decorator turns any test helper containing an `assert` statement into a non-blocking check function.


```python
from pytest_check import check

@check.check_func
def is_four(a):
    assert a == 4

def test_all_four():
    is_four(1)
    is_four(2)
    is_four(3)
    is_four(4)
```


### Using `check.fail()`

Using `@check.check_func` is probably the easiest approach.
However, it adds a bit of overhead even in the passing case,
which can matter in large loops of checks.

If you need a speedup, use the following style with the help of `check.fail()`.

```python
from pytest_check import check

def is_four(a):
    # Hide this helper from reported tracebacks so failures point
    # at the calling test (a standard pytest convention).
    __tracebackhide__ = True
    if a == 4:
        return True
    else:
        check.fail(f"check {a} == 4")
        return False

def test_all_four():
    is_four(1)
    is_four(2)
    is_four(3)
    is_four(4)
```

## Using raises as a context manager

`raises` can be used as a context manager, much like `pytest.raises`. The main difference is that failing to raise the expected exception won't stop the execution of the test method.


```python
from pytest_check import check

def test_raises():
    with check.raises(AssertionError):
        x = 3
        # this assert fails and raises AssertionError, which the check expects
        assert x == 4
```

## Pseudo-tracebacks

With `check`, a test can have multiple failures.
That could make for a lot of output if we included the full traceback for every failure.
To keep the output concise, `pytest-check` implements a shorter version, which we call pseudo-tracebacks.

For example, take this test:

```python
from pytest_check import check

def test_example():
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")
```

This will result in:

```
=================================== FAILURES ===================================
_________________________________ test_example _________________________________
FAILURE:
assert 1 > 2
  test_check.py, line 14, in test_example() -> check.greater(a, b)
FAILURE:
assert 2 <= 1
  test_check.py, line 15, in test_example() -> check.less_equal(b, a)
FAILURE: Is 1 in the list
assert 1 in [2, 4, 6]
  test_check.py, line 16, in test_example() -> check.is_in(a, c, "Is 1 in the list")
FAILURE: make sure 2 isn't in list
assert 2 not in [2, 4, 6]
  test_check.py, line 17, in test_example() -> check.is_not_in(b, c, "make sure 2 isn't in list")
------------------------------------------------------------
Failed Checks: 4
=========================== 1 failed in 0.11 seconds ===========================
```

## Red output

The failures will also be red, unless you turn that off with pytest's `--color=no`.

## No output

You can turn off the failure reports with pytest's `--tb=no`.
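For example, both are standard pytest flags and can be combined:

```
$ pytest --tb=no --color=no
```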

## Stop on Fail (maxfail behavior)

Setting `-x` or `--maxfail=1` will cause this plugin to abort testing after the first failed check.

Setting `--maxfail=2` or greater will turn off any handling of maxfail within this plugin, and the behavior is controlled by pytest.

In other words, the `maxfail` count is counting tests, not checks.
The exception is the case of `1`, where we want to stop on the very first failed check.
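For example (both are standard pytest flags; only a limit of `1` is special-cased by pytest-check):

```
$ pytest -x             # stop at the very first failed check
$ pytest --maxfail=2    # stop after 2 failed tests; checks are not counted individually
```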

## any_failures()

Use `any_failures()` to see if there are any failures.  
One use case is to make a block of checks conditional on not failing in a previous set of checks:

```python
from pytest_check import check

def test_with_groups_of_checks():
    # always check these
    check.equal(1, 1)
    check.equal(2, 3)
    if not check.any_failures():
        # only check these if the above passed
        check.equal(1, 2)
        check.equal(2, 2)
```

## Speedups

If you have lots of check failures, your tests may not run as fast as you want.
There are a few ways to speed things up.

* `--check-max-tb=5` - Only the first 5 failures per test will include pseudo-tracebacks (the rest are reported without them).
    * The example shows `5` but any number can be used.
    * pytest-check uses custom traceback code I'm calling a pseudo-traceback.
    * This is visually shorter than normal assert tracebacks.
    * Internally, it uses introspection, which can be slow.
    * Allowing a limited number of pseudo-tracebacks speeds things up quite a bit.
    * Default is 1.
        * Set a large number, e.g. `1000`, if you want pseudo-tracebacks for all failures.

* `--check-max-report=10` - limit reported failures per test.
    * The example shows `10` but any number can be used.
    * The test will still have the total number of failures reported.
    * Default is no maximum.

* `--check-max-fail=20` - Stop the test after this many check failures.
    * This is useful if your code under test is slow-ish and you want to bail early.
    * Default is no maximum.

* Any of these can be used on their own, or combined; see the example after this list.

* Recommendation:
    * Leave the default, equivalent to `--check-max-tb=1`.
    * If excessive output is annoying, set `--check-max-report=10` or some tolerable number.
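For example, combining all three flags on the command line (the numbers are only illustrative):

```
$ pytest --check-max-tb=1 --check-max-report=10 --check-max-fail=20
```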

## Local speedups

The flags above are global settings, and apply to every test in the test run.  

Locally, you can set these values per test.

From `examples/test_example_speedup_funcs.py`:

```python
from pytest_check import check

def test_max_tb():
    check.set_max_tb(2)
    for i in range(1, 11):
        check.equal(i, 100)

def test_max_report():
    check.set_max_report(5)
    for i in range(1, 11):
        check.equal(i, 100)

def test_max_fail():
    check.set_max_fail(5)
    for i in range(1, 11):
        check.equal(i, 100)
```

## Contributing

Contributions are very welcome. Tests can be run with [tox](https://tox.readthedocs.io/en/latest/).
Test coverage is now 100%. Please make sure to keep it at 100%.
If you have an awesome pull request and need help with getting coverage back up, let me know.


## License

Distributed under the terms of the [MIT](http://opensource.org/licenses/MIT) license, "pytest-check" is free and open source software.

## Issues

If you encounter any problems, please [file an issue](https://github.com/okken/pytest-check/issues) along with a detailed description.

## Changelog

See [changelog.md](https://github.com/okken/pytest-check/blob/main/changelog.md)