pytest-oof


Namepytest-oof JSON
Version 1.0.0 PyPI version JSON
download
home_pagehttps://github.com/jeffwright13/pytest-oof
SummaryA Pytest plugin providing structured access to a test run's results
upload_time2023-12-04 06:47:16
maintainer
docs_urlNone
authorJeff Wright
requires_python>=3.8
licenseMIT
keywords pytest pytest-plugin testing
VCS
bugtrack_url
requirements No requirements were recorded.
Travis-CI No Travis.
coveralls test coverage No coveralls.
            # pytest-oof: pytest Outcomes and Output-Fields

## A pytest plugin providing structured access to post-run pytest results

### Test Outcomes:
- Passes
- Failures
- Errors
- Skips
- Xfails
- XPasses
- Warnings
- Reruns

### Grouped Reruns:
- Rerun tests listed individually
- Reruns listed by "rerun group" (i.e. all reruns of a given test, with final outcome assigned to group)

### Test Output Fields (aka "sections"):
- test_session_starts
- errors
- failures
- passes
- warnings_summary
- rerun_test_summary
- short_test_summary
- lastline

## Target Audience:
- Pytest plugin developers and others who need access to pytest's results after a test run has completed
- Testers who want a summary of their test run *as reported by pytest on the console* (doesn't get more authoritative than that), without having to parse pytest's complex console output
- Taylor Swift fans

# Installation

## Standard install
`pip install -i https://test.pypi.org/simple/ pytest-oof`

## For Local Development
- Clone the repo
- Make a venv; required dependencies are:
  - pytest (*duh*)
  - rich
  - strip-ansi
  - single-source
  - pytest-rerunfailures (if you want to run the demo tests)
  - faker (if you want to run the demo tests)
- Install the plugin: `pip install .`
- Use as below:
    - Run the demo console script: `oofda` (specify `--help` for options)
    - In your own code, `from pytest-oof.utils import Results` and use as you wish
    - In your `conftest.py`, use the custom hook as you wish


# Usage


## Demo Script

First, run your pytest campaign with the `--oof` option:

`$ pytest --oof`

This generates two files in the `/oof` directory:
- oof/results.pickle: a pickled collection of dataclasses representing all results in an easy-to-consume format
- oof/terminal_output.ansi: a copy of the entire terminal output from your test session, encoded in ANSI escape codes

Now run the included console script `oofda`:

`$ oofda`

This script invokes the example code in `__main__.py`, shows how to consume the oof files, and presents basic results on the console.

Go ahead - compare the results with the last line of output from `pytest --oof` .

## As an Importable Module

Run your pytest campaign with the `--oof` option:

`$ pytest --oof`

Now use as you wish:

```
from pytest_oof.utils import Results

results = Results.from_files(
    results_file_path="oof/results.pickle",
    output_file_path="oof/terminal_output.ansi",
)
```

## As a Pytest Plugin with Custom Hook

The 'results' parameter will be filled by pytest when the hook is called.
You can then access the test session data within this block, and do whatever you want with it.

`plugin.py` or `conftest.py`:
```
@pytest.hookimpl
def pytest_oof_results(results):
    print(f"Received results: {results}")
```

### Example output

Here's a quick test that has all of the outcomes and scenarios you might encounter during a typical run.

```
$ pytest --oof

=========================================== test session starts ===========================================
platform darwin -- Python 3.11.4, pytest-7.4.3, pluggy-1.3.0 -- /Users/jwr003/coding/pytest-oof/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/jwr003/coding/pytest-oof
plugins: oof-0.2.0, anyio-4.0.0, rerunfailures-12.0, tally-1.3.1
collecting ...
collected 11 items

demo-tests/test_basic.py::test_basic_pass_1 PASSED                                                  [  9%]
demo-tests/test_basic.py::test_basic_pass_3_error_in_fixture ERROR                                  [ 18%]
demo-tests/test_basic.py::test_basic_fail_1 FAILED                                                  [ 27%]
demo-tests/test_basic.py::test_basic_skip PASSED                                                    [ 36%]
demo-tests/test_basic.py::test_basic_xfail XFAIL                                                    [ 45%]
demo-tests/test_basic.py::test_basic_xpass XPASS                                                    [ 54%]
demo-tests/test_basic.py::test_basic_warning_1 PASSED                                               [ 63%]
demo-tests/test_basic.py::test_basic_warning_2 PASSED                                               [ 72%]
demo-tests/test_basic.py::test_basic_rerun_pass RERUN                                               [ 81%]
demo-tests/test_basic.py::test_basic_rerun_pass RERUN                                               [ 81%]
demo-tests/test_basic.py::test_basic_rerun_pass PASSED                                              [ 81%]
demo-tests/test_basic.py::test_basic_rerun_fail RERUN                                               [ 90%]
demo-tests/test_basic.py::test_basic_rerun_fail RERUN                                               [ 90%]
demo-tests/test_basic.py::test_basic_rerun_fail FAILED                                              [ 90%]
demo-tests/test_basic.py::test_basic_skip_marker SKIPPED (Skip this test with marker.)              [100%]

================================================= ERRORS ==================================================
__________________________ ERROR at setup of test_basic_pass_3_error_in_fixture ___________________________

fake_data = 'Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui ...odo id ut enim. Morbi ornare, nisi vel consectetur bibendum, nibh elit mollis quam, ac vestibulum velit est at turpis.'

    @pytest.fixture
    def error_fixt(fake_data):
>       raise Exception("Error in fixture")
E       Exception: Error in fixture

demo-tests/test_basic.py:27: Exception
================================================ FAILURES =================================================
____________________________________________ test_basic_fail_1 ____________________________________________

fake_data = 'Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commo... metus feugiat, gravida mi ac, sagittis nisl. Mauris varius sapien sed turpis congue, ac ullamcorper tortor tincidunt.'

    def test_basic_fail_1(fake_data):
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
        logger.debug(fake_data)
>       assert 1 == 2
E       assert 1 == 2

demo-tests/test_basic.py:57: AssertionError
__________________________________________ test_basic_rerun_fail __________________________________________

    @pytest.mark.flaky(reruns=2)
    def test_basic_rerun_fail():
>       assert False
E       assert False

demo-tests/test_basic.py:144: AssertionError
============================================ warnings summary =============================================
demo-tests/test_basic.py::test_basic_warning_1
  /Users/jwr003/coding/pytest-oof/demo-tests/test_basic.py:112: UserWarning: api v1, should use functions from v2
    warnings.warn(UserWarning("api v1, should use functions from v2"))

demo-tests/test_basic.py::test_basic_warning_2
  /Users/jwr003/coding/pytest-oof/demo-tests/test_basic.py:117: UserWarning: api v2, should use functions from v3
    warnings.warn(UserWarning("api v2, should use functions from v3"))

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================= PASSES ==================================================
========================================= rerun test summary info =========================================
RERUN demo-tests/test_basic.py::test_basic_rerun_pass
RERUN demo-tests/test_basic.py::test_basic_rerun_pass
RERUN demo-tests/test_basic.py::test_basic_rerun_fail
RERUN demo-tests/test_basic.py::test_basic_rerun_fail
========================================= short test summary info =========================================
PASSED demo-tests/test_basic.py::test_basic_pass_1
PASSED demo-tests/test_basic.py::test_basic_skip
PASSED demo-tests/test_basic.py::test_basic_warning_1
PASSED demo-tests/test_basic.py::test_basic_warning_2
PASSED demo-tests/test_basic.py::test_basic_rerun_pass
SKIPPED [1] demo-tests/test_basic.py:147: Skip this test with marker.
XFAIL demo-tests/test_basic.py::test_basic_xfail
XPASS demo-tests/test_basic.py::test_basic_xpass
ERROR demo-tests/test_basic.py::test_basic_pass_3_error_in_fixture - Exception: Error in fixture
FAILED demo-tests/test_basic.py::test_basic_fail_1 - assert 1 == 2
FAILED demo-tests/test_basic.py::test_basic_rerun_fail - assert False
======= 2 failed, 5 passed, 1 skipped, 1 xfailed, 1 xpassed, 2 warnings, 1 error, 4 rerun in 0.23s ========
```

And here's the result of the included sample script that consumes pytest-oof's output files. As you can see, you have easy access to all the individual test results, as well as the various sections of the console output.

```
$ oofda

Session start time: 2023-11-05 16:42:48.540273
Session end time: 2023-11-05 16:42:48.804730
Session duration: 0:00:00.264457


Number of tests: 15
Number of passes: 5
Number of failures: 2
Number of errors: 1
Number of skips: 1
Number of xfails: 1
Number of xpasses: 1
Number of warnings: 2
Number or reruns: 4


Output field name: pre_test
Output field content:


Output field name: test_session_starts
Output field content:
[1m=========================================== test session starts
===========================================[0m
platform darwin -- Python 3.11.4, pytest-7.4.3, pluggy-1.3.0 --
/Users/jwr003/coding/pytest-oof/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/jwr003/coding/pytest-oof
plugins: oof-0.2.0, anyio-4.0.0, rerunfailures-12.0, tally-1.3.1
[1mcollecting ...
[0m[1mcollected 11 items
[0m

demo-tests/test_basic.py::test_basic_pass_1 [32mPASSED[0m[32m
[  9%][0m
demo-tests/test_basic.py::test_basic_pass_3_error_in_fixture [31mERROR[0m[31m
[ 18%][0m
demo-tests/test_basic.py::test_basic_fail_1 [31mFAILED[0m[31m
[ 27%][0m
demo-tests/test_basic.py::test_basic_skip [32mPASSED[0m[31m
[ 36%][0m
demo-tests/test_basic.py::test_basic_xfail [33mXFAIL[0m[31m
[ 45%][0m
demo-tests/test_basic.py::test_basic_xpass [33mXPASS[0m[31m
[ 54%][0m
demo-tests/test_basic.py::test_basic_warning_1 [32mPASSED[0m[31m
[ 63%][0m
demo-tests/test_basic.py::test_basic_warning_2 [32mPASSED[0m[31m
[ 72%][0m
demo-tests/test_basic.py::test_basic_rerun_pass [33mRERUN[0m[31m
[ 81%][0m
demo-tests/test_basic.py::test_basic_rerun_pass [33mRERUN[0m[31m
[ 81%][0m
demo-tests/test_basic.py::test_basic_rerun_pass [32mPASSED[0m[31m
[ 81%][0m
demo-tests/test_basic.py::test_basic_rerun_fail [33mRERUN[0m[31m
[ 90%][0m
demo-tests/test_basic.py::test_basic_rerun_fail [33mRERUN[0m[31m
[ 90%][0m
demo-tests/test_basic.py::test_basic_rerun_fail [31mFAILED[0m[31m
[ 90%][0m
demo-tests/test_basic.py::test_basic_skip_marker [33mSKIPPED[0m (Skip this test with marker.)[31m
[100%][0m



Output field name: errors
Output field content:
================================================= ERRORS ==================================================
[31m[1m__________________________ ERROR at setup of test_basic_pass_3_error_in_fixture
___________________________[0m

fake_data = 'Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae
consequatur, vel illum qui ...odo id ut enim. Morbi ornare, nisi vel consectetur bibendum, nibh elit mollis
quam, ac vestibulum velit est at turpis.'

    [37m@pytest[39;49;00m.fixture[90m[39;49;00m
    [94mdef[39;49;00m [92merror_fixt[39;49;00m(fake_data):[90m[39;49;00m
>       [94mraise[39;49;00m [96mException[39;49;00m([33m"[39;49;00m[33mError in
fixture[39;49;00m[33m"[39;49;00m)[90m[39;49;00m
[1m[31mE       Exception: Error in fixture[0m

[1m[31mdemo-tests/test_basic.py[0m:27: Exception


Output field name: failures
Output field content:
================================================ FAILURES =================================================
[31m[1m____________________________________________ test_basic_fail_1
____________________________________________[0m

fake_data = 'Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi
ut aliquid ex ea commo... metus feugiat, gravida mi ac, sagittis nisl. Mauris varius sapien sed turpis
congue, ac ullamcorper tortor tincidunt.'

    [94mdef[39;49;00m [92mtest_basic_fail_1[39;49;00m(fake_data):[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
        logger.debug(fake_data)[90m[39;49;00m
>       [94massert[39;49;00m [94m1[39;49;00m == [94m2[39;49;00m[90m[39;49;00m
[1m[31mE       assert 1 == 2[0m

[1m[31mdemo-tests/test_basic.py[0m:57: AssertionError
[31m[1m__________________________________________ test_basic_rerun_fail
__________________________________________[0m

    [37m@pytest[39;49;00m.mark.flaky(reruns=[94m2[39;49;00m)[90m[39;49;00m
    [94mdef[39;49;00m [92mtest_basic_rerun_fail[39;49;00m():[90m[39;49;00m
>       [94massert[39;49;00m [94mFalse[39;49;00m[90m[39;49;00m
[1m[31mE       assert False[0m

[1m[31mdemo-tests/test_basic.py[0m:144: AssertionError


Output field name: passes
Output field content:
================================================= PASSES ==================================================


Output field name: warnings_summary
Output field content:
[33m============================================ warnings summary
=============================================[0m
demo-tests/test_basic.py::test_basic_warning_1
  /Users/jwr003/coding/pytest-oof/demo-tests/test_basic.py:112: UserWarning: api v1, should use functions
from v2
    warnings.warn(UserWarning("api v1, should use functions from v2"))

demo-tests/test_basic.py::test_basic_warning_2
  /Users/jwr003/coding/pytest-oof/demo-tests/test_basic.py:117: UserWarning: api v2, should use functions
from v3
    warnings.warn(UserWarning("api v2, should use functions from v3"))

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html


Output field name: rerun_test_summary
Output field content:
========================================= rerun test summary info =========================================
RERUN demo-tests/test_basic.py::test_basic_rerun_pass
RERUN demo-tests/test_basic.py::test_basic_rerun_pass
RERUN demo-tests/test_basic.py::test_basic_rerun_fail
RERUN demo-tests/test_basic.py::test_basic_rerun_fail


Output field name: short_test_summary
Output field content:
[36m[1m========================================= short test summary info
=========================================[0m
[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_pass_1[0m
[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_skip[0m
[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_warning_1[0m
[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_warning_2[0m
[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_rerun_pass[0m
[33mSKIPPED[0m [1] demo-tests/test_basic.py:147: Skip this test with marker.
[33mXFAIL[0m demo-tests/test_basic.py::[1mtest_basic_xfail[0m
[33mXPASS[0m demo-tests/test_basic.py::[1mtest_basic_xpass[0m
[31mERROR[0m demo-tests/test_basic.py::[1mtest_basic_pass_3_error_in_fixture[0m - Exception: Error in
fixture
[31mFAILED[0m demo-tests/test_basic.py::[1mtest_basic_fail_1[0m - assert 1 == 2
[31mFAILED[0m demo-tests/test_basic.py::[1mtest_basic_rerun_fail[0m - assert False


Output field name: lastline
Output field content:
[31m======= [31m[1m2 failed[0m, [32m5 passed[0m, [33m1 skipped[0m, [33m1 xfailed[0m, [33m1 xpassed[0m,
[33m2 warnings[0m, [31m[1m1 error[0m, [33m4 rerun[0m[31m in 0.23s[0m[31m ========[0m
```

# Format

`pytest-oof` provides a structured Python object representation of the results of a pytest test run. Esentially, it is a collection of dataclasses, each representing a single test result. The dataclasses are organized into lists/dictionaries, and are pickled to a file for later consumption.

## `Results` (top-level object)

At the highest level you are presented with a `Results` object, defined as follows:

| Attribute | Description |
| --- | --- |
| `session_start_time` | datetime object representing UTC time when test session started |
| `session_stop_time` | datetime object representing UTC time when test session ended |
| `session_duration` | timedelta object representing duration of test session (to µs resolution) |
| `test_results` | a single `TestResults` object (see below for definition, but it is essentially a list of `TestResult` instances, with helpful methods to gather TestResult instances based on outcome) |
| `output_fields` | a dictionary of `OutputField` objects (see below for definition, but basically a dictionary of strings containing the full ANSI-encoded content of a section) |
| `rerun_test_groups` | a single `RerunTestGroup` instance (see below for complete definition) |

The data structures are defined in `pytest_oof/util.py`. The dataclasses are:

### TestResult

A single test result, which is a single test run of a single test.

| attribute | data type | description |
| --- | ---- | --- |
| `nodeid`      | str | canonical test name, with format `source file:::test name` |
| `outcome` | str | the individual outcome of this test |
| `start_time` | datetime | UTC time when test started |
| `duration` | float | duration of test in seconds |
| `caplog` | str | the contents of the captured log |
| `capstderr` | str | the contents of the captured stderr |
| `capstdout` | str | the contents of the captured stdout |
| `longreprtext` | str | the contents of the captured longreprtext |
| `has_warning` | bool | whether or not this test had a warning |
| `to_dict()` | method | returns a dictionary representation of the TestResult object |

### TestResults

A collection of TestResult objects, with convenience methods for accessing subsets of the collection.

| attribute | data type | description |
| --- | ---- | --- |
| `test_results` | list | a list of TestResult objects |
| `all_tests` | method | returns a list of all TestResult objects |
| `all_passes` | method | returns a list of all TestResult objects with outcome == "passed" |
| `all_failures` | method | returns a list of all TestResult objects with outcome == "failed" |
| `all_errors` | method | returns a list of all TestResult objects with outcome == "error" |
| `all_skips` | method | returns a list of all TestResult objects with outcome == "skipped" |
| `all_xfails` | method | returns a list of all TestResult objects with outcome == "xfail" |
| `all_xpasses` | method | returns a list of all TestResult objects with outcome == "xpass" |
| `all_warnings` | method | returns a list of all TestResult objects with outcome == "warning" |
| `all_reruns` | method | returns a list of all TestResult objects with outcome == "rerun" |
| `all_reruns` | method | returns a list of all TestResult objects with outcome == "rerun_group" |

### OutputField

An 'output field' (aka a 'section') is a block of text that is displayed in the terminal
output during a pytest run. It provides additional information about the test run:
warnings, errors, etc.

| attribute | data type | description |
| --- | ---- | --- |
| `name` | str | the name of the output field |
| `content` | str | the full ANSI-encoded content of the output field |

### OutputFields

A collection of all available types of OutputField objects. Not all fields will
be present in every test run. It depends on the plugins that are installed and
which "-r" flags are specified. This plugin forces the use of "-r RA" to ensure
any fields that are available are included in the output.

| attribute | data type | description |
| --- | ---- | --- |
| `test_session_starts` | OutputField | the second output field, which contains the start time of each test |
| `errors` | OutputField | the third output field, which contains the error output of each test |
| `failures` | OutputField | the fourth output field, which contains the failure output of each test |
| `passes` | OutputField | the fifth output field, which contains the pass output of each test |
| `warnings_summary` | OutputField | the sixth output field, which contains a summary of warnings |
| `rerun_test_summary` | OutputField | the seventh output field, which contains a summary of rerun tests |
| `short_test_summary` | OutputField | the eighth output field, which contains a summary of test outcomes |
| `lastline` | OutputField | the ninth output field, which contains the last line of terminal output |

### RerunTestGroup

'RerunTestGroup': a single test that has been run multiple times using the 'pytest-rerunfailures' plugin

| attribute | data type | description |
| --- | ---- | --- |
| `nodeid` | str | canonical test name, with format `source file:::test name` |
| `final_outcome` | str | the final outcome of the test group |
| `final_test` | TestResult | the final TestResult object of the test group |
| `forerunners` | list | a list of TestResult objects that were rerun |
| `full_test_list` | list | a chronological list of all TestResult objects in the test group |




# Limitations and Disclaimer

`pytest-oof` uses pytest's console output in order to generate its results. This means that if pytest changes its output format, `pytest-oof` may break. I will do my best to keep up with changes to pytest, but I make no guarantees. So far the same algorithm has held up for 2+ years, but who knows what the pytest devs will do next?

Because it is parsing the console output, it also means that you won't have access to the results until after the test run has completed (specifically, in `pytest_unconfigure`). Once the test run is over, you are left with two files, as discussed above. If you want to consume a test run's results in real-time, you'll need to use pytest's hooks, and/or other plugins (see below for other suggestions).

I developed the algorithm used in this plugin while writing [pytest-tui](https://github.com/jeffwright13/pytest-tui), because I couldn't find another way to correctly determine the outcome types for the more esoteric outcomes like XPass, XFail, or Rerun. I knew there was a way to determine some of this from analyzing succesive TestReport objects, but that still didn't do Reruns correctly, nor Warnings (which are technically not an outcome, but a field in the console output). This plugin gives you all that, plus a string of the individual fields/sections of the console output (like "warnings_summary," "errors," "failures," etc).

If you have any problems or questions with pytest-oof, open an issue. I'll do my best to address it.

# Other Ways to Get Test Run Info ##

[pytest's junitxml](https://docs.pytest.org/en/6.2.x/usage.html#creating-junitxml-format-files)
[pytest-json-report](https://pypi.org/project/pytest-json-report/)


I also have code that outputs JSON-formatted results in real-time (part of [pytest-tally](https://github.com/jeffwright13/pytest-tally)). This code does *not* rely on the console output, intead getting its information from internal TestReport ojects as they are populated during a test run. In that respect, they are less fragile than pytest-oof. This method gets close to providing a complete representation of a test run's information, but does not include fields/sections, nor does it correectly handle all ways of skipping tests. However, that code is embedded in the tally library and is not prductized. I may do so and include it here in the future if there is any demand.

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/jeffwright13/pytest-oof",
    "name": "pytest-oof",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": "",
    "keywords": "pytest pytest-plugin testing",
    "author": "Jeff Wright",
    "author_email": "jeff.washcloth@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/a3/3f/f4be486f853256b6872b8e7b1b018e02d5e99e43bcb72f17db8d6d314381/pytest-oof-1.0.0.tar.gz",
    "platform": null,
    "description": "# pytest-oof: pytest Outcomes and Output-Fields\n\n## A pytest plugin providing structured access to post-run pytest results\n\n### Test Outcomes:\n- Passes\n- Failures\n- Errors\n- Skips\n- Xfails\n- XPasses\n- Warnings\n- Reruns\n\n### Grouped Reruns:\n- Rerun tests listed individually\n- Reruns listed by \"rerun group\" (i.e. all reruns of a given test, with final outcome assigned to group)\n\n### Test Output Fields (aka \"sections\"):\n- test_session_starts\n- errors\n- failures\n- passes\n- warnings_summary\n- rerun_test_summary\n- short_test_summary\n- lastline\n\n## Target Audience:\n- Pytest plugin developers and others who need access to pytest's results after a test run has completed\n- Testers who want a summary of their test run *as reported by pytest on the console* (doesn't get more authoritative than that), without having to parse pytest's complex console output\n- Taylor Swift fans\n\n# Installation\n\n## Standard install\n`pip install -i https://test.pypi.org/simple/ pytest-oof`\n\n## For Local Development\n- Clone the repo\n- Make a venv; required dependencies are:\n  - pytest (*duh*)\n  - rich\n  - strip-ansi\n  - single-source\n  - pytest-rerunfailures (if you want to run the demo tests)\n  - faker (if you want to run the demo tests)\n- Install the plugin: `pip install .`\n- Use as below:\n    - Run the demo console script: `oofda` (specify `--help` for options)\n    - In your own code, `from pytest-oof.utils import Results` and use as you wish\n    - In your `conftest.py`, use the custom hook as you wish\n\n\n# Usage\n\n\n## Demo Script\n\nFirst, run your pytest campaign with the `--oof` option:\n\n`$ pytest --oof`\n\nThis generates two files in the `/oof` directory:\n- oof/results.pickle: a pickled collection of dataclasses representing all results in an easy-to-consume format\n- oof/terminal_output.ansi: a copy of the entire terminal output from your test session, encoded in ANSI escape codes\n\nNow run the included console script `oofda`:\n\n`$ oofda`\n\nThis script invokes the example code in `__main__.py`, shows how to consume the oof files, and presents basic results on the console.\n\nGo ahead - compare the results with the last line of output from `pytest --oof` .\n\n## As an Importable Module\n\nRun your pytest campaign with the `--oof` option:\n\n`$ pytest --oof`\n\nNow use as you wish:\n\n```\nfrom pytest_oof.utils import Results\n\nresults = Results.from_files(\n    results_file_path=\"oof/results.pickle\",\n    output_file_path=\"oof/terminal_output.ansi\",\n)\n```\n\n## As a Pytest Plugin with Custom Hook\n\nThe 'results' parameter will be filled by pytest when the hook is called.\nYou can then access the test session data within this block, and do whatever you want with it.\n\n`plugin.py` or `conftest.py`:\n```\n@pytest.hookimpl\ndef pytest_oof_results(results):\n    print(f\"Received results: {results}\")\n```\n\n### Example output\n\nHere's a quick test that has all of the outcomes and scenarios you might encounter during a typical run.\n\n```\n$ pytest --oof\n\n=========================================== test session starts ===========================================\nplatform darwin -- Python 3.11.4, pytest-7.4.3, pluggy-1.3.0 -- /Users/jwr003/coding/pytest-oof/venv/bin/python\ncachedir: .pytest_cache\nrootdir: /Users/jwr003/coding/pytest-oof\nplugins: oof-0.2.0, anyio-4.0.0, rerunfailures-12.0, tally-1.3.1\ncollecting ...\ncollected 11 items\n\ndemo-tests/test_basic.py::test_basic_pass_1 PASSED                                                  [  9%]\ndemo-tests/test_basic.py::test_basic_pass_3_error_in_fixture ERROR                                  [ 18%]\ndemo-tests/test_basic.py::test_basic_fail_1 FAILED                                                  [ 27%]\ndemo-tests/test_basic.py::test_basic_skip PASSED                                                    [ 36%]\ndemo-tests/test_basic.py::test_basic_xfail XFAIL                                                    [ 45%]\ndemo-tests/test_basic.py::test_basic_xpass XPASS                                                    [ 54%]\ndemo-tests/test_basic.py::test_basic_warning_1 PASSED                                               [ 63%]\ndemo-tests/test_basic.py::test_basic_warning_2 PASSED                                               [ 72%]\ndemo-tests/test_basic.py::test_basic_rerun_pass RERUN                                               [ 81%]\ndemo-tests/test_basic.py::test_basic_rerun_pass RERUN                                               [ 81%]\ndemo-tests/test_basic.py::test_basic_rerun_pass PASSED                                              [ 81%]\ndemo-tests/test_basic.py::test_basic_rerun_fail RERUN                                               [ 90%]\ndemo-tests/test_basic.py::test_basic_rerun_fail RERUN                                               [ 90%]\ndemo-tests/test_basic.py::test_basic_rerun_fail FAILED                                              [ 90%]\ndemo-tests/test_basic.py::test_basic_skip_marker SKIPPED (Skip this test with marker.)              [100%]\n\n================================================= ERRORS ==================================================\n__________________________ ERROR at setup of test_basic_pass_3_error_in_fixture ___________________________\n\nfake_data = 'Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui ...odo id ut enim. Morbi ornare, nisi vel consectetur bibendum, nibh elit mollis quam, ac vestibulum velit est at turpis.'\n\n    @pytest.fixture\n    def error_fixt(fake_data):\n>       raise Exception(\"Error in fixture\")\nE       Exception: Error in fixture\n\ndemo-tests/test_basic.py:27: Exception\n================================================ FAILURES =================================================\n____________________________________________ test_basic_fail_1 ____________________________________________\n\nfake_data = 'Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commo... metus feugiat, gravida mi ac, sagittis nisl. Mauris varius sapien sed turpis congue, ac ullamcorper tortor tincidunt.'\n\n    def test_basic_fail_1(fake_data):\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n        logger.debug(fake_data)\n>       assert 1 == 2\nE       assert 1 == 2\n\ndemo-tests/test_basic.py:57: AssertionError\n__________________________________________ test_basic_rerun_fail __________________________________________\n\n    @pytest.mark.flaky(reruns=2)\n    def test_basic_rerun_fail():\n>       assert False\nE       assert False\n\ndemo-tests/test_basic.py:144: AssertionError\n============================================ warnings summary =============================================\ndemo-tests/test_basic.py::test_basic_warning_1\n  /Users/jwr003/coding/pytest-oof/demo-tests/test_basic.py:112: UserWarning: api v1, should use functions from v2\n    warnings.warn(UserWarning(\"api v1, should use functions from v2\"))\n\ndemo-tests/test_basic.py::test_basic_warning_2\n  /Users/jwr003/coding/pytest-oof/demo-tests/test_basic.py:117: UserWarning: api v2, should use functions from v3\n    warnings.warn(UserWarning(\"api v2, should use functions from v3\"))\n\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\n================================================= PASSES ==================================================\n========================================= rerun test summary info =========================================\nRERUN demo-tests/test_basic.py::test_basic_rerun_pass\nRERUN demo-tests/test_basic.py::test_basic_rerun_pass\nRERUN demo-tests/test_basic.py::test_basic_rerun_fail\nRERUN demo-tests/test_basic.py::test_basic_rerun_fail\n========================================= short test summary info =========================================\nPASSED demo-tests/test_basic.py::test_basic_pass_1\nPASSED demo-tests/test_basic.py::test_basic_skip\nPASSED demo-tests/test_basic.py::test_basic_warning_1\nPASSED demo-tests/test_basic.py::test_basic_warning_2\nPASSED demo-tests/test_basic.py::test_basic_rerun_pass\nSKIPPED [1] demo-tests/test_basic.py:147: Skip this test with marker.\nXFAIL demo-tests/test_basic.py::test_basic_xfail\nXPASS demo-tests/test_basic.py::test_basic_xpass\nERROR demo-tests/test_basic.py::test_basic_pass_3_error_in_fixture - Exception: Error in fixture\nFAILED demo-tests/test_basic.py::test_basic_fail_1 - assert 1 == 2\nFAILED demo-tests/test_basic.py::test_basic_rerun_fail - assert False\n======= 2 failed, 5 passed, 1 skipped, 1 xfailed, 1 xpassed, 2 warnings, 1 error, 4 rerun in 0.23s ========\n```\n\nAnd here's the result of the included sample script that consumes pytest-oof's output files. As you can see, you have easy access to all the individual test results, as well as the various sections of the console output.\n\n```\n$ oofda\n\nSession start time: 2023-11-05 16:42:48.540273\nSession end time: 2023-11-05 16:42:48.804730\nSession duration: 0:00:00.264457\n\n\nNumber of tests: 15\nNumber of passes: 5\nNumber of failures: 2\nNumber of errors: 1\nNumber of skips: 1\nNumber of xfails: 1\nNumber of xpasses: 1\nNumber of warnings: 2\nNumber or reruns: 4\n\n\nOutput field name: pre_test\nOutput field content:\n\n\nOutput field name: test_session_starts\nOutput field content:\n[1m=========================================== test session starts\n===========================================[0m\nplatform darwin -- Python 3.11.4, pytest-7.4.3, pluggy-1.3.0 --\n/Users/jwr003/coding/pytest-oof/venv/bin/python\ncachedir: .pytest_cache\nrootdir: /Users/jwr003/coding/pytest-oof\nplugins: oof-0.2.0, anyio-4.0.0, rerunfailures-12.0, tally-1.3.1\n[1mcollecting ...\n[0m[1mcollected 11 items\n[0m\n\ndemo-tests/test_basic.py::test_basic_pass_1 [32mPASSED[0m[32m\n[  9%][0m\ndemo-tests/test_basic.py::test_basic_pass_3_error_in_fixture [31mERROR[0m[31m\n[ 18%][0m\ndemo-tests/test_basic.py::test_basic_fail_1 [31mFAILED[0m[31m\n[ 27%][0m\ndemo-tests/test_basic.py::test_basic_skip [32mPASSED[0m[31m\n[ 36%][0m\ndemo-tests/test_basic.py::test_basic_xfail [33mXFAIL[0m[31m\n[ 45%][0m\ndemo-tests/test_basic.py::test_basic_xpass [33mXPASS[0m[31m\n[ 54%][0m\ndemo-tests/test_basic.py::test_basic_warning_1 [32mPASSED[0m[31m\n[ 63%][0m\ndemo-tests/test_basic.py::test_basic_warning_2 [32mPASSED[0m[31m\n[ 72%][0m\ndemo-tests/test_basic.py::test_basic_rerun_pass [33mRERUN[0m[31m\n[ 81%][0m\ndemo-tests/test_basic.py::test_basic_rerun_pass [33mRERUN[0m[31m\n[ 81%][0m\ndemo-tests/test_basic.py::test_basic_rerun_pass [32mPASSED[0m[31m\n[ 81%][0m\ndemo-tests/test_basic.py::test_basic_rerun_fail [33mRERUN[0m[31m\n[ 90%][0m\ndemo-tests/test_basic.py::test_basic_rerun_fail [33mRERUN[0m[31m\n[ 90%][0m\ndemo-tests/test_basic.py::test_basic_rerun_fail [31mFAILED[0m[31m\n[ 90%][0m\ndemo-tests/test_basic.py::test_basic_skip_marker [33mSKIPPED[0m (Skip this test with marker.)[31m\n[100%][0m\n\n\n\nOutput field name: errors\nOutput field content:\n================================================= ERRORS ==================================================\n[31m[1m__________________________ ERROR at setup of test_basic_pass_3_error_in_fixture\n___________________________[0m\n\nfake_data = 'Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae\nconsequatur, vel illum qui ...odo id ut enim. Morbi ornare, nisi vel consectetur bibendum, nibh elit mollis\nquam, ac vestibulum velit est at turpis.'\n\n    [37m@pytest[39;49;00m.fixture[90m[39;49;00m\n    [94mdef[39;49;00m [92merror_fixt[39;49;00m(fake_data):[90m[39;49;00m\n>       [94mraise[39;49;00m [96mException[39;49;00m([33m\"[39;49;00m[33mError in\nfixture[39;49;00m[33m\"[39;49;00m)[90m[39;49;00m\n[1m[31mE       Exception: Error in fixture[0m\n\n[1m[31mdemo-tests/test_basic.py[0m:27: Exception\n\n\nOutput field name: failures\nOutput field content:\n================================================ FAILURES =================================================\n[31m[1m____________________________________________ test_basic_fail_1\n____________________________________________[0m\n\nfake_data = 'Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi\nut aliquid ex ea commo... metus feugiat, gravida mi ac, sagittis nisl. Mauris varius sapien sed turpis\ncongue, ac ullamcorper tortor tincidunt.'\n\n    [94mdef[39;49;00m [92mtest_basic_fail_1[39;49;00m(fake_data):[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n        logger.debug(fake_data)[90m[39;49;00m\n>       [94massert[39;49;00m [94m1[39;49;00m == [94m2[39;49;00m[90m[39;49;00m\n[1m[31mE       assert 1 == 2[0m\n\n[1m[31mdemo-tests/test_basic.py[0m:57: AssertionError\n[31m[1m__________________________________________ test_basic_rerun_fail\n__________________________________________[0m\n\n    [37m@pytest[39;49;00m.mark.flaky(reruns=[94m2[39;49;00m)[90m[39;49;00m\n    [94mdef[39;49;00m [92mtest_basic_rerun_fail[39;49;00m():[90m[39;49;00m\n>       [94massert[39;49;00m [94mFalse[39;49;00m[90m[39;49;00m\n[1m[31mE       assert False[0m\n\n[1m[31mdemo-tests/test_basic.py[0m:144: AssertionError\n\n\nOutput field name: passes\nOutput field content:\n================================================= PASSES ==================================================\n\n\nOutput field name: warnings_summary\nOutput field content:\n[33m============================================ warnings summary\n=============================================[0m\ndemo-tests/test_basic.py::test_basic_warning_1\n  /Users/jwr003/coding/pytest-oof/demo-tests/test_basic.py:112: UserWarning: api v1, should use functions\nfrom v2\n    warnings.warn(UserWarning(\"api v1, should use functions from v2\"))\n\ndemo-tests/test_basic.py::test_basic_warning_2\n  /Users/jwr003/coding/pytest-oof/demo-tests/test_basic.py:117: UserWarning: api v2, should use functions\nfrom v3\n    warnings.warn(UserWarning(\"api v2, should use functions from v3\"))\n\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\n\n\nOutput field name: rerun_test_summary\nOutput field content:\n========================================= rerun test summary info =========================================\nRERUN demo-tests/test_basic.py::test_basic_rerun_pass\nRERUN demo-tests/test_basic.py::test_basic_rerun_pass\nRERUN demo-tests/test_basic.py::test_basic_rerun_fail\nRERUN demo-tests/test_basic.py::test_basic_rerun_fail\n\n\nOutput field name: short_test_summary\nOutput field content:\n[36m[1m========================================= short test summary info\n=========================================[0m\n[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_pass_1[0m\n[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_skip[0m\n[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_warning_1[0m\n[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_warning_2[0m\n[32mPASSED[0m demo-tests/test_basic.py::[1mtest_basic_rerun_pass[0m\n[33mSKIPPED[0m [1] demo-tests/test_basic.py:147: Skip this test with marker.\n[33mXFAIL[0m demo-tests/test_basic.py::[1mtest_basic_xfail[0m\n[33mXPASS[0m demo-tests/test_basic.py::[1mtest_basic_xpass[0m\n[31mERROR[0m demo-tests/test_basic.py::[1mtest_basic_pass_3_error_in_fixture[0m - Exception: Error in\nfixture\n[31mFAILED[0m demo-tests/test_basic.py::[1mtest_basic_fail_1[0m - assert 1 == 2\n[31mFAILED[0m demo-tests/test_basic.py::[1mtest_basic_rerun_fail[0m - assert False\n\n\nOutput field name: lastline\nOutput field content:\n[31m======= [31m[1m2 failed[0m, [32m5 passed[0m, [33m1 skipped[0m, [33m1 xfailed[0m, [33m1 xpassed[0m,\n[33m2 warnings[0m, [31m[1m1 error[0m, [33m4 rerun[0m[31m in 0.23s[0m[31m ========[0m\n```\n\n# Format\n\n`pytest-oof` provides a structured Python object representation of the results of a pytest test run. Esentially, it is a collection of dataclasses, each representing a single test result. The dataclasses are organized into lists/dictionaries, and are pickled to a file for later consumption.\n\n## `Results` (top-level object)\n\nAt the highest level you are presented with a `Results` object, defined as follows:\n\n| Attribute | Description |\n| --- | --- |\n| `session_start_time` | datetime object representing UTC time when test session started |\n| `session_stop_time` | datetime object representing UTC time when test session ended |\n| `session_duration` | timedelta object representing duration of test session (to \u00b5s resolution) |\n| `test_results` | a single `TestResults` object (see below for definition, but it is essentially a list of `TestResult` instances, with helpful methods to gather TestResult instances based on outcome) |\n| `output_fields` | a dictionary of `OutputField` objects (see below for definition, but basically a dictionary of strings containing the full ANSI-encoded content of a section) |\n| `rerun_test_groups` | a single `RerunTestGroup` instance (see below for complete definition) |\n\nThe data structures are defined in `pytest_oof/util.py`. The dataclasses are:\n\n### TestResult\n\nA single test result, which is a single test run of a single test.\n\n| attribute | data type | description |\n| --- | ---- | --- |\n| `nodeid`      | str | canonical test name, with format `source file:::test name` |\n| `outcome` | str | the individual outcome of this test |\n| `start_time` | datetime | UTC time when test started |\n| `duration` | float | duration of test in seconds |\n| `caplog` | str | the contents of the captured log |\n| `capstderr` | str | the contents of the captured stderr |\n| `capstdout` | str | the contents of the captured stdout |\n| `longreprtext` | str | the contents of the captured longreprtext |\n| `has_warning` | bool | whether or not this test had a warning |\n| `to_dict()` | method | returns a dictionary representation of the TestResult object |\n\n### TestResults\n\nA collection of TestResult objects, with convenience methods for accessing subsets of the collection.\n\n| attribute | data type | description |\n| --- | ---- | --- |\n| `test_results` | list | a list of TestResult objects |\n| `all_tests` | method | returns a list of all TestResult objects |\n| `all_passes` | method | returns a list of all TestResult objects with outcome == \"passed\" |\n| `all_failures` | method | returns a list of all TestResult objects with outcome == \"failed\" |\n| `all_errors` | method | returns a list of all TestResult objects with outcome == \"error\" |\n| `all_skips` | method | returns a list of all TestResult objects with outcome == \"skipped\" |\n| `all_xfails` | method | returns a list of all TestResult objects with outcome == \"xfail\" |\n| `all_xpasses` | method | returns a list of all TestResult objects with outcome == \"xpass\" |\n| `all_warnings` | method | returns a list of all TestResult objects with outcome == \"warning\" |\n| `all_reruns` | method | returns a list of all TestResult objects with outcome == \"rerun\" |\n| `all_reruns` | method | returns a list of all TestResult objects with outcome == \"rerun_group\" |\n\n### OutputField\n\nAn 'output field' (aka a 'section') is a block of text that is displayed in the terminal\noutput during a pytest run. It provides additional information about the test run:\nwarnings, errors, etc.\n\n| attribute | data type | description |\n| --- | ---- | --- |\n| `name` | str | the name of the output field |\n| `content` | str | the full ANSI-encoded content of the output field |\n\n### OutputFields\n\nA collection of all available types of OutputField objects. Not all fields will\nbe present in every test run. It depends on the plugins that are installed and\nwhich \"-r\" flags are specified. This plugin forces the use of \"-r RA\" to ensure\nany fields that are available are included in the output.\n\n| attribute | data type | description |\n| --- | ---- | --- |\n| `test_session_starts` | OutputField | the second output field, which contains the start time of each test |\n| `errors` | OutputField | the third output field, which contains the error output of each test |\n| `failures` | OutputField | the fourth output field, which contains the failure output of each test |\n| `passes` | OutputField | the fifth output field, which contains the pass output of each test |\n| `warnings_summary` | OutputField | the sixth output field, which contains a summary of warnings |\n| `rerun_test_summary` | OutputField | the seventh output field, which contains a summary of rerun tests |\n| `short_test_summary` | OutputField | the eighth output field, which contains a summary of test outcomes |\n| `lastline` | OutputField | the ninth output field, which contains the last line of terminal output |\n\n### RerunTestGroup\n\n'RerunTestGroup': a single test that has been run multiple times using the 'pytest-rerunfailures' plugin\n\n| attribute | data type | description |\n| --- | ---- | --- |\n| `nodeid` | str | canonical test name, with format `source file:::test name` |\n| `final_outcome` | str | the final outcome of the test group |\n| `final_test` | TestResult | the final TestResult object of the test group |\n| `forerunners` | list | a list of TestResult objects that were rerun |\n| `full_test_list` | list | a chronological list of all TestResult objects in the test group |\n\n\n\n\n# Limitations and Disclaimer\n\n`pytest-oof` uses pytest's console output in order to generate its results. This means that if pytest changes its output format, `pytest-oof` may break. I will do my best to keep up with changes to pytest, but I make no guarantees. So far the same algorithm has held up for 2+ years, but who knows what the pytest devs will do next?\n\nBecause it is parsing the console output, it also means that you won't have access to the results until after the test run has completed (specifically, in `pytest_unconfigure`). Once the test run is over, you are left with two files, as discussed above. If you want to consume a test run's results in real-time, you'll need to use pytest's hooks, and/or other plugins (see below for other suggestions).\n\nI developed the algorithm used in this plugin while writing [pytest-tui](https://github.com/jeffwright13/pytest-tui), because I couldn't find another way to correctly determine the outcome types for the more esoteric outcomes like XPass, XFail, or Rerun. I knew there was a way to determine some of this from analyzing succesive TestReport objects, but that still didn't do Reruns correctly, nor Warnings (which are technically not an outcome, but a field in the console output). This plugin gives you all that, plus a string of the individual fields/sections of the console output (like \"warnings_summary,\" \"errors,\" \"failures,\" etc).\n\nIf you have any problems or questions with pytest-oof, open an issue. I'll do my best to address it.\n\n# Other Ways to Get Test Run Info ##\n\n[pytest's junitxml](https://docs.pytest.org/en/6.2.x/usage.html#creating-junitxml-format-files)\n[pytest-json-report](https://pypi.org/project/pytest-json-report/)\n\n\nI also have code that outputs JSON-formatted results in real-time (part of [pytest-tally](https://github.com/jeffwright13/pytest-tally)). This code does *not* rely on the console output, intead getting its information from internal TestReport ojects as they are populated during a test run. In that respect, they are less fragile than pytest-oof. This method gets close to providing a complete representation of a test run's information, but does not include fields/sections, nor does it correectly handle all ways of skipping tests. However, that code is embedded in the tally library and is not prductized. I may do so and include it here in the future if there is any demand.\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "A Pytest plugin providing structured access to a test run's results",
    "version": "1.0.0",
    "project_urls": {
        "Homepage": "https://github.com/jeffwright13/pytest-oof"
    },
    "split_keywords": [
        "pytest",
        "pytest-plugin",
        "testing"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a305403e47134fd460b8b0d852767b8fb0b80377a31d65653844b515635d7ec6",
                "md5": "9c4d93a1559f05eb5b0aa808d4420a46",
                "sha256": "8f42ac762906bd262f50676f5633aa8d04af4227849667c9b386f717b0f40ce0"
            },
            "downloads": -1,
            "filename": "pytest_oof-1.0.0-py2.py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "9c4d93a1559f05eb5b0aa808d4420a46",
            "packagetype": "bdist_wheel",
            "python_version": "py2.py3",
            "requires_python": ">=3.8",
            "size": 16910,
            "upload_time": "2023-12-04T06:47:14",
            "upload_time_iso_8601": "2023-12-04T06:47:14.635677Z",
            "url": "https://files.pythonhosted.org/packages/a3/05/403e47134fd460b8b0d852767b8fb0b80377a31d65653844b515635d7ec6/pytest_oof-1.0.0-py2.py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a33ff4be486f853256b6872b8e7b1b018e02d5e99e43bcb72f17db8d6d314381",
                "md5": "a0773753688b2007722bf8d0aa442925",
                "sha256": "743e0a31090466e98a4e2ad3bdb11a058e36d8c7206cf2e58b713e0af23e33cd"
            },
            "downloads": -1,
            "filename": "pytest-oof-1.0.0.tar.gz",
            "has_sig": false,
            "md5_digest": "a0773753688b2007722bf8d0aa442925",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 35276,
            "upload_time": "2023-12-04T06:47:16",
            "upload_time_iso_8601": "2023-12-04T06:47:16.770818Z",
            "url": "https://files.pythonhosted.org/packages/a3/3f/f4be486f853256b6872b8e7b1b018e02d5e99e43bcb72f17db8d6d314381/pytest-oof-1.0.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-12-04 06:47:16",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "jeffwright13",
    "github_project": "pytest-oof",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "pytest-oof"
}
        
Elapsed time: 0.45369s