pytest-jtr

Name: pytest-jtr
Version: 1.2.0
Summary: pytest plugin supporting json test report output
Home page: None
Author: Gleams API user
Maintainer: None
License: MIT
Requires Python: <3.11.0,>=3.8
Keywords: test, pytest, json, report
Upload time: 2024-04-15 10:31:49
Docs URL: None
Requirements: No requirements were recorded.
# Pytest jtr JSON Test Report

[![CI](https://github.com/Gleams-Machine/pytest-jtr/actions/workflows/main.yml/badge.svg)](https://github.com/Gleams-Machine/pytest-jtr/actions/workflows/main.yml)

Forked from [numirias/pytest-json-report](https://github.com/numirias/pytest-json-report)

This pytest plugin creates test reports as JSON files. This can make it easier to process test results in other applications.

It can report a summary, test details, captured output, logs, exception tracebacks and more. Additionally, you can use the available fixtures and hooks to [add metadata](#metadata) and [customize](#modifying-the-report) the report as you like.

## Table of contents

* [Installation](#installation)
* [Options](#options)
* [Usage](#usage)
   * [Metadata](#metadata)
   * [Modifying the report](#modifying-the-report)
   * [Direct invocation](#direct-invocation)
* [Format](#format)
   * [Summary](#summary)
   * [Environment](#environment)
   * [Collectors](#collectors)
   * [Tests](#tests)
   * [Test stage](#test-stage)
   * [Log](#log)
   * [Warnings](#warnings)

## Installation

```bash
pip install pytest-jtr
# or
poetry add pytest-jtr
```

## Options

| Option | Description |
| --- | --- |
| `--json-report` | Create JSON report |
| `--json-report-file=PATH` | Target path to save JSON report (use "none" to not save the report) |
| `--json-report-summary` | Just create a summary without per-test details |
| `--json-report-omit=FIELD_LIST` | List of fields to omit in the report (choose from: `collectors`, `log`, `traceback`, `streams`, `warnings`, `keywords`) |
| `--json-report-indent=LEVEL` | Pretty-print JSON with specified indentation level |
| `--json-report-verbosity=LEVEL` | Set verbosity (default is value of `--verbosity`) |

## Usage

Just run pytest with `--json-report`. The report is saved in `.report.json` by default.

```bash
$ pytest --json-report -v tests/
$ cat .report.json
{"created": 1518371686.7981803, ... "tests":[{"nodeid": "test_foo.py", "outcome": "passed", ...}, ...]}
```

If you just need to know how many tests passed or failed and don't care about details, you can produce a summary only:

```bash
$ pytest --json-report --json-report-summary
```

Many fields can be omitted to keep the report size small. E.g., this will leave out keywords and stdout/stderr output:

```bash
$ pytest --json-report --json-report-omit keywords streams
```

If you don't want the report to be saved, you can specify `none` as the target file name:

```bash
$ pytest --json-report --json-report-file none
```
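
The other options from the table work the same way; for example, to pretty-print the report with an indentation level of 4:

```bash
$ pytest --json-report --json-report-indent=4
```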

## Advanced usage

### Metadata

The easiest way to add your own metadata to a test item is by using the `json_metadata` [test fixture](https://docs.pytest.org/en/stable/fixture.html):

```python
def test_something(json_metadata):
    json_metadata['foo'] = {"some": "thing"}
    json_metadata['bar'] = 123
```

Or use the `pytest_json_runtest_metadata` [hook](https://docs.pytest.org/en/stable/reference.html#hooks) (in your `conftest.py`) to add metadata based on the current test run. The dict returned will automatically be merged with any existing metadata. E.g., this adds the start and stop time of each test's `call` stage:

```python
def pytest_json_runtest_metadata(item, call):
    if call.when != 'call':
        return {}
    return {'start': call.start, 'stop': call.stop}
```

Alternatively, you can add metadata using [pytest-metadata's `--metadata` switch](https://github.com/pytest-dev/pytest-metadata#additional-metadata), which adds the metadata to the report's `environment` section rather than to a specific test item. Make sure all your metadata is JSON-serializable.
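
For instance (assuming [pytest-metadata](https://github.com/pytest-dev/pytest-metadata) is installed), the following is expected to add a `"foo": "bar"` entry to the report's `environment` section:

```bash
$ pytest --json-report --metadata foo bar
```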

### A note on hooks

If you implement a `pytest_json_*` hook while the plugin is not installed or not active (i.e., pytest is run without `--json-report`), pytest doesn't recognize it and may fail with an internal error like this:
```
INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>
```
You can avoid this by declaring the hook implementation optional:

```python
import pytest

@pytest.hookimpl(optionalhook=True)
def pytest_json_runtest_metadata(item, call):
    ...
```

### Modifying the report

You can modify the entire report before it's saved by using the `pytest_json_modifyreport` hook.

Just implement the hook in your `conftest.py`, e.g.:

```python
def pytest_json_modifyreport(json_report):
    # Add a key to the report
    json_report['foo'] = 'bar'
    # Delete the summary from the report
    del json_report['summary']
```

After `pytest_sessionfinish`, the report object is also directly available via `config._json_report.report`, so you can access it from a built-in hook:

```python
def pytest_sessionfinish(session):
    report = session.config._json_report.report
    print('exited with', report['exitcode'])
```

If you *really* want to change how the result of a test stage run is turned into JSON, you can use the `pytest_json_runtest_stage` hook. It takes a [`TestReport`](https://docs.pytest.org/en/latest/reference.html#_pytest.runner.TestReport) and returns a JSON-serializable dict:

```python
def pytest_json_runtest_stage(report):
    return {'outcome': report.outcome}
```

### Direct invocation

You can use the plugin when invoking `pytest.main()` directly from code:

```python
import pytest
from pytest_jtr.plugin import JSONReport

plugin = JSONReport()
pytest.main(['--json-report-file=none', 'test_foo.py'], plugins=[plugin])
```

You can then access the `report` object:

```python
print(plugin.report)
```

And save the report manually:

```python
plugin.save_report('/tmp/my_report.json')
```
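
Putting these pieces together, here is a minimal sketch of a direct invocation that inspects the in-memory report (the `summary` and `exitcode` keys are described in the [Format](#format) section below):

```python
import pytest
from pytest_jtr.plugin import JSONReport

plugin = JSONReport()
exit_code = pytest.main(["--json-report-file=none", "test_foo.py"], plugins=[plugin])

# plugin.report is a plain dict with the layout described under "Format".
summary = plugin.report["summary"]
print("pytest exit code:", exit_code)
print("tests collected:", summary["collected"])

# Optionally persist the report afterwards.
plugin.save_report("/tmp/my_report.json")
```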


## Format

The JSON report contains metadata of the session, a summary, collectors, tests and warnings. You can find a sample report in [`sample_report.json`](sample_report.json).

| Key | Description |
| --- | --- |
| `created` | Report creation date. (Unix time) |
| `duration` | Session duration in seconds. |
| `exitcode` | Process exit code as listed [in the pytest docs](https://docs.pytest.org/en/latest/usage.html#possible-exit-codes). The exit code is a quick way to tell if any tests failed, an internal error occurred, etc. |
| `root` | Absolute root path from which the session was started. |
| `environment` | [Environment](#environment) entry. |
| `summary` | [Summary](#summary) entry. |
| `collectors` | [Collectors](#collectors) entry. (absent if `--json-report-summary` or if no collectors)  |
| `tests` | [Tests](#tests) entry. (absent if `--json-report-summary`)  |
| `warnings` | [Warnings](#warnings) entry. (absent if `--json-report-summary` or if no warnings)  |

#### Example

```python
{
    "created": 1518371686.7981803,
    "duration": 0.1235666275024414,
    "exitcode": 1,
    "root": "/path/to/tests",
    "environment": ENVIRONMENT,
    "summary": SUMMARY,
    "collectors": COLLECTORS,
    "tests": TESTS,
    "warnings": WARNINGS,
}
```
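
Since the report is plain JSON, consuming it from another tool is straightforward. A minimal sketch, assuming a report was saved to the default `.report.json` path:

```python
import json

# Load a previously saved report (default location: .report.json).
with open(".report.json") as f:
    report = json.load(f)

print("exit code:", report["exitcode"])
print("duration:", round(report["duration"], 2), "seconds")
```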

### Summary

Number of outcomes per category and the total number of test items.

| Key | Description |
| --- | --- |
|  `collected` | Total number of tests collected. |
|  `total` | Total number of tests run. |
|  `deselected` | Total number of tests deselected. (absent if number is 0) |
| `<outcome>` | Number of tests with that outcome. (absent if number is 0) |

#### Example

```python
{
    "collected": 10,
    "passed": 2,
    "failed": 3,
    "xfailed": 1,
    "xpassed": 1,
    "error": 2,
    "skipped": 1,
    "total": 10
}
```
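
Because zero counts are omitted, it is safest to read outcome counts with a default. A short sketch, using the example data above as a stand-in for `report["summary"]`:

```python
# Hypothetical summary dict taken from report["summary"]
summary = {"collected": 10, "passed": 2, "failed": 3, "xfailed": 1,
           "xpassed": 1, "error": 2, "skipped": 1, "total": 10}

passed = summary.get("passed", 0)                      # absent key means a count of 0
broken = summary.get("failed", 0) + summary.get("error", 0)
print(f"{passed}/{summary['total']} passed, {broken} failed or errored")
```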

### Environment

The environment section is provided by [pytest-metadata](https://github.com/pytest-dev/pytest-metadata). All metadata given by that plugin will be added here, so you need to make sure it is JSON-serializable.

#### Example

```python
{
    "Python": "3.6.4",
    "Platform": "Linux-4.56.78-9-ARCH-x86_64-with-arch",
    "Packages": {
        "pytest": "3.4.0",
        "py": "1.5.2",
        "pluggy": "0.6.0"
    },
    "Plugins": {
        "json-report": "0.4.1",
        "xdist": "1.22.0",
        "metadata": "1.5.1",
        "forked": "0.2",
        "cov": "2.5.1"
    },
    "foo": "bar", # Custom metadata entry passed via pytest-metadata
}
```

### Collectors

A list of collector nodes. These are useful to check what tests are available without running them, or to debug an error during test discovery.

| Key | Description |
| --- | --- |
| `nodeid` | ID of the collector node. ([See docs](https://docs.pytest.org/en/latest/example/markers.html#node-id)) The root node has an empty node ID. |
| `outcome` | Outcome of the collection. (Not the test outcome!) |
| `result` | Nodes collected by the collector. |
| `longrepr` | Representation of the collection error. (absent if no error occurred) |

The `result` is a list of the collected nodes:

| Key | Description |
| --- | --- |
| `nodeid` | ID of the node. |
| `type` | Type of the collected node. |
| `lineno` | Line number. (absent if not applicable) |
| `deselected` | `true` if the test is deselected. (absent if not deselected) |

#### Example

```python
[
    {
        "nodeid": "",
        "outcome": "passed",
        "result": [
            {
                "nodeid": "test_foo.py",
                "type": "Module"
            }
        ]
    },
    {
        "nodeid": "test_foo.py",
        "outcome": "passed",
        "result": [
            {
                "nodeid": "test_foo.py::test_pass",
                "type": "Function",
                "lineno": 24,
                "deselected": true
            },
            ...
        ]
    },
    {
        "nodeid": "test_bar.py",
        "outcome": "failed",
        "result": [],
        "longrepr": "/usr/lib/python3.6 ... invalid syntax"
    },
    ...
]
```
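
For example, here is a sketch that lists the IDs of all collected test functions, assuming `report` is a loaded report dict with a `collectors` entry:

```python
# Collect the node IDs of all test functions discovered by the collectors.
test_ids = [
    entry["nodeid"]
    for collector in report.get("collectors", [])
    for entry in collector["result"]
    if entry["type"] == "Function"
]
print("\n".join(test_ids))
```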

### Tests

A list of test nodes. Each completed test stage produces a stage object (`setup`, `call`, `teardown`) with its own `outcome`.

| Key | Description |
| --- | --- |
| `nodeid` | ID of the test node. |
| `lineno` | Line number where the test starts. |
| `keywords` | List of keywords and markers associated with the test. |
| `outcome` | Outcome of the test run. |
| `{setup, call, teardown}` | [Test stage](#test-stage) entry. To find the error in a failed test you need to check all stages. (absent if stage didn't run) |
| `metadata` | [Metadata](#metadata) item. (absent if no metadata) |

#### Example

```python
[
    {
        "nodeid": "test_foo.py::test_fail",
        "lineno": 50,
        "keywords": [
            "test_fail",
            "test_foo.py",
            "test_foo0"
        ],
        "outcome": "failed",
        "setup": TEST_STAGE,
        "call": TEST_STAGE,
        "teardown": TEST_STAGE,
        "metadata": {
            "foo": "bar",
        }
    },
    ...
]
```
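
As noted above, a failure can occur in any stage, so a consumer should check each stage that ran. A sketch, assuming `report` is a loaded report dict:

```python
# Report which stage each failed test broke in.
for test in report.get("tests", []):
    if test["outcome"] != "failed":
        continue
    for stage_name in ("setup", "call", "teardown"):
        stage = test.get(stage_name)          # absent if the stage didn't run
        if stage and stage["outcome"] == "failed":
            print(f"{test['nodeid']} failed during {stage_name}")
```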


### Test stage

A test stage item.

| Key | Description |
| --- | --- |
| `duration` | Duration of the test stage in seconds. |
| `outcome` | Outcome of the test stage. (can be different from the overall test outcome) |
| `crash` | Crash entry. (absent if no error occurred) |
| `traceback` | List of traceback entries. (absent if no error occurred; affected by `--tb` option) |
| `stdout` | Standard output. (absent if none available) |
| `stderr` | Standard error. (absent if none available) |
| `log` | [Log](#log) entry. (absent if none available) |
| `longrepr` | Representation of the error. (absent if no error occurred; format affected by `--tb` option) |

#### Example

```python
{
    "duration": 0.00018835067749023438,
    "outcome": "failed",
    "crash": {
        "path": "/path/to/tests/test_foo.py",
        "lineno": 54,
        "message": "TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'"
    },
    "traceback": [
        {
            "path": "test_foo.py",
            "lineno": 65,
            "message": ""
        },
        {
            "path": "test_foo.py",
            "lineno": 63,
            "message": "in foo"
        },
        {
            "path": "test_foo.py",
            "lineno": 63,
            "message": "in <listcomp>"
        },
        {
            "path": "test_foo.py",
            "lineno": 54,
            "message": "TypeError"
        }
    ],
    "stdout": "foo\nbar\n",
    "stderr": "baz\n",
    "log": LOG,
    "longrepr": "def test_fail_nested():\n ..."
}
```

### Log

A list of log records. The fields of a log record are the [`logging.LogRecord` attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes), with the exception that the fields `exc_info` and `args` are always empty and `msg` contains the formatted log message.

You can apply [`logging.makeLogRecord()`](https://docs.python.org/3/library/logging.html#logging.makeLogRecord) to a log record to convert it back into a `logging.LogRecord` object.

#### Example

```python
[
    {
        "name": "root",
        "msg": "This is a warning.",
        "args": null,
        "levelname": "WARNING",
        "levelno": 30,
        "pathname": "/path/to/tests/test_foo.py",
        "filename": "test_foo.py",
        "module": "test_foo",
        "exc_info": null,
        "exc_text": null,
        "stack_info": null,
        "lineno": 8,
        "funcName": "foo",
        "created": 1519772464.291738,
        "msecs": 291.73803329467773,
        "relativeCreated": 332.90839195251465,
        "thread": 140671803118912,
        "threadName": "MainThread",
        "processName": "MainProcess",
        "process": 31481
    },
    ...
]
```
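
A minimal sketch of converting a stage's log entries back into `logging.LogRecord` objects, assuming `stage` is a test stage dict from the report:

```python
import logging

# Rebuild LogRecord objects from the serialized entries (if any).
records = [logging.makeLogRecord(entry) for entry in stage.get("log", [])]
for record in records:
    print(record.levelname, record.getMessage())
```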


### Warnings

A list of warnings that occurred during the session. (See the [pytest docs on warnings](https://docs.pytest.org/en/latest/warnings.html).)

| Key | Description |
| --- | --- |
| `filename` | File name. |
| `lineno` | Line number. |
| `message` | Warning message. |
| `when` | When the warning was captured. (`"config"`, `"collect"` or `"runtest"` as listed [here](https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_warning_captured)) |

#### Example

```python
[
    {
        "code": "C1",
        "path": "/path/to/tests/test_foo.py",
        "nodeid": "test_foo.py::TestFoo",
        "message": "cannot collect test class 'TestFoo' because it has a __init__ constructor"
    }
]
```

            
