| Field | Value |
| --- | --- |
| Name | pytest-jtr |
| Version | 1.3.0 |
| Summary | pytest plugin supporting json test report output |
| Author | Gleams API user |
| License | MIT |
| Requires Python | <3.13.0,>=3.8.1 |
| Keywords | test, pytest, json, report |
| Upload time | 2024-06-04 15:52:38 |
# Pytest jtr JSON Test Report
[![CI](https://github.com/Gleams-Machine/pytest-jtr/actions/workflows/main.yml/badge.svg)](https://github.com/Gleams-Machine/pytest-jtr/actions/workflows/main.yml)
Forked from [numirias/pytest-json-report](https://github.com/numirias/pytest-json-report).
This pytest plugin creates test reports as JSON files. This can make it easier to process test results in other applications.
It can report a summary, test details, captured output, logs, exception tracebacks and more. Additionally, you can use the available fixtures and hooks to [add metadata](#metadata) and [customize](#modifying-the-report) the report as you like.
## Table of contents
* [Installation](#installation)
* [Options](#options)
* [Usage](#usage)
  * [Metadata](#metadata)
  * [Modifying the report](#modifying-the-report)
  * [Direct invocation](#direct-invocation)
* [Format](#format)
  * [Summary](#summary)
  * [Environment](#environment)
  * [Collectors](#collectors)
  * [Tests](#tests)
  * [Test stage](#test-stage)
  * [Log](#log)
  * [Warnings](#warnings)
## Installation
```bash
pip install pytest-jtr
# or
poetry add pytest-jtr
```
## Options
| Option | Description |
| --- | --- |
| `--json-report` | Create JSON report |
| `--json-report-file=PATH` | Target path to save JSON report (use "none" to not save the report) |
| `--json-report-summary` | Just create a summary without per-test details |
| `--json-report-omit=FIELD_LIST` | List of fields to omit in the report (choose from: `collectors`, `log`, `traceback`, `streams`, `warnings`, `keywords`) |
| `--json-report-indent=LEVEL` | Pretty-print JSON with specified indentation level |
| `--json-report-verbosity=LEVEL` | Set verbosity (default is value of `--verbosity`) |
## Usage
Just run pytest with `--json-report`. The report is saved in `.report.json` by default.
```bash
$ pytest --json-report -v tests/
$ cat .report.json
{"created": 1518371686.7981803, ... "tests":[{"nodeid": "test_foo.py", "outcome": "passed", ...}, ...]}
```
If you just need to know how many tests passed or failed and don't care about details, you can produce a summary only:
```bash
$ pytest --json-report --json-report-summary
```
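Because the report is plain JSON, you can also inspect it programmatically. A minimal sketch, assuming the default `.report.json` path (outcome keys in the summary are absent when their count is zero, see [Summary](#summary)):
```python
import json

# Load the report written by `pytest --json-report` (default path assumed).
with open(".report.json") as f:
    report = json.load(f)

summary = report["summary"]
# Outcome counts are omitted when zero, so fall back to 0.
failed = summary.get("failed", 0)
print(f"{failed} of {summary['total']} tests failed (exit code {report['exitcode']})")
```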
Many fields can be omitted to keep the report size small. E.g., this will leave out keywords and stdout/stderr output:
```bash
$ pytest --json-report --json-report-omit keywords streams
```
If you don't want the report to be saved at all, specify `none` as the target file name:
```bash
$ pytest --json-report --json-report-file none
```
## Advanced usage
### Metadata
The easiest way to add your own metadata to a test item is by using the `json_metadata` [test fixture](https://docs.pytest.org/en/stable/fixture.html):
```python
def test_something(json_metadata):
    json_metadata['foo'] = {"some": "thing"}
    json_metadata['bar'] = 123
```
Or use the `pytest_json_runtest_metadata` [hook](https://docs.pytest.org/en/stable/reference.html#hooks) (in your `conftest.py`) to add metadata based on the current test run. The dict returned will automatically be merged with any existing metadata. E.g., this adds the start and stop time of each test's `call` stage:
```python
def pytest_json_runtest_metadata(item, call):
    if call.when != 'call':
        return {}
    return {'start': call.start, 'stop': call.stop}
```
Alternatively, you can add metadata using [pytest-metadata's `--metadata` switch](https://github.com/pytest-dev/pytest-metadata#additional-metadata), which adds the metadata to the report's `environment` section rather than to a specific test item. Make sure all of your metadata is JSON-serializable.
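For example, a short sketch of keeping fixture metadata JSON-serializable (the `captured_at` key is purely illustrative, not part of the plugin):
```python
from datetime import datetime, timezone

def test_timestamped(json_metadata):
    # datetime objects are not JSON-serializable, so store an ISO string instead.
    json_metadata["captured_at"] = datetime.now(timezone.utc).isoformat()
```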
### A note on hooks
If you implement a `pytest_json_*` hook while the plugin is not installed or not active (i.e. you're not running with `--json-report`), pytest doesn't recognize it and may fail with an internal error like this:
```
INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>
```
You can avoid this by declaring the hook implementation optional:
```python
import pytest

@pytest.hookimpl(optionalhook=True)
def pytest_json_runtest_metadata(item, call):
    ...
```
### Modifying the report
You can modify the entire report before it's saved by using the `pytest_json_modifyreport` hook.
Just implement the hook in your `conftest.py`, e.g.:
```python
def pytest_json_modifyreport(json_report):
    # Add a key to the report
    json_report['foo'] = 'bar'
    # Delete the summary from the report
    del json_report['summary']
```
After `pytest_sessionfinish`, the report object is also directly available via `config._json_report.report`, so you can access it from another built-in hook:
```python
def pytest_sessionfinish(session):
    report = session.config._json_report.report
    print('exited with', report['exitcode'])
```
If you *really* want to change how the result of a test stage run is turned into JSON, you can use the `pytest_json_runtest_stage` hook. It takes a [`TestReport`](https://docs.pytest.org/en/latest/reference.html#_pytest.runner.TestReport) and returns a JSON-serializable dict:
```python
def pytest_json_runtest_stage(report):
    return {'outcome': report.outcome}
```
### Direct invocation
You can use the plugin when invoking `pytest.main()` directly from code:
```python
import pytest
from pytest_jtr.plugin import JSONReport

plugin = JSONReport()
pytest.main(['--json-report-file=none', 'test_foo.py'], plugins=[plugin])
```
You can then access the `report` object:
```python
print(plugin.report)
```
And save the report manually:
```python
plugin.save_report('/tmp/my_report.json')
```
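Putting these pieces together, a minimal sketch of a programmatic run (the `tests/` path is just an illustration):
```python
import pytest
from pytest_jtr.plugin import JSONReport

plugin = JSONReport()
# Run without writing a file, then work with the in-memory report.
exit_code = pytest.main(["--json-report-file=none", "tests/"], plugins=[plugin])

print("pytest exit code:", exit_code)
print("summary:", plugin.report["summary"])
plugin.save_report("/tmp/my_report.json")
```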
## Format
The JSON report contains metadata of the session, a summary, collectors, tests and warnings. You can find a sample report in [`sample_report.json`](sample_report.json).
| Key | Description |
| --- | --- |
| `created` | Report creation date. (Unix time) |
| `duration` | Session duration in seconds. |
| `exitcode` | Process exit code as listed [in the pytest docs](https://docs.pytest.org/en/latest/usage.html#possible-exit-codes). The exit code is a quick way to tell if any tests failed, an internal error occurred, etc. |
| `root` | Absolute root path from which the session was started. |
| `environment` | [Environment](#environment) entry. |
| `summary` | [Summary](#summary) entry. |
| `collectors` | [Collectors](#collectors) entry. (absent if `--json-report-summary` or if no collectors) |
| `tests` | [Tests](#tests) entry. (absent if `--json-report-summary`) |
| `warnings` | [Warnings](#warnings) entry. (absent if `--json-report-summary` or if no warnings) |
#### Example
```python
{
"created": 1518371686.7981803,
"duration": 0.1235666275024414,
"exitcode": 1,
"root": "/path/to/tests",
"environment": ENVIRONMENT,
"summary": SUMMARY,
"collectors": COLLECTORS,
"tests": TESTS,
"warnings": WARNINGS,
}
```
### Summary
Number of outcomes per category and the total number of test items.
| Key | Description |
| --- | --- |
| `collected` | Total number of tests collected. |
| `total` | Total number of tests run. |
| `deselected` | Total number of tests deselected. (absent if number is 0) |
| `<outcome>` | Number of tests with that outcome. (absent if number is 0) |
#### Example
```python
{
"collected": 10,
"passed": 2,
"failed": 3,
"xfailed": 1,
"xpassed": 1,
"error": 2,
"skipped": 1,
"total": 10
}
```
### Environment
The environment section is provided by [pytest-metadata](https://github.com/pytest-dev/pytest-metadata). All metadata given by that plugin will be added here, so you need to make sure it is JSON-serializable.
#### Example
```python
{
"Python": "3.6.4",
"Platform": "Linux-4.56.78-9-ARCH-x86_64-with-arch",
"Packages": {
"pytest": "3.4.0",
"py": "1.5.2",
"pluggy": "0.6.0"
},
"Plugins": {
"json-report": "0.4.1",
"xdist": "1.22.0",
"metadata": "1.5.1",
"forked": "0.2",
"cov": "2.5.1"
},
"foo": "bar", # Custom metadata entry passed via pytest-metadata
}
```
### Collectors
A list of collector nodes. These are useful to check what tests are available without running them, or to debug an error during test discovery.
| Key | Description |
| --- | --- |
| `nodeid` | ID of the collector node. ([See docs](https://docs.pytest.org/en/latest/example/markers.html#node-id)) The root node has an empty node ID. |
| `outcome` | Outcome of the collection. (Not the test outcome!) |
| `result` | Nodes collected by the collector. |
| `longrepr` | Representation of the collection error. (absent if no error occurred) |
The `result` is a list of the collected nodes:
| Key | Description |
| --- | --- |
| `nodeid` | ID of the node. |
| `type` | Type of the collected node. |
| `lineno` | Line number. (absent if not applicable) |
| `deselected` | `true` if the test is deselected. (absent if not deselected) |
#### Example
```python
[
{
"nodeid": "",
"outcome": "passed",
"result": [
{
"nodeid": "test_foo.py",
"type": "Module"
}
]
},
{
"nodeid": "test_foo.py",
"outcome": "passed",
"result": [
{
"nodeid": "test_foo.py::test_pass",
"type": "Function",
"lineno": 24,
"deselected": true
},
...
]
},
{
"nodeid": "test_bar.py",
"outcome": "failed",
"result": [],
"longrepr": "/usr/lib/python3.6 ... invalid syntax"
},
...
]
```
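Since collection errors end up in `longrepr`, a short sketch for surfacing them from a saved report (assumes the default `.report.json` path):
```python
import json

with open(".report.json") as f:
    report = json.load(f)

# `collectors` is absent with --json-report-summary, hence the default.
for collector in report.get("collectors", []):
    if collector["outcome"] != "passed":
        print(collector["nodeid"], "->", collector.get("longrepr", "no longrepr"))
```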
### Tests
A list of test nodes. Each completed test stage produces a stage object (`setup`, `call`, `teardown`) with its own `outcome`.
| Key | Description |
| --- | --- |
| `nodeid` | ID of the test node. |
| `lineno` | Line number where the test starts. |
| `keywords` | List of keywords and markers associated with the test. |
| `outcome` | Outcome of the test run. |
| `{setup, call, teardown}` | [Test stage](#test-stage) entry. To find the error in a failed test you need to check all stages. (absent if stage didn't run) |
| `metadata` | [Metadata](#metadata) item. (absent if no metadata) |
#### Example
```python
[
{
"nodeid": "test_foo.py::test_fail",
"lineno": 50,
"keywords": [
"test_fail",
"test_foo.py",
"test_foo0"
],
"outcome": "failed",
"setup": TEST_STAGE,
"call": TEST_STAGE,
"teardown": TEST_STAGE,
"metadata": {
"foo": "bar",
}
},
...
]
```
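Because the failing stage can be `setup`, `call`, or `teardown`, a sketch that locates the crash of each failed test has to check all three (default report path assumed):
```python
import json

with open(".report.json") as f:
    report = json.load(f)

for test in report.get("tests", []):
    # A stage entry is absent if that stage didn't run, so guard with .get().
    for stage_name in ("setup", "call", "teardown"):
        stage = test.get(stage_name)
        if stage and stage.get("outcome") == "failed":
            crash = stage.get("crash", {})
            print(test["nodeid"], f"failed in {stage_name}:", crash.get("message", ""))
```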
### Test stage
A test stage item.
| Key | Description |
| --- | --- |
| `duration` | Duration of the test stage in seconds. |
| `outcome` | Outcome of the test stage. (can be different from the overall test outcome) |
| `crash` | Crash entry. (absent if no error occurred) |
| `traceback` | List of traceback entries. (absent if no error occurred; affected by `--tb` option) |
| `stdout` | Standard output. (absent if none available) |
| `stderr` | Standard error. (absent if none available) |
| `log` | [Log](#log) entry. (absent if none available) |
| `longrepr` | Representation of the error. (absent if no error occurred; format affected by `--tb` option) |
#### Example
```python
{
"duration": 0.00018835067749023438,
"outcome": "failed",
"crash": {
"path": "/path/to/tests/test_foo.py",
"lineno": 54,
"message": "TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'"
},
"traceback": [
{
"path": "test_foo.py",
"lineno": 65,
"message": ""
},
{
"path": "test_foo.py",
"lineno": 63,
"message": "in foo"
},
{
"path": "test_foo.py",
"lineno": 63,
"message": "in <listcomp>"
},
{
"path": "test_foo.py",
"lineno": 54,
"message": "TypeError"
}
],
"stdout": "foo\nbar\n",
"stderr": "baz\n",
"log": LOG,
"longrepr": "def test_fail_nested():\n ..."
}
```
### Log
A list of log records. The fields of a log record are the [`logging.LogRecord` attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes), with the exception that the fields `exc_info` and `args` are always empty and `msg` contains the formatted log message.
You can apply [`logging.makeLogRecord()`](https://docs.python.org/3/library/logging.html#logging.makeLogRecord) on a log record to convert it back to a `logging.LogRecord` object.
#### Example
```python
[
{
"name": "root",
"msg": "This is a warning.",
"args": null,
"levelname": "WARNING",
"levelno": 30,
"pathname": "/path/to/tests/test_foo.py",
"filename": "test_foo.py",
"module": "test_foo",
"exc_info": null,
"exc_text": null,
"stack_info": null,
"lineno": 8,
"funcName": "foo",
"created": 1519772464.291738,
"msecs": 291.73803329467773,
"relativeCreated": 332.90839195251465,
"thread": 140671803118912,
"threadName": "MainThread",
"processName": "MainProcess",
"process": 31481
},
...
]
```
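As noted above, a record like the one in this example can be rebuilt with `logging.makeLogRecord()`. A minimal sketch, assuming the report's first test captured at least one record in its `call` stage:
```python
import json
import logging

with open(".report.json") as f:
    report = json.load(f)

# Pick the first captured record of the first test's "call" stage (assumed to exist).
entry = report["tests"][0]["call"]["log"][0]
record = logging.makeLogRecord(entry)
print(record.levelname, record.getMessage())
```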
### Warnings
A list of warnings that occurred during the session. (See the [pytest docs on warnings](https://docs.pytest.org/en/latest/warnings.html).)
| Key | Description |
| --- | --- |
| `filename` | File name. |
| `lineno` | Line number. |
| `message` | Warning message. |
| `when` | When the warning was captured. (`"config"`, `"collect"` or `"runtest"` as listed [here](https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_warning_captured)) |
#### Example
```python
[
{
"code": "C1",
"path": "/path/to/tests/test_foo.py",
"nodeid": "test_foo.py::TestFoo",
"message": "cannot collect test class 'TestFoo' because it has a __init__ constructor"
}
]
```
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "pytest-jtr",
"maintainer": null,
"docs_url": null,
"requires_python": "<3.13.0,>=3.8.1",
"maintainer_email": null,
"keywords": "test, pytest, json, report",
"author": "Gleams API user",
"author_email": "Stephen.Swannell+ghapi@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/85/41/36c59ebb53b9efdfbf949cbb433ad5cde5e97736eeb99f434d8db2320591/pytest_jtr-1.3.0.tar.gz",
"platform": null,
"description": "# Pytest jtr JSON Test Report\n\n[![CI](https://github.com/Gleams-Machine/pytest-jtr/actions/workflows/main.yml/badge.svg)](https://github.com/Gleams-Machine/pytest-jtr/actions/workflows/main.yml)\n\nForked from [numirias/pytest-json-report](https://github.com/numirias/pytest-json-report)\n\nThis pytest plugin creates test reports as JSON files. This can make it easier to process test results in other applications.\n\nIt can report a summary, test details, captured output, logs, exception tracebacks and more. Additionally, you can use the available fixtures and hooks to [add metadata](#metadata) and [customize](#modifying-the-report) the report as you like.\n\n## Table of contents\n\n* [Installation](#installation)\n* [Options](#options)\n* [Usage](#usage)\n * [Metadata](#metadata)\n * [Modifying the report](#modifying-the-report)\n * [Direct invocation](#direct-invocation)\n* [Format](#format)\n * [Summary](#summary)\n * [Environment](#environment)\n * [Collectors](#collectors)\n * [Tests](#tests)\n * [Test stage](#test-stage)\n * [Log](#log)\n * [Warnings](#warnings)\n* [Related tools](#related-tools)\n\n## Installation\n\n```\n\npip install pytest-jtr\n\n# or\n\npoetry add pytest-jtr\n\n```\n\n## Options\n\n| Option | Description |\n| --- | --- |\n| `--json-report` | Create JSON report |\n| `--json-report-file=PATH` | Target path to save JSON report (use \"none\" to not save the report) |\n| `--json-report-summary` | Just create a summary without per-test details |\n| `--json-report-omit=FIELD_LIST` | List of fields to omit in the report (choose from: `collectors`, `log`, `traceback`, `streams`, `warnings`, `keywords`) |\n| `--json-report-indent=LEVEL` | Pretty-print JSON with specified indentation level |\n| `--json-report-verbosity=LEVEL` | Set verbosity (default is value of `--verbosity`) |\n\n## Usage\n\nJust run pytest with `--json-report`. The report is saved in `.report.json` by default.\n\n```bash\n$ pytest --json-report -v tests/\n$ cat .report.json\n{\"created\": 1518371686.7981803, ... \"tests\":[{\"nodeid\": \"test_foo.py\", \"outcome\": \"passed\", ...}, ...]}\n```\n\nIf you just need to know how many tests passed or failed and don't care about details, you can produce a summary only:\n\n```bash\n$ pytest --json-report --json-report-summary\n```\n\nMany fields can be omitted to keep the report size small. E.g., this will leave out keywords and stdout/stderr output:\n\n```bash\n$ pytest --json-report --json-report-omit keywords streams\n```\n\nIf you don't like to have the report saved, you can specify `none` as the target file name:\n\n```bash\n$ pytest --json-report --json-report-file none\n```\n\n## Advanced usage\n\n### Metadata\n\nThe easiest way to add your own metadata to a test item is by using the `json_metadata` [test fixture](https://docs.pytest.org/en/stable/fixture.html):\n\n```python\ndef test_something(json_metadata):\n json_metadata['foo'] = {\"some\": \"thing\"}\n json_metadata['bar'] = 123\n```\n\nOr use the `pytest_json_runtest_metadata` [hook](https://docs.pytest.org/en/stable/reference.html#hooks) (in your `conftest.py`) to add metadata based on the current test run. The dict returned will automatically be merged with any existing metadata. 
E.g., this adds the start and stop time of each test's `call` stage:\n\n```python\ndef pytest_json_runtest_metadata(item, call):\n if call.when != 'call':\n return {}\n return {'start': call.start, 'stop': call.stop}\n```\n\nAlso, you could add metadata using [pytest-metadata's `--metadata` switch](https://github.com/pytest-dev/pytest-metadata#additional-metadata) which will add metadata to the report's `environment` section, but not to a specific test item. You need to make sure all your metadata is JSON-serializable.\n\n### A note on hooks\n\nIf you're using a `pytest_json_*` hook although the plugin is not installed or not active (not using `--json-report`), pytest doesn't recognize it and may fail with an internal error like this:\n```\nINTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>\n```\nYou can avoid this by declaring the hook implementation optional:\n\n```python\nimport pytest\n@pytest.hookimpl(optionalhook=True)\ndef pytest_json_runtest_metadata(item, call):\n ...\n```\n\n### Modifying the report\n\nYou can modify the entire report before it's saved by using the `pytest_json_modifyreport` hook.\n\nJust implement the hook in your `conftest.py`, e.g.:\n\n```python\ndef pytest_json_modifyreport(json_report):\n # Add a key to the report\n json_report['foo'] = 'bar'\n # Delete the summary from the report\n del json_report['summary']\n```\n\nAfter `pytest_sessionfinish`, the report object is also directly available to script via `config._json_report.report`. So you can access it using some built-in hook:\n\n```python\ndef pytest_sessionfinish(session):\n report = session.config._json_report.report\n print('exited with', report['exitcode'])\n```\n\nIf you *really* want to change how the result of a test stage run is turned into JSON, you can use the `pytest_json_runtest_stage` hook. It takes a [`TestReport`](https://docs.pytest.org/en/latest/reference.html#_pytest.runner.TestReport) and returns a JSON-serializable dict:\n\n```python\ndef pytest_json_runtest_stage(report):\n return {'outcome': report.outcome}\n```\n\n### Direct invocation\n\nYou can use the plugin when invoking `pytest.main()` directly from code:\n\n```python\nimport pytest\nfrom pytest_jtr.plugin import JSONReport\n\nplugin = JSONReport()\npytest.main(['--json-report-file=none', 'test_foo.py'], plugins=[plugin])\n```\n\nYou can then access the `report` object:\n\n```python\nprint(plugin.report)\n```\n\nAnd save the report manually:\n\n```python\nplugin.save_report('/tmp/my_report.json')\n```\n\n\n## Format\n\nThe JSON report contains metadata of the session, a summary, collectors, tests and warnings. You can find a sample report in [`sample_report.json`](sample_report.json).\n\n| Key | Description |\n| --- | --- |\n| `created` | Report creation date. (Unix time) |\n| `duration` | Session duration in seconds. |\n| `exitcode` | Process exit code as listed [in the pytest docs](https://docs.pytest.org/en/latest/usage.html#possible-exit-codes). The exit code is a quick way to tell if any tests failed, an internal error occurred, etc. |\n| `root` | Absolute root path from which the session was started. |\n| `environment` | [Environment](#environment) entry. |\n| `summary` | [Summary](#summary) entry. |\n| `collectors` | [Collectors](#collectors) entry. (absent if `--json-report-summary` or if no collectors) |\n| `tests` | [Tests](#tests) entry. (absent if `--json-report-summary`) |\n| `warnings` | [Warnings](#warnings) entry. 
(absent if `--json-report-summary` or if no warnings) |\n\n#### Example\n\n```python\n{\n \"created\": 1518371686.7981803,\n \"duration\": 0.1235666275024414,\n \"exitcode\": 1,\n \"root\": \"/path/to/tests\",\n \"environment\": ENVIRONMENT,\n \"summary\": SUMMARY,\n \"collectors\": COLLECTORS,\n \"tests\": TESTS,\n \"warnings\": WARNINGS,\n}\n```\n\n### Summary\n\nNumber of outcomes per category and the total number of test items.\n\n| Key | Description |\n| --- | --- |\n| `collected` | Total number of tests collected. |\n| `total` | Total number of tests run. |\n| `deselected` | Total number of tests deselected. (absent if number is 0) |\n| `<outcome>` | Number of tests with that outcome. (absent if number is 0) |\n\n#### Example\n\n```python\n{\n \"collected\": 10,\n \"passed\": 2,\n \"failed\": 3,\n \"xfailed\": 1,\n \"xpassed\": 1,\n \"error\": 2,\n \"skipped\": 1,\n \"total\": 10\n}\n```\n\n### Environment\n\nThe environment section is provided by [pytest-metadata](https://github.com/pytest-dev/pytest-metadata). All metadata given by that plugin will be added here, so you need to make sure it is JSON-serializable.\n\n#### Example\n\n```python\n{\n \"Python\": \"3.6.4\",\n \"Platform\": \"Linux-4.56.78-9-ARCH-x86_64-with-arch\",\n \"Packages\": {\n \"pytest\": \"3.4.0\",\n \"py\": \"1.5.2\",\n \"pluggy\": \"0.6.0\"\n },\n \"Plugins\": {\n \"json-report\": \"0.4.1\",\n \"xdist\": \"1.22.0\",\n \"metadata\": \"1.5.1\",\n \"forked\": \"0.2\",\n \"cov\": \"2.5.1\"\n },\n \"foo\": \"bar\", # Custom metadata entry passed via pytest-metadata\n}\n```\n\n### Collectors\n\nA list of collector nodes. These are useful to check what tests are available without running them, or to debug an error during test discovery.\n\n| Key | Description |\n| --- | --- |\n| `nodeid` | ID of the collector node. ([See docs](https://docs.pytest.org/en/latest/example/markers.html#node-id)) The root node has an empty node ID. |\n| `outcome` | Outcome of the collection. (Not the test outcome!) |\n| `result` | Nodes collected by the collector. |\n| `longrepr` | Representation of the collection error. (absent if no error occurred) |\n\nThe `result` is a list of the collected nodes:\n\n| Key | Description |\n| --- | --- |\n| `nodeid` | ID of the node. |\n| `type` | Type of the collected node. |\n| `lineno` | Line number. (absent if not applicable) |\n| `deselected` | `true` if the test is deselected. (absent if not deselected) |\n\n#### Example\n\n```python\n[\n {\n \"nodeid\": \"\",\n \"outcome\": \"passed\",\n \"result\": [\n {\n \"nodeid\": \"test_foo.py\",\n \"type\": \"Module\"\n }\n ]\n },\n {\n \"nodeid\": \"test_foo.py\",\n \"outcome\": \"passed\",\n \"result\": [\n {\n \"nodeid\": \"test_foo.py::test_pass\",\n \"type\": \"Function\",\n \"lineno\": 24,\n \"deselected\": true\n },\n ...\n ]\n },\n {\n \"nodeid\": \"test_bar.py\",\n \"outcome\": \"failed\",\n \"result\": [],\n \"longrepr\": \"/usr/lib/python3.6 ... invalid syntax\"\n },\n ...\n]\n```\n\n### Tests\n\nA list of test nodes. Each completed test stage produces a stage object (`setup`, `call`, `teardown`) with its own `outcome`.\n\n| Key | Description |\n| --- | --- |\n| `nodeid` | ID of the test node. |\n| `lineno` | Line number where the test starts. |\n| `keywords` | List of keywords and markers associated with the test. |\n| `outcome` | Outcome of the test run. |\n| `{setup, call, teardown}` | [Test stage](#test-stage) entry. To find the error in a failed test you need to check all stages. 
(absent if stage didn't run) |\n| `metadata` | [Metadata](#metadata) item. (absent if no metadata) |\n\n#### Example\n\n```python\n[\n {\n \"nodeid\": \"test_foo.py::test_fail\",\n \"lineno\": 50,\n \"keywords\": [\n \"test_fail\",\n \"test_foo.py\",\n \"test_foo0\"\n ],\n \"outcome\": \"failed\",\n \"setup\": TEST_STAGE,\n \"call\": TEST_STAGE,\n \"teardown\": TEST_STAGE,\n \"metadata\": {\n \"foo\": \"bar\",\n }\n },\n ...\n]\n```\n\n\n### Test stage\n\nA test stage item.\n\n| Key | Description |\n| --- | --- |\n| `duration` | Duration of the test stage in seconds. |\n| `outcome` | Outcome of the test stage. (can be different from the overall test outcome) |\n| `crash` | Crash entry. (absent if no error occurred) |\n| `traceback` | List of traceback entries. (absent if no error occurred; affected by `--tb` option) |\n| `stdout` | Standard output. (absent if none available) |\n| `stderr` | Standard error. (absent if none available) |\n| `log` | [Log](#log) entry. (absent if none available) |\n| `longrepr` | Representation of the error. (absent if no error occurred; format affected by `--tb` option) |\n\n#### Example\n\n```python\n{\n \"duration\": 0.00018835067749023438,\n \"outcome\": \"failed\",\n \"crash\": {\n \"path\": \"/path/to/tests/test_foo.py\",\n \"lineno\": 54,\n \"message\": \"TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'\"\n },\n \"traceback\": [\n {\n \"path\": \"test_foo.py\",\n \"lineno\": 65,\n \"message\": \"\"\n },\n {\n \"path\": \"test_foo.py\",\n \"lineno\": 63,\n \"message\": \"in foo\"\n },\n {\n \"path\": \"test_foo.py\",\n \"lineno\": 63,\n \"message\": \"in <listcomp>\"\n },\n {\n \"path\": \"test_foo.py\",\n \"lineno\": 54,\n \"message\": \"TypeError\"\n }\n ],\n \"stdout\": \"foo\\nbar\\n\",\n \"stderr\": \"baz\\n\",\n \"log\": LOG,\n \"longrepr\": \"def test_fail_nested():\\n ...\"\n}\n```\n\n### Log\n\nA list of log records. The fields of a log record are the [`logging.LogRecord` attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes), with the exception that the fields `exc_info` and `args` are always empty and `msg` contains the formatted log message.\n\nYou can apply [`logging.makeLogRecord()`](https://docs.python.org/3/library/logging.html#logging.makeLogRecord) on a log record to convert it back to a `logging.LogRecord` object.\n\n#### Example\n\n```python\n[\n {\n \"name\": \"root\",\n \"msg\": \"This is a warning.\",\n \"args\": null,\n \"levelname\": \"WARNING\",\n \"levelno\": 30,\n \"pathname\": \"/path/to/tests/test_foo.py\",\n \"filename\": \"test_foo.py\",\n \"module\": \"test_foo\",\n \"exc_info\": null,\n \"exc_text\": null,\n \"stack_info\": null,\n \"lineno\": 8,\n \"funcName\": \"foo\",\n \"created\": 1519772464.291738,\n \"msecs\": 291.73803329467773,\n \"relativeCreated\": 332.90839195251465,\n \"thread\": 140671803118912,\n \"threadName\": \"MainThread\",\n \"processName\": \"MainProcess\",\n \"process\": 31481\n },\n ...\n]\n```\n\n\n### Warnings\n\nA list of warnings that occurred during the session. (See the [pytest docs on warnings](https://docs.pytest.org/en/latest/warnings.html).)\n\n| Key | Description |\n| --- | --- |\n| `filename` | File name. |\n| `lineno` | Line number. |\n| `message` | Warning message. |\n| `when` | When the warning was captured. 
(`\"config\"`, `\"collect\"` or `\"runtest\"` as listed [here](https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_warning_captured)) |\n\n#### Example\n\n```python\n[\n {\n \"code\": \"C1\",\n \"path\": \"/path/to/tests/test_foo.py\",\n \"nodeid\": \"test_foo.py::TestFoo\",\n \"message\": \"cannot collect test class 'TestFoo' because it has a __init__ constructor\"\n }\n]\n```\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "pytest plugin supporting json test report output",
"version": "1.3.0",
"project_urls": null,
"split_keywords": [
"test",
" pytest",
" json",
" report"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "0f65f2234cd8ce64facc04a0badbd1b7d77999a745db20b4609bf4f20e62c203",
"md5": "7f5f7fc6a06cc83ce46f0909e108601d",
"sha256": "128db435c5d347d0a7e2daaeb8ff59eafb1a1c25408b455db86e25a52b68cc30"
},
"downloads": -1,
"filename": "pytest_jtr-1.3.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "7f5f7fc6a06cc83ce46f0909e108601d",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<3.13.0,>=3.8.1",
"size": 12845,
"upload_time": "2024-06-04T15:52:36",
"upload_time_iso_8601": "2024-06-04T15:52:36.409306Z",
"url": "https://files.pythonhosted.org/packages/0f/65/f2234cd8ce64facc04a0badbd1b7d77999a745db20b4609bf4f20e62c203/pytest_jtr-1.3.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "854136c59ebb53b9efdfbf949cbb433ad5cde5e97736eeb99f434d8db2320591",
"md5": "b37a88f6fe61d0aebd681a7ecfff5ccf",
"sha256": "35607c97f4aa28f1daa06cb608076ef22e7188335fdc36f2d1b9a26017c89718"
},
"downloads": -1,
"filename": "pytest_jtr-1.3.0.tar.gz",
"has_sig": false,
"md5_digest": "b37a88f6fe61d0aebd681a7ecfff5ccf",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<3.13.0,>=3.8.1",
"size": 17555,
"upload_time": "2024-06-04T15:52:38",
"upload_time_iso_8601": "2024-06-04T15:52:38.389429Z",
"url": "https://files.pythonhosted.org/packages/85/41/36c59ebb53b9efdfbf949cbb433ad5cde5e97736eeb99f434d8db2320591/pytest_jtr-1.3.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-06-04 15:52:38",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "pytest-jtr"
}
```