# pytest-elk-reporter
[![PyPI version](https://img.shields.io/pypi/v/pytest-elk-reporter.svg?style=flat)](https://pypi.org/project/pytest-elk-reporter)
[![Python versions](https://img.shields.io/pypi/pyversions/pytest-elk-reporter.svg?style=flat)](https://pypi.org/project/pytest-elk-reporter)
[![.github/workflows/tests.yml](https://github.com/fruch/pytest-elk-reporter/workflows/.github/workflows/tests.yml/badge.svg)](https://github.com/fruch/pytest-elk-reporter/actions?query=branch%3Amaster)
[![Libraries.io dependency status for GitHub repo](https://img.shields.io/librariesio/github/fruch/pytest-elk-reporter.svg?style=flat)](https://libraries.io/github/fruch/pytest-elk-reporter)
[![Using Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/python/black)
[![Codecov Reports](https://codecov.io/gh/fruch/pytest-elk-reporter/branch/master/graph/badge.svg)](https://codecov.io/gh/fruch/pytest-elk-reporter)
A plugin to send pytest test results to an [ELK] stack, along with extra context data
## Features
* Report each test result to Elasticsearch as it finishes
* Automatically append contextual data to each test:
  * git information such as `branch`, `last commit`, and more
  * all CI environment variables:
    * Jenkins
    * Travis CI
    * Circle CI
    * GitHub Actions
  * username, if available
* Report a test summary to Elasticsearch for each session, with all the context data
* Append any user data to the context sent to Elasticsearch
## Requirements
* tests written with [pytest]
## Installation
You can install "pytest-elk-reporter" via [pip] from [PyPI]:
```bash
pip install pytest-elk-reporter
```
### Elasticsearch configuration
The `auto_create_index` setting needs to be enabled for the indexes that are going to be used,
since the plugin doesn't create the indexes itself. Note that enabling it is the Elasticsearch default:
```bash
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "action.auto_create_index": "true"
  }
}
'
```
For more info on this Elasticsearch feature, see the [index documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#index-creation).
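If you'd rather apply the setting from Python, for example in a provisioning script, here's an equivalent sketch using `requests` (the host and port are assumptions; adjust them to your deployment):

```python
import requests

# same cluster-settings call as the curl command above
resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"action.auto_create_index": "true"}},
)
resp.raise_for_status()  # fail loudly if the cluster rejected the change
```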
## Usage
### Run and configure from command line
```bash
pytest --es-address 127.0.0.1:9200
# or if you need user/password to authenticate
pytest --es-address my-elk-server.io:9200 --es-username fruch --es-password 'passwordsarenicetohave'
```
### Configure from code (ideally in `conftest.py`)
```python
from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        # TODO: fetch the credentials programmatically in a more secure fashion,
        # maybe from AWS Secrets Manager or the like...
        # or put them in plain text in the code... what could ever go wrong?
        plugin.es_address = "my-elk-server.io:9200"
        plugin.es_user = "fruch"
        plugin.es_password = "passwordsarenicetohave"
        plugin.es_index_name = "test_data"
```
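As a minimal sketch of that TODO, you could read the credentials from environment variables instead of hard-coding them (the variable names below are my own choice, not something the plugin defines):

```python
import os

from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        # ES_ADDRESS / ES_USER / ES_PASSWORD are arbitrary names;
        # export whatever your CI secret store provides
        plugin.es_address = os.environ.get("ES_ADDRESS", "127.0.0.1:9200")
        plugin.es_user = os.environ.get("ES_USER", "")
        plugin.es_password = os.environ.get("ES_PASSWORD", "")
        plugin.es_index_name = "test_data"
```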
### Configure from pytest ini file
```ini
# put this in pytest.ini / tox.ini / setup.cfg
[pytest]
es_address = my-elk-server.io:9200
es_user = fruch
es_password = passwordsarenicetohave
es_index_name = test_data
```
See the [pytest docs](https://docs.pytest.org/en/latest/customize.html)
for more on how to configure pytest using `.ini` files.
### Collect context data for the whole session
In this example, I'll be able to build a dashboard for each version:
```python
import pytest

@pytest.fixture(scope="session", autouse=True)
def report_formal_version_to_elk(request):
    """
    Append my own session data, for example which version of the code under test is used
    """
    # TODO: set this programmatically to the version of the code under test...
    my_data = {"formal_version": "1.0.0-rc2"}

    elk = request.config.pluginmanager.get_plugin("elk-reporter-runtime")
    elk.session_data.update(**my_data)
```
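If the code under test is an installed Python package, one way to fill in that TODO is via `importlib.metadata` (the package name below is hypothetical):

```python
from importlib.metadata import version

import pytest

@pytest.fixture(scope="session", autouse=True)
def report_formal_version_to_elk(request):
    # "my-package" is a placeholder; use the real distribution name of the code under test
    my_data = {"formal_version": version("my-package")}

    elk = request.config.pluginmanager.get_plugin("elk-reporter-runtime")
    elk.session_data.update(**my_data)
```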
### Collect data for specific tests
```python
import requests

def test_my_service_and_collect_timings(request, elk_reporter):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200

    elk_reporter.append_test_data(request, {"do_something_response_time": response.elapsed.total_seconds()})
    # now a dashboard showing response time by version should be quite easy
    # and yeah, it's not exactly a real usable metric, but it's just one example...
```
Or via the `record_property` built-in fixture, which is normally used to collect data into junit.xml reports:
```python
import requests

def test_my_service_and_collect_timings(record_property):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200

    record_property("do_something_response_time", response.elapsed.total_seconds())
```
## Split tests based on their duration histories
One cool thing you can do, now that you have a history of the tests,
is split the run based on each test's actual runtime when passing.
For long-running integration tests, this is priceless.

In this example, we're going to split the run into slices of at most 4 minutes.
Any test that doesn't have history information is assumed to take 60 seconds.
```bash
# pytest --collect-only --es-splice --es-max-splice-time=4 --es-default-test-time=60
...
0: 0:04:00 - 3 - ['test_history_slices.py::test_should_pass_1', 'test_history_slices.py::test_should_pass_2', 'test_history_slices.py::test_should_pass_3']
1: 0:04:00 - 2 - ['test_history_slices.py::test_with_history_data', 'test_history_slices.py::test_that_failed']

...

# cat include000.txt
test_history_slices.py::test_should_pass_1
test_history_slices.py::test_should_pass_2
test_history_slices.py::test_should_pass_3

# cat include001.txt
test_history_slices.py::test_with_history_data
test_history_slices.py::test_that_failed

### now we can run each slice on its own machine

### on machine1
# pytest $(cat include000.txt)

### on machine2
# pytest $(cat include001.txt)
```
## Contributing
Contributions are very welcome. Tests can be run with [`tox`][tox]. Please ensure
the coverage at least stays the same before you submit a pull request.
## License
Distributed under the terms of the [MIT][MIT] license, "pytest-elk-reporter" is free and open source software.
## Issues
If you encounter any problems, please [file an issue] along with a detailed description.
## Thanks
This [pytest] plugin was generated with [Cookiecutter] along with [@hackebrot]'s [cookiecutter-pytest-plugin] template.
[ELK]: https://www.elastic.co/elk-stack
[Cookiecutter]: https://github.com/audreyr/cookiecutter
[@hackebrot]: https://github.com/hackebrot
[MIT]: http://opensource.org/licenses/MIT
[cookiecutter-pytest-plugin]: https://github.com/pytest-dev/cookiecutter-pytest-plugin
[file an issue]: https://github.com/fruch/pytest-elk-reporter/issues
[pytest]: https://github.com/pytest-dev/pytest
[tox]: https://tox.readthedocs.io/en/latest/
[pip]: https://pypi.org/project/pip/
[PyPI]: https://pypi.org/