# pytest-performancetotal
With this plugin for [pytest](https://github.com/pytest-dev/pytest), which complements the [playwright-pytest](https://github.com/microsoft/playwright-pytest) integration, you can seamlessly incorporate performance analysis into your test flows. It’s designed to work with UI interactions, API calls, or a combination of both, providing a straightforward method for measuring response times and pinpointing potential performance issues within your application. By leveraging this data, you can make strategic decisions to optimize and enhance your application’s performance. For insights into the original concept and additional details, refer to the [article](https://www.linkedin.com/pulse/elevating-your-playwright-tests-plugin-tzur-paldi-phd) on the Node.js version of this plugin.
## Installation
```bash
$ pip install pytest-performancetotal
```
## Usage
To use pytest-performancetotal, simply add the **performancetotal** fixture to the test method. This will include the performance functionality in your test. No further setup is required. Here's an example:
```python
import time

import pytest

@pytest.mark.parametrize("iteration", [1, 2, 3])
def test_features(performancetotal, iteration):
    performancetotal.sample_start("feature1")
    time.sleep(1)
    performancetotal.sample_end("feature1")

    performancetotal.sample_start("feature2")
    time.sleep(0.5)
    performancetotal.sample_end("feature2")
```
You can also get the immediate time span for a single sample inside a test:
```python
feature1_timespan = performancetotal.get_sample_time("feature1")
```
Be aware that `get_sample_time` returns a single measurement with no statistical analysis.
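Because the value is returned directly, you can use it to enforce a simple in-test budget. The following is a minimal sketch; the threshold and the unit of the returned value are assumptions here, so adjust them to what your results actually report:
```python
def test_feature1_within_budget(performancetotal):
    performancetotal.sample_start("feature1")
    # ... exercise the feature under test here ...
    performancetotal.sample_end("feature1")

    feature1_timespan = performancetotal.get_sample_time("feature1")
    # Hypothetical budget; the unit depends on the plugin's output.
    assert feature1_timespan < 2000, f"feature1 took {feature1_timespan}"
```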
To use type hints, follow this example:
```python
from pytest_performancetotal.performance import Performance

def test_features(performancetotal: Performance, iteration):
    # ... your test code here
```
### Options
#### __--performance-noappend__
To disable appending new results to the existing file and start fresh on every run, use:
```bash
pytest --performance-noappend
```
> **⚠️ Caution:**
>
> This action will delete all your performance data permanently. Ensure that you have a backup before proceeding.
#### __--performance-drop-failed-results__
To drop results for failed tests, use:
```bash
pytest --performance-drop-failed-results
```
#### __--performance-recent-days__
To set the number of days to consider for performance analysis, use:
`pytest --performance-recent-days=7` or use a fraction of a day, for example: `pytest --performance-recent-days=0.5`
#### __--performance-results-dir__
Set a custom results directory name/path:
On Windows:
```bash
pytest --performance-results-dir=results\01012025
```
or
```bash
pytest --performance-results-dir=myCustomDir
```
On Linux:
```bash
pytest --performance-results-dir=results/01012025
```
or
```bash
pytest --performance-results-dir=myCustomDir
```
#### __--performance-results-file__
Set a custom results file name:
```bash
pytest --performance-results-file=myCustomFile
```
### Configuring Logging in pytest.ini
This plugin uses the native Python logging module to provide detailed logs during its execution. To ensure you can see these logs during testing, proper configuration is needed. The following instructions will guide you on how to configure pytest to output log messages to the console. This setup is particularly useful for debugging and tracking the behavior of your code.
Steps to configure logging:

Create or update `pytest.ini`: if you do not already have a `pytest.ini` file, create one in the root directory of your project; if you already have one, open it for editing.

Then add a configuration such as the following:
```no-highlight
[pytest]
log_cli = true
# choose DEBUG, INFO, WARNING, ERROR, or CRITICAL
log_cli_level = DEBUG
log_cli_format = %(asctime)s - %(name)s - %(levelname)s - %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
```
__log_cli__: Enables logging to the console.
__log_cli_level__: Sets the logging level. You can choose from DEBUG, INFO, WARNING, ERROR, or CRITICAL.
__log_cli_format__: Defines the format of the log messages.
__log_cli_date_format__: Specifies the date format used in log messages.
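With this configuration in place, your own tests can emit messages through Python's standard `logging` module and they will appear in the console alongside the plugin's output. A minimal sketch:
```python
import logging

logger = logging.getLogger(__name__)

def test_features_with_logging(performancetotal):
    logger.info("starting measurement of feature1")
    performancetotal.sample_start("feature1")
    # ... exercise feature1 here ...
    performancetotal.sample_end("feature1")
    logger.info("finished measurement of feature1")
```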
## Getting the results
A new directory named `performance_results` is created inside your project's root folder (unless you specify a custom directory name). Once all the tests are completed, two files are created inside this directory: `results.json` and `results.csv`. The analyzed data includes average time, standard error of the mean (SEM), number of samples, minimum value, maximum value, earliest time, and latest time. The results table is also printed to the terminal log.
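If you want to post-process the analyzed data yourself, you can read the generated files with the standard library. The sketch below simply prints each row of `results.csv`; the exact column names depend on the plugin's output, so inspect the file before relying on specific fields:
```python
import csv
from pathlib import Path

results_path = Path("performance_results") / "results.csv"

with results_path.open(newline="") as f:
    for row in csv.DictReader(f):
        # Each row holds the aggregated statistics for one sample name.
        print(row)
```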
### Analyzing performance data in bulk
To analyze existing performance data in bulk without generating new tests, it is recommended to use the [__performancetotal-cli__ tool](https://www.npmjs.com/package/performancetotal-cli). Although Node.js is required to run it, the tool can handle the results generated by __pytest-performancetotal__.
## Support
For any questions or suggestions, contact me at: [tzur.paldi@outlook.com](mailto:tzur.paldi@outlook.com?subject=pytest-performancetotal%20Support)