# pytest-performancetotal
With this plugin for [pytest](https://github.com/pytest-dev/pytest), which complements the [playwright-pytest](https://github.com/microsoft/playwright-pytest) integration, you can seamlessly incorporate performance analysis into your test flows. It’s designed to work with UI interactions, API calls, or a combination of both, providing a straightforward method for measuring response times and pinpointing potential performance issues within your application. By leveraging this data, you can make strategic decisions to optimize and enhance your application’s performance. For insights into the original concept and additional details, refer to the [article](https://www.linkedin.com/pulse/elevating-your-playwright-tests-plugin-tzur-paldi-phd) on the Node.js version of this plugin.
## Installation
```no-highlight
$ pip install pytest-performancetotal
```
## Usage
To use pytest-performancetotal, simply add the **performancetotal** fixture to the test method. This will include the performance functionality in your test. No further setup is required. Here's an example:
```no-highlight
import time

import pytest


@pytest.mark.parametrize("iteration", [1, 2, 3])
def test_features(performancetotal, iteration):
    # measure the first flow
    performancetotal.sample_start("feature1")
    time.sleep(1)
    performancetotal.sample_end("feature1")

    # measure the second flow
    performancetotal.sample_start("feature2")
    time.sleep(0.5)
    performancetotal.sample_end("feature2")
```
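Because the plugin complements playwright-pytest, the same calls can wrap real browser interactions. Here is a minimal sketch, assuming the `page` fixture from playwright-pytest is available and `https://example.com` stands in for your application:
```no-highlight
def test_homepage_load(performancetotal, page):
    # measure a full page navigation
    performancetotal.sample_start("homepage_load")
    page.goto("https://example.com")
    performancetotal.sample_end("homepage_load")

    # measure a single UI interaction
    performancetotal.sample_start("open_more_info")
    page.click("text=More information")
    performancetotal.sample_end("open_more_info")
```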
You can also get the immediate time span for a single sample inside a test:
```no-highlight
feature1_timespan = performancetotal.get_sample_time("feature1")
```
Be aware that `get_sample_time` returns a single measurement with no statistical analysis.
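One way to use this is as an in-test gate on a single run. The sketch below is illustrative: `run_feature1` is a hypothetical helper standing in for whatever your test exercises, and the threshold value and its unit are assumptions, not part of the plugin's documented contract:
```no-highlight
def test_feature1_is_fast_enough(performancetotal):
    performancetotal.sample_start("feature1")
    run_feature1()  # hypothetical helper that exercises the flow under test
    performancetotal.sample_end("feature1")

    # fail immediately if this single run is too slow (threshold is illustrative)
    feature1_timespan = performancetotal.get_sample_time("feature1")
    assert feature1_timespan < 2000, f"feature1 took {feature1_timespan}"
```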
To use type hints, follow this example:
```no-highlight
from pytest_performancetotal.performance import Performance
def test_features(performancetotal: Performance, iteration):
    # ... your test code here
```
## Options
To disable appending new results to the existing file and start fresh on every run, use:
```no-highlight
pytest --performance-noappend
```
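If you want this behavior by default, the flag can also go into your regular pytest configuration via `addopts` (this relies on pytest's standard configuration mechanism, nothing specific to this plugin). For example, in `pytest.ini`:
```no-highlight
[pytest]
addopts = --performance-noappend
```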
## Getting the results
A new directory named `performance_results` is created inside your project's root folder. Once all the tests have completed, two files are created inside the `performance_results` directory: `results.json` and `results.csv`. The analyzed data includes average time, standard error of the mean (SEM), number of samples, minimum value, maximum value, earliest time, and latest time. The results table is also printed to the terminal log.
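For downstream processing (dashboards, CI thresholds, and so on) you can load the generated files yourself. A minimal sketch using only the standard library, assuming the default `performance_results` location and making no assumptions about the exact column names:
```no-highlight
import csv
import json
from pathlib import Path

results_dir = Path("performance_results")

# the JSON file holds the analyzed results as structured data
data = json.loads((results_dir / "results.json").read_text())
print(data)

# the CSV file holds the same results in spreadsheet-friendly form
with open(results_dir / "results.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row)
```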
## Support
For any questions or suggestions, contact me at: [tzur.paldi@outlook.com](mailto:tzur.paldi@outlook.com?subject=pytest-performancetotal%20Support)