Name | pyperf
Version | 2.8.0
Summary | Python module to run and analyze benchmarks
upload_time | 2024-09-30 22:48:04
home_page | None
maintainer | None
docs_url | None
author | None
requires_python | >=3.7
license | MIT
keywords | None
requirements | No requirements were recorded.
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
******
pyperf
******

.. image:: https://img.shields.io/pypi/v/pyperf.svg
   :alt: Latest release on the Python Cheeseshop (PyPI)
   :target: https://pypi.python.org/pypi/pyperf

.. image:: https://github.com/psf/pyperf/actions/workflows/build.yml/badge.svg
   :alt: Build status of pyperf on GitHub Actions
   :target: https://github.com/psf/pyperf/actions

The Python ``pyperf`` module is a toolkit to write, run and analyze benchmarks.

Features
========

* Simple API to run reliable benchmarks.
* Automatically calibrate a benchmark for a time budget.
* Spawn multiple worker processes.
* Compute the mean and standard deviation.
* Detect if a benchmark result seems unstable.
* JSON format to store benchmark results.
* Support for multiple units: seconds, bytes and integers.

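Calibrating for a time budget means growing the inner loop count until a single sample takes long enough to measure reliably. A minimal sketch of the idea, using only the standard library (this is an illustration, not pyperf's actual implementation; ``calibrate`` and ``min_time`` are made-up names):

```python
import time

def calibrate(func, min_time=0.1):
    """Double the loop count until one timing sample lasts at least
    ``min_time`` seconds (simplified sketch of time-budget calibration)."""
    loops = 1
    while True:
        t0 = time.perf_counter()
        for _ in range(loops):
            func()
        elapsed = time.perf_counter() - t0
        if elapsed >= min_time:
            return loops
        loops *= 2

# Example: calibrate a trivial statement with a small budget.
loops = calibrate(lambda: [1, 2] * 1000, min_time=0.01)
print(loops)  # a power of two
```

Doubling (rather than incrementing) keeps the number of trial measurements logarithmic in the final loop count, so calibration overhead stays small even for very fast statements.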
Usage
=====
To `run a benchmark`_ use the ``pyperf timeit`` command (result written into
``bench.json``)::

    $ python3 -m pyperf timeit '[1,2]*1000' -o bench.json
    .....................
    Mean +- std dev: 4.22 us +- 0.08 us

Or write a benchmark script ``bench.py``:

.. code:: python

    #!/usr/bin/env python3
    import pyperf

    runner = pyperf.Runner()
    runner.timeit(name="sort a sorted list",
                  stmt="sorted(s, key=f)",
                  setup="f = lambda x: x; s = list(range(1000))")

See `the API docs`_ for full details on the ``timeit`` function and the
``Runner`` class. To run the script and dump the results into a file named
``bench.json``::

    $ python3 bench.py -o bench.json

To `analyze benchmark results`_ use the ``pyperf stats`` command::

    $ python3 -m pyperf stats telco.json
    Total duration: 29.2 sec
    Start date: 2016-10-21 03:14:19
    End date: 2016-10-21 03:14:53
    Raw value minimum: 177 ms
    Raw value maximum: 183 ms

    Number of calibration run: 1
    Number of run with values: 40
    Total number of run: 41

    Number of warmup per run: 1
    Number of value per run: 3
    Loop iterations per value: 8
    Total number of values: 120

    Minimum: 22.1 ms
    Median +- MAD: 22.5 ms +- 0.1 ms
    Mean +- std dev: 22.5 ms +- 0.2 ms
    Maximum: 22.9 ms

    0th percentile: 22.1 ms (-2% of the mean) -- minimum
    5th percentile: 22.3 ms (-1% of the mean)
    25th percentile: 22.4 ms (-1% of the mean) -- Q1
    50th percentile: 22.5 ms (-0% of the mean) -- median
    75th percentile: 22.7 ms (+1% of the mean) -- Q3
    95th percentile: 22.9 ms (+2% of the mean)
    100th percentile: 22.9 ms (+2% of the mean) -- maximum

    Number of outlier (out of 22.0 ms..23.0 ms): 0

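The summary statistics in this output (median, MAD, mean, standard deviation) can be recomputed from a list of raw values with the standard library alone. A sketch using a handful of hypothetical samples, not the actual ``telco.json`` data:

```python
import statistics

# Hypothetical per-value timings in milliseconds.
values = [22.1, 22.3, 22.4, 22.5, 22.5, 22.6, 22.7, 22.9]

mean = statistics.mean(values)
stdev = statistics.stdev(values)
median = statistics.median(values)
# Median absolute deviation (MAD): median distance from the median,
# a robust spread measure less sensitive to outliers than std dev.
mad = statistics.median(abs(v - median) for v in values)

print(f"Median +- MAD: {median:.1f} ms +- {mad:.2f} ms")
print(f"Mean +- std dev: {mean:.2f} ms +- {stdev:.2f} ms")
```

Reporting both median/MAD and mean/std dev is useful because a few slow outlier runs can inflate the mean and standard deviation while barely moving the robust pair.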
There's also:

* ``pyperf compare_to`` command tests if a difference is
  significant. It supports comparison between multiple benchmark suites (made
  of multiple benchmarks)::

    $ python3 -m pyperf compare_to --table mult_list_py36.json mult_list_py37.json mult_list_py38.json
    +----------------+----------------+-----------------------+-----------------------+
    | Benchmark      | mult_list_py36 | mult_list_py37        | mult_list_py38        |
    +================+================+=======================+=======================+
    | [1]*1000       | 2.13 us        | 2.09 us: 1.02x faster | not significant       |
    +----------------+----------------+-----------------------+-----------------------+
    | [1,2]*1000     | 3.70 us        | 5.28 us: 1.42x slower | 3.18 us: 1.16x faster |
    +----------------+----------------+-----------------------+-----------------------+
    | [1,2,3]*1000   | 4.61 us        | 6.05 us: 1.31x slower | 4.17 us: 1.11x faster |
    +----------------+----------------+-----------------------+-----------------------+
    | Geometric mean | (ref)          | 1.22x slower          | 1.09x faster          |
    +----------------+----------------+-----------------------+-----------------------+

* ``pyperf system tune`` command to tune your system to run stable benchmarks.
* Automatically collect metadata on the computer and the benchmark:
  use the ``pyperf metadata`` command to display them, or the
  ``pyperf collect_metadata`` command to manually collect them.
* ``--track-memory`` and ``--tracemalloc`` options to track
  the memory usage of a benchmark.

Quick Links
===========
* `pyperf documentation
  <https://pyperf.readthedocs.io/>`_
* `pyperf project homepage at GitHub
  <https://github.com/psf/pyperf>`_ (code, bugs)
* `Download latest pyperf release at the Python Cheeseshop (PyPI)
  <https://pypi.python.org/pypi/pyperf>`_

Command to install pyperf on Python 3::

    python3 -m pip install pyperf

pyperf requires Python 3.7 or newer. Python 2.7 users can use pyperf 1.7.1,
the last version compatible with Python 2.7.
pyperf is distributed under the MIT license.
The pyperf project is covered by the `PSF Code of Conduct
<https://www.python.org/psf/codeofconduct/>`_.

.. _run a benchmark: https://pyperf.readthedocs.io/en/latest/run_benchmark.html
.. _the API docs: https://pyperf.readthedocs.io/en/latest/api.html#Runner.timeit
.. _analyze benchmark results: https://pyperf.readthedocs.io/en/latest/analyze.html