pyperf

Name: pyperf
Version: 2.8.1
Summary: Python module to run and analyze benchmarks
Upload time: 2024-11-15 14:50:16
Requires Python: >=3.9
License: MIT
******
pyperf
******

.. image:: https://img.shields.io/pypi/v/pyperf.svg
   :alt: Latest release on the Python Cheeseshop (PyPI)
   :target: https://pypi.python.org/pypi/pyperf

.. image:: https://github.com/psf/pyperf/actions/workflows/build.yml/badge.svg
   :alt: Build status of pyperf on GitHub Actions
   :target: https://github.com/psf/pyperf/actions

The Python ``pyperf`` module is a toolkit to write, run and analyze benchmarks.

Features
========

* Simple API to run reliable benchmarks.
* Automatically calibrate a benchmark for a time budget.
* Spawn multiple worker processes.
* Compute the mean and standard deviation.
* Detect if a benchmark result seems unstable.
* JSON format to store benchmark results.
* Support multiple units: seconds, bytes, and integers.


Usage
=====

To `run a benchmark`_, use the ``pyperf timeit`` command (the result is written
into ``bench.json``)::

    $ python3 -m pyperf timeit '[1,2]*1000' -o bench.json
    .....................
    Mean +- std dev: 4.22 us +- 0.08 us

Or write a benchmark script ``bench.py``:

.. code:: python

    #!/usr/bin/env python3
    import pyperf

    runner = pyperf.Runner()
    runner.timeit(name="sort a sorted list",
                  stmt="sorted(s, key=f)",
                  setup="f = lambda x: x; s = list(range(1000))")

See `the API docs`_ for full details on the ``timeit`` function and the
``Runner`` class. To run the script and dump the results into a file named
``bench.json``::

    $ python3 bench.py -o bench.json
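
The ``Runner`` class offers more than ``timeit()``. As a minimal sketch (the
workload function below is illustrative, not part of pyperf), a plain Python
function can be benchmarked with ``Runner.bench_func()``:

.. code:: python

    #!/usr/bin/env python3
    import pyperf

    def build_sorted_list():
        # Hypothetical workload: sort a reversed list of 1000 integers.
        return sorted(range(1000), reverse=True)

    runner = pyperf.Runner()
    # bench_func() measures how long it takes to call build_sorted_list().
    runner.bench_func("sort a reversed list", build_sorted_list)

Run it like ``bench.py`` above, for example with ``-o bench.json`` to write
the results to a file.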

To `analyze benchmark results`_, use the ``pyperf stats`` command::

    $ python3 -m pyperf stats telco.json
    Total duration: 29.2 sec
    Start date: 2016-10-21 03:14:19
    End date: 2016-10-21 03:14:53
    Raw value minimum: 177 ms
    Raw value maximum: 183 ms

    Number of calibration run: 1
    Number of run with values: 40
    Total number of run: 41

    Number of warmup per run: 1
    Number of value per run: 3
    Loop iterations per value: 8
    Total number of values: 120

    Minimum:         22.1 ms
    Median +- MAD:   22.5 ms +- 0.1 ms
    Mean +- std dev: 22.5 ms +- 0.2 ms
    Maximum:         22.9 ms

      0th percentile: 22.1 ms (-2% of the mean) -- minimum
      5th percentile: 22.3 ms (-1% of the mean)
     25th percentile: 22.4 ms (-1% of the mean) -- Q1
     50th percentile: 22.5 ms (-0% of the mean) -- median
     75th percentile: 22.7 ms (+1% of the mean) -- Q3
     95th percentile: 22.9 ms (+2% of the mean)
    100th percentile: 22.9 ms (+2% of the mean) -- maximum

    Number of outlier (out of 22.0 ms..23.0 ms): 0
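
Results stored in JSON can also be inspected programmatically. A short sketch,
assuming a ``bench.json`` file written by one of the commands above (the
printed fields are chosen for illustration):

.. code:: python

    import pyperf

    # Load a single benchmark from a JSON file produced by pyperf.
    bench = pyperf.Benchmark.load("bench.json")

    values = bench.get_values()  # all measured values, in seconds
    print("name:", bench.get_name())
    print("mean:", bench.mean(), "std dev:", bench.stdev())
    print("number of values:", len(values))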


There's also:

* The ``pyperf compare_to`` command tests whether a difference is
  significant. It supports comparing multiple benchmark suites (each made
  of multiple benchmarks)
  ::

    $ python3 -m pyperf compare_to --table mult_list_py36.json mult_list_py37.json mult_list_py38.json
    +----------------+----------------+-----------------------+-----------------------+
    | Benchmark      | mult_list_py36 | mult_list_py37        | mult_list_py38        |
    +================+================+=======================+=======================+
    | [1]*1000       | 2.13 us        | 2.09 us: 1.02x faster | not significant       |
    +----------------+----------------+-----------------------+-----------------------+
    | [1,2]*1000     | 3.70 us        | 5.28 us: 1.42x slower | 3.18 us: 1.16x faster |
    +----------------+----------------+-----------------------+-----------------------+
    | [1,2,3]*1000   | 4.61 us        | 6.05 us: 1.31x slower | 4.17 us: 1.11x faster |
    +----------------+----------------+-----------------------+-----------------------+
    | Geometric mean | (ref)          | 1.22x slower          | 1.09x faster          |
    +----------------+----------------+-----------------------+-----------------------+

* ``pyperf system tune`` command to tune your system to run stable benchmarks.
* Automatically collect metadata on the computer and the benchmark:
  use the ``pyperf metadata`` command to display them, or the
  ``pyperf collect_metadata`` command to manually collect them.
* ``--track-memory`` and ``--tracemalloc`` options to track
  the memory usage of a benchmark (a combined example is sketched below).
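
As a rough sketch, assuming a shell on the machine to benchmark, these
commands can be combined as follows (output omitted; ``pyperf system tune``
may require elevated privileges)::

    # Reduce system jitter before benchmarking
    $ python3 -m pyperf system tune

    # Run a benchmark and also track its memory usage
    $ python3 -m pyperf timeit --track-memory '[1,2]*1000' -o bench_mem.json

    # Display metadata about the current machine and Python
    $ python3 -m pyperf metadata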


Quick Links
===========

* `pyperf documentation
  <https://pyperf.readthedocs.io/>`_
* `pyperf project homepage at GitHub
  <https://github.com/psf/pyperf>`_ (code, bugs)
* `Download latest pyperf release at the Python Cheeseshop (PyPI)
  <https://pypi.python.org/pypi/pyperf>`_

Command to install pyperf on Python 3::

    python3 -m pip install pyperf

pyperf requires Python 3.9 or newer.

Python 2.7 users can use pyperf 1.7.1, which is the last version compatible
with Python 2.7.

pyperf is distributed under the MIT license.

The pyperf project is covered by the `PSF Code of Conduct
<https://www.python.org/psf/codeofconduct/>`_.

.. _run a benchmark: https://pyperf.readthedocs.io/en/latest/run_benchmark.html
.. _the API docs: https://pyperf.readthedocs.io/en/latest/api.html#Runner.timeit
.. _analyze benchmark results: https://pyperf.readthedocs.io/en/latest/analyze.html

            

Raw data

{
    "_id": null,
    "home_page": null,
    "name": "pyperf",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": "Dong-hee Na <donghee.na@python.org>",
    "keywords": null,
    "author": null,
    "author_email": "Victor Stinner <vstinner@redhat.com>",
    "download_url": "https://files.pythonhosted.org/packages/c0/9d/833e27f9c2cee94750af544449a49e4bdc7f98c1673f4413589a0abe98a5/pyperf-2.8.1.tar.gz",
    "platform": null,
    "description": "******\npyperf\n******\n\n.. image:: https://img.shields.io/pypi/v/pyperf.svg\n   :alt: Latest release on the Python Cheeseshop (PyPI)\n   :target: https://pypi.python.org/pypi/pyperf\n\n.. image:: https://github.com/psf/pyperf/actions/workflows/build.yml/badge.svg\n   :alt: Build status of pyperf on GitHub Actions\n   :target: https://github.com/psf/pyperf/actions\n\nThe Python ``pyperf`` module is a toolkit to write, run and analyze benchmarks.\n\nFeatures\n========\n\n* Simple API to run reliable benchmarks\n* Automatically calibrate a benchmark for a time budget.\n* Spawn multiple worker processes.\n* Compute the mean and standard deviation.\n* Detect if a benchmark result seems unstable.\n* JSON format to store benchmark results.\n* Support multiple units: seconds, bytes and integer.\n\n\nUsage\n=====\n\nTo `run a benchmark`_ use the ``pyperf timeit`` command (result written into\n``bench.json``)::\n\n    $ python3 -m pyperf timeit '[1,2]*1000' -o bench.json\n    .....................\n    Mean +- std dev: 4.22 us +- 0.08 us\n\nOr write a benchmark script ``bench.py``:\n\n.. code:: python\n\n    #!/usr/bin/env python3\n    import pyperf\n\n    runner = pyperf.Runner()\n    runner.timeit(name=\"sort a sorted list\",\n                  stmt=\"sorted(s, key=f)\",\n                  setup=\"f = lambda x: x; s = list(range(1000))\")\n\nSee `the API docs`_ for full details on the ``timeit`` function and the\n``Runner`` class. To run the script and dump the results into a file named\n``bench.json``::\n\n    $ python3 bench.py -o bench.json\n\nTo `analyze benchmark results`_ use the ``pyperf stats`` command::\n\n    $ python3 -m pyperf stats telco.json\n    Total duration: 29.2 sec\n    Start date: 2016-10-21 03:14:19\n    End date: 2016-10-21 03:14:53\n    Raw value minimum: 177 ms\n    Raw value maximum: 183 ms\n\n    Number of calibration run: 1\n    Number of run with values: 40\n    Total number of run: 41\n\n    Number of warmup per run: 1\n    Number of value per run: 3\n    Loop iterations per value: 8\n    Total number of values: 120\n\n    Minimum:         22.1 ms\n    Median +- MAD:   22.5 ms +- 0.1 ms\n    Mean +- std dev: 22.5 ms +- 0.2 ms\n    Maximum:         22.9 ms\n\n      0th percentile: 22.1 ms (-2% of the mean) -- minimum\n      5th percentile: 22.3 ms (-1% of the mean)\n     25th percentile: 22.4 ms (-1% of the mean) -- Q1\n     50th percentile: 22.5 ms (-0% of the mean) -- median\n     75th percentile: 22.7 ms (+1% of the mean) -- Q3\n     95th percentile: 22.9 ms (+2% of the mean)\n    100th percentile: 22.9 ms (+2% of the mean) -- maximum\n\n    Number of outlier (out of 22.0 ms..23.0 ms): 0\n\n\nThere's also:\n\n* ``pyperf compare_to`` command tests if a difference is\n  significant. 
It supports comparison between multiple benchmark suites (made\n  of multiple benchmarks)\n  ::\n\n    $ python3 -m pyperf compare_to --table mult_list_py36.json mult_list_py37.json mult_list_py38.json\n    +----------------+----------------+-----------------------+-----------------------+\n    | Benchmark      | mult_list_py36 | mult_list_py37        | mult_list_py38        |\n    +================+================+=======================+=======================+\n    | [1]*1000       | 2.13 us        | 2.09 us: 1.02x faster | not significant       |\n    +----------------+----------------+-----------------------+-----------------------+\n    | [1,2]*1000     | 3.70 us        | 5.28 us: 1.42x slower | 3.18 us: 1.16x faster |\n    +----------------+----------------+-----------------------+-----------------------+\n    | [1,2,3]*1000   | 4.61 us        | 6.05 us: 1.31x slower | 4.17 us: 1.11x faster |\n    +----------------+----------------+-----------------------+-----------------------+\n    | Geometric mean | (ref)          | 1.22x slower          | 1.09x faster          |\n    +----------------+----------------+-----------------------+-----------------------+\n\n* ``pyperf system tune`` command to tune your system to run stable benchmarks.\n* Automatically collect metadata on the computer and the benchmark:\n  use the ``pyperf metadata`` command to display them, or the\n  ``pyperf collect_metadata`` command to manually collect them.\n* ``--track-memory`` and ``--tracemalloc`` options to track\n  the memory usage of a benchmark.\n\n\nQuick Links\n===========\n\n* `pyperf documentation\n  <https://pyperf.readthedocs.io/>`_\n* `pyperf project homepage at GitHub\n  <https://github.com/psf/pyperf>`_ (code, bugs)\n* `Download latest pyperf release at the Python Cheeseshop (PyPI)\n  <https://pypi.python.org/pypi/pyperf>`_\n\nCommand to install pyperf on Python 3::\n\n    python3 -m pip install pyperf\n\npyperf requires Python 3.7 or newer.\n\nPython 2.7 users can use pyperf 1.7.1 which is the last version compatible with\nPython 2.7.\n\npyperf is distributed under the MIT license.\n\nThe pyperf project is covered by the `PSF Code of Conduct\n<https://www.python.org/psf/codeofconduct/>`_.\n\n.. _run a benchmark: https://pyperf.readthedocs.io/en/latest/run_benchmark.html\n.. _the API docs: http://pyperf.readthedocs.io/en/latest/api.html#Runner.timeit\n.. _analyze benchmark results: https://pyperf.readthedocs.io/en/latest/analyze.html\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Python module to run and analyze benchmarks",
    "version": "2.8.1",
    "project_urls": {
        "Homepage": "https://github.com/psf/pyperf"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3363e99f5b8034ad4c9fc2d305c3ad96cf68f893c03f0880c17a2694067791c2",
                "md5": "980ec820f17c5294a19096533d79ff1d",
                "sha256": "12a974a800a96568575be51d229b88e6b14197d02440afd98e908d80a42a1a44"
            },
            "downloads": -1,
            "filename": "pyperf-2.8.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "980ec820f17c5294a19096533d79ff1d",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9",
            "size": 142443,
            "upload_time": "2024-11-15T14:50:14",
            "upload_time_iso_8601": "2024-11-15T14:50:14.209172Z",
            "url": "https://files.pythonhosted.org/packages/33/63/e99f5b8034ad4c9fc2d305c3ad96cf68f893c03f0880c17a2694067791c2/pyperf-2.8.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c09d833e27f9c2cee94750af544449a49e4bdc7f98c1673f4413589a0abe98a5",
                "md5": "44cee8949ce6591d0282b4646d7843a2",
                "sha256": "ef103e21a4d04999315003026a2d659c48a7cfce5e1440f03d6e72591400713a"
            },
            "downloads": -1,
            "filename": "pyperf-2.8.1.tar.gz",
            "has_sig": false,
            "md5_digest": "44cee8949ce6591d0282b4646d7843a2",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 225161,
            "upload_time": "2024-11-15T14:50:16",
            "upload_time_iso_8601": "2024-11-15T14:50:16.486843Z",
            "url": "https://files.pythonhosted.org/packages/c0/9d/833e27f9c2cee94750af544449a49e4bdc7f98c1673f4413589a0abe98a5/pyperf-2.8.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-11-15 14:50:16",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "psf",
    "github_project": "pyperf",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "tox": true,
    "lcname": "pyperf"
}
        