benchmark-utils

- Name: benchmark-utils
- Version: 0.2.3
- Home page: https://github.com/ayasyrev/benchmark_utils
- Summary: Utils for benchmark.
- Author: Yasyrev Andrei
- License: apache2
- Upload time: 2023-07-28 08:35:30
- Docs URL: None
- Requirements: none recorded
            # Benchmark utils

Utils for benchmark - wrapper over python timeit.

[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/benchmark-utils)](https://pypi.org/project/benchmark-utils/)
[![PyPI Status](https://badge.fury.io/py/benchmark-utils.svg)](https://badge.fury.io/py/benchmark-utils)  
[![Tests](https://github.com/ayasyrev/benchmark_utils/workflows/Tests/badge.svg)](https://github.com/ayasyrev/benchmark_utils/actions?workflow=Tests)  [![Codecov](https://codecov.io/gh/ayasyrev/benchmark_utils/branch/main/graph/badge.svg)](https://codecov.io/gh/ayasyrev/benchmark_utils)  

Tested on Python 3.7 - 3.11.

## Install

Install from PyPI:  

`pip install benchmark_utils`

Or install from the GitHub repo:

`pip install git+https://github.com/ayasyrev/benchmark_utils.git`

## Basic use

Let's benchmark some (dummy) functions.


```python
from time import sleep

def func_to_test_1(sleep_time: float = 0.1, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)


def func_to_test_2(sleep_time: float = 0.11, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)

```
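benchmark-utils describes itself as a wrapper over the standard `timeit` module, so timing one of these functions directly with plain `timeit` looks like this (a stdlib-only sketch, shown for context):

```python
import timeit
from time import sleep


def func_to_test_1(sleep_time: float = 0.1, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)


# number=1: a single timed call is enough here, since the function
# itself sleeps for a fixed duration
elapsed = timeit.timeit(func_to_test_1, number=1)
print(f"func_to_test_1: {elapsed:.2f} sec / run")
```

`Benchmark` automates this kind of timing loop and adds repeats, naming, and result comparison on top.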

Let's create a benchmark.


```python
from benchmark_utils import Benchmark
```


```python
bench = Benchmark(
    [func_to_test_1, func_to_test_2],
)
```


```python
bench
```
<details open> <summary>output</summary>  
    <pre>Benchmark(func_to_test_1, func_to_test_2)</pre>
</details>



Now we can benchmark those functions.


```python
bench()
```
<details open> <summary>output</summary>  
    <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> Func name  | Sec <span style="color: #800080; text-decoration-color: #800080">/</span> run
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">func_to_test_1:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.10</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.0</span>%
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">func_to_test_2:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.11</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">-9.6</span>%
</pre>

</details>


We can run it again: all of the functions or only some of them, excluding some, and with a different number of repeats.


```python
bench.run(num_repeats=10)
```
<details open> <summary>output</summary>  
    <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> Func name  | Sec <span style="color: #800080; text-decoration-color: #800080">/</span> run
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">func_to_test_1:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.10</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.0</span>%
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">func_to_test_2:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.11</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">-8.8</span>%
</pre>

</details>


After a run, we can print the results: sorted or unsorted, reversed, and compared against the best result or not.


```python
bench.print_results(reverse=True)
```
<details open> <summary>output</summary>  
    <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> Func name  | Sec <span style="color: #800080; text-decoration-color: #800080">/</span> run
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">func_to_test_2:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.11</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.0</span>%
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">func_to_test_1:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.10</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">9.7</span>%
</pre>

</details>
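The percentage column in the output compares each function's time against the best (fastest) result, with slower functions getting a negative percentage. One plausible reading of the arithmetic (an illustration, not the library's actual code) is:

```python
# Sketch of the relative-difference column: compare each measured time
# against the best (fastest) time. Slower values come out negative.
def percent_vs_best(best: float, value: float) -> float:
    """Guess at the formula behind the percentage column."""
    return (best / value - 1) * 100


best, other = 0.10, 0.11
print(f"{percent_vs_best(best, other):.1f}%")  # -9.1%
```

The outputs above (-9.6%, -8.8%) vary around this value because the measured times themselves vary slightly between runs.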


We can add functions to the benchmark as a list of functions (or partials), or as a dictionary: `{"name": function}`.


```python
from functools import partial

bench = Benchmark([
    func_to_test_1,
    partial(func_to_test_1, 0.12),
    partial(func_to_test_1, sleep_time=0.11),
])

```


```python
bench
```
<details open> <summary>output</summary>  
    <pre>Benchmark(func_to_test_1, func_to_test_1(0.12), func_to_test_1(sleep_time=0.11))</pre>
</details>
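The `functools.partial` wrappers pre-bind arguments and keep them inspectable, which is how the repr above can show `func_to_test_1(0.12)` and `func_to_test_1(sleep_time=0.11)`. A quick stdlib illustration:

```python
from functools import partial
from time import sleep


def func_to_test_1(sleep_time: float = 0.1, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)


variant = partial(func_to_test_1, sleep_time=0.11)
# partial exposes the wrapped function and its bound arguments,
# so a wrapper like Benchmark can build a descriptive name from them
print(variant.func.__name__, variant.keywords)  # func_to_test_1 {'sleep_time': 0.11}
variant()  # sleeps for 0.11 seconds
```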




```python
bench.run()
```
<details open> <summary>output</summary>  
    <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> Func name  | Sec <span style="color: #800080; text-decoration-color: #800080">/</span> run
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">func_to_test_1:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.10</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.0</span>%
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #800080; text-decoration-color: #800080; font-weight: bold">func_to_test_1</span><span style="font-weight: bold">(</span><span style="color: #808000; text-decoration-color: #808000">sleep_time</span>=<span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.11</span><span style="font-weight: bold">)</span>:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.11</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">-8.9</span>%
</pre>




<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #800080; text-decoration-color: #800080; font-weight: bold">func_to_test_1</span><span style="font-weight: bold">(</span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.12</span><span style="font-weight: bold">)</span>:   <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0.12</span> <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">-16.5</span>%
</pre>

</details>



```python
bench = Benchmark({
    "func_1": func_to_test_1,
    "func_2": func_to_test_2,
})
```


```python
bench
```
<details open> <summary>output</summary>  
    <pre>Benchmark(func_1, func_2)</pre>
</details>



            
