pybe


Name: pybe
Version: 1.0.0
Summary: Benchmarking python functions
Upload time: 2023-05-28 13:22:32
Author: Nicolai Palm
Requires Python: >=3.9
License: MIT
[Visualization](https://nicolaipalm-pybe-dashboard-dashboard-yb61qz.streamlit.app) //
[Documentation](https://pybe.readthedocs.io/en/latest/source/benchmark.html)

# PyBe - benchmark your Python functions

Benchmark any (Python) function, store the results (as csv or Excel), read them back, and [visualize](https://nicolaipalm-pybe-dashboard-dashboard-yb61qz.streamlit.app)
them with only a few lines of code!

*Table of Contents:*
1. [Structure of a benchmark](#structure-of-a-benchmark)
2. [Installation](#installation)
3. [Getting started](#getting-started)
4. [Structure of csv](#structure-of-benchmark-csv)

## Structure of a benchmark

The general structure of a benchmark script is as follows:
- you have some algorithm
- you want to test the algorithm by varying over a set of inputs
- you evaluate quantities of interest (e.g. some performance metric) on the output of the algorithm

This can be implemented as a Python function:

    def benchmark_function(input):
        result = algorithm(input)
        return {
            "name_performance_metric_1": performance_metric_1(result),
            "name_performance_metric_2": performance_metric_2(result),
            ...
        }


In order to benchmark your algorithm, you simply call the above function on all sets of inputs.
Both this and [storing your results](#structure-of-benchmark-csv) are taken care of by the Benchmark class in pybe.benchmark.
Let's look at a concrete example.

### Example: Optimization algorithm
Let's say you have an optimization algorithm implemented in Python
which takes as inputs
- a function to be optimized and
- the number of runs.

You want to evaluate the optimizer on a certain test function and benchmark how well the optimizer
performs for a given number of runs.
For this, you have a performance metric which can be applied to the output of the optimization and returns
a real number (float).

Then, your benchmark function looks as follows:

    def benchmark_function(number_of_runs):
        result_optimizer = optimizer(test_function, number_of_runs)
        return {"name_performance_metric": performance_metric(result_optimizer)}

Let's say you want to benchmark your optimization algorithm for 10, 100, and 1000 runs.
You can do so with the pybe Benchmark class:

    from pybe.benchmark import Benchmark

    benchmark = Benchmark()
    benchmark(function=benchmark_function, inputs=[10, 100, 1000], name="name_of_my_optimization_algorithm")

Drag the resulting **name_of_my_optimization_algorithm.csv** into the [Dashboard](https://nicolaipalm-pybe-dashboard-dashboard-yb61qz.streamlit.app) and that's it!

## Installation
The official release is available on PyPI:

```
pip install pybe
```

Alternatively, you can install from source by cloning this repository and running the following commands:

```
git clone https://github.com/nicolaipalm/pybe
cd pybe
pip install .
```
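
To verify the installation, you can query the installed version with the Python standard library; this is a generic sanity check, not a pybe feature:

```python
# Quick sanity check that pybe is importable and installed
# (uses only the standard library; not part of pybe itself).
from importlib.metadata import version

import pybe  # raises ImportError if the installation failed

print(version("pybe"))  # e.g. 1.0.0
```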

## Getting started

In order to benchmark a Python function, you only need to implement the function and
specify a few details of the benchmark:

```python
from pybe.benchmark import Benchmark
from pybe.wrappers import timer
import time

benchmark = Benchmark() # initialize pybe's benchmark class


@timer # additionally track the time needed in each iteration
def test_function(i: int):
    time.sleep(0.1)
    return {"name_of_output": i} # specify the output in a dictionary

# benchmark test_function on inputs [1,2,3] and evaluate each input 10 times
benchmark(test_function,
          name="test_benchmark", # set the name of the benchmark
          inputs=[1, 2, 3],
          store=True, # store the benchmark results
          number_runs=10)
```
Look at the benchmark.csv file in your directory!

You can also view the results directly in Python or write them to an Excel or csv file:

```python
print(benchmark.inputs, benchmark.name_outputs)  # print inputs and names of outputs
print(benchmark.result)  # print results as stored in benchmark.csv
benchmark.to_excel(name="my_results")  # write results to an Excel file
benchmark.to_csv(name="my_results")  # write results to a csv file
```

You can read back any stored benchmark result by simply initializing the
Benchmark class with the path to the benchmark .yaml file:

```python
benchmark = Benchmark(benchmark_file_path)
```

## Structure of benchmark csv

The structure of the resulting csv is meant to be intuitive:
- each row represents one call of the benchmarked function, with
  - one column for the input,
  - one column for the name of the benchmark, and
  - one column for each output.

For example, assume the function
- has two outputs: time and value,
- is benchmarked at inputs 10 and 100,
- has the name hello, and
- is evaluated once for each input.

Then the resulting csv/Excel has the following structure:

|   | value    | time  | Input | Name |
|---|----------|-------|-------|------|
| 0 | 0.1      | 1     | 10    | hello|
| 1 | 0.05     | 20    | 100   | hello|
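
If you prefer to work with the stored results directly, here is a small sketch of loading such a csv with pandas and averaging each output per input; the file name `hello.csv` and the use of pandas are assumptions for illustration, not part of pybe:

```python
import pandas as pd

# Load a benchmark csv with the column layout shown above (hypothetical file name).
df = pd.read_csv("hello.csv", index_col=0)

# Average each output column (value, time) per input value.
summary = df.groupby("Input")[["value", "time"]].mean()
print(summary)
```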

## Dashboard

[You can easily visualize your benchmark results!](https://nicolaipalm-pybe-dashboard-dashboard-yb61qz.streamlit.app)

            
