# eGenix Micro Benchmark
**Easily write micro benchmarks in Python.**
*Please note*: This is still an alpha version of the software. Things are likely to change at a higher rate until we reach a point where a stable release can be made.
## Abstract
This package provides a set of tools for easily writing micro benchmarks in Python.
It builds upon the [pyperf](https://pypi.org/project/pyperf/) package, which is an evolution of the older [pybench](https://github.com/python/cpython/tree/v3.6.15/Tools/pybench) tool. pybench was part of Python for a very long time (and was also authored by Marc-André Lemburg, just like this new package). pyperf, written by Victor Stinner, builds upon the pybench concepts, but comes with more modern ways of doing benchmarking and timing, with the aim of producing more stable results.
Since micro benchmarks will typically test language features which run at a nanosecond scale, it is necessary to repeat the test code several times in order to have the test case run long enough to stand out compared to the timing machinery around it.
This package offers a very elegant way to do this and also provides generic discovery functionality to make writing such benchmarks a breeze.
## Example
Here's an example micro benchmark module (examples/bench_example.py):
```python
#!/usr/bin/env python3
import micro_benchmark

def bench_match_int():

    # Init
    obj = 1

    # Bench
    match obj:
        case float():
            type = 'float'
        case int():
            type = 'int'
        case _:
            pass

    # Verify
    assert type == 'int'

# CLI interface
if __name__ == '__main__':
    micro_benchmark.run(globals())
```
## Concept
The *init* part is run to set up the variables for the main part, the *bench* part. This part is not measured.
The *bench* part is run many times inside a loop managed by pyperf to measure the performance. Since the for-loop used for this incurs some timing overhead of its own, the *bench* part is repeated a certain number of times per loop pass (this is called *iterations* in the context of this package).
The *verify* part is run after the bench part to check whether the bench part did in fact run correctly and as expected. This part is not measured.
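The idea behind these three phases can be sketched with plain `time.perf_counter` timing. This is a simplified illustration only; the actual loop management, calibration and repetition are handled by pyperf, and `run_sketch` is a hypothetical name (a simpler bench body is used here to keep the sketch self-contained):

```python
import time

ITERATIONS = 20  # the bench part is repeated this many times per loop pass

def run_sketch(loops=1000):
    # Init: set up variables for the bench part; not measured
    obj = 1
    kind = None

    t0 = time.perf_counter()
    for _ in range(loops):
        # Bench: repeated ITERATIONS times so the measured work
        # dominates the for-loop's own overhead
        for _ in range(ITERATIONS):
            kind = 'int' if isinstance(obj, int) else 'other'
    elapsed = time.perf_counter() - t0

    # Verify: check that the bench part ran correctly; not measured
    assert kind == 'int'

    # Average time per single iteration of the bench part
    return elapsed / (loops * ITERATIONS)
```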
## Running a benchmark
Invoking the benchmark is easy. Simply run it with Python:
```
python3 examples/bench_example.py
```
The benchmark will take all the command line arguments pyperf supports, in addition to these extra ones added by the egenix-micro-benchmark package:
- `--mb-filter=<regexp>`

  Only run those benchmark functions which match the given regular expression. The matching is done as a substring match, so e.g. using `--mb-filter="match"` will match the function in the example module.
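Since the filter is applied as a substring-style regular expression match, the behavior corresponds to Python's `re.search` semantics. An illustrative sketch (`matches_filter` is a hypothetical helper, not the package's actual code):

```python
import re

def matches_filter(func_name, pattern):
    # Substring match: the pattern may occur anywhere in the name
    return re.search(pattern, func_name) is not None

# 'match' is a substring of 'bench_match_int', so the filter selects it
print(matches_filter('bench_match_int', 'match'))   # True
print(matches_filter('bench_match_int', 'float'))   # False
```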
The output will look something like this:
```
.....................
bench_match_int: Mean +- std dev: 105 ns +- 10 ns
```
giving you the time it took to run a single iteration of the bench part, together with an indication of how reliable this reading is, in the form of the standard deviation of the timings.
In some cases, pyperf may warn you about unstable results. Benchmarking typically works best on quiet machines which don't have much else to do.
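The reported mean and standard deviation are plain statistics over the per-iteration timings, e.g. computable with Python's `statistics` module (illustrative only; the timing values below are made up, and pyperf computes these figures internally):

```python
import statistics

# Hypothetical per-iteration timings in nanoseconds
timings_ns = [98, 104, 110, 101, 112, 95, 107, 109]

mean = statistics.mean(timings_ns)
stdev = statistics.stdev(timings_ns)

print(f"bench_example: Mean +- std dev: {mean:.0f} ns +- {stdev:.0f} ns")
```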
# Public API
`micro_benchmark.run(namespace, prefix='bench_', filters=None)`
> Run all benchmark functions found in namespace.
>
> *namespace* can be an object with an '`.items()`' method (e.g. the
globals() dictionary) or a `.__dict__` attribute (e.g. a module,
package, class, etc.).
>
> *prefix* is the prefix name of benchmark functions to look for
(defaults to '`bench_`').
>
> *filters* may be given as a list of regular expressions to limit the
number of functions to run. The expressions are OR-joined. If the
parameter is not given, the command line argument `--mb-filter` is used.
If this is missing as well, no filtering takes place.
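A rough sketch of the discovery logic this documentation describes, i.e. how benchmark functions could be collected from a namespace by prefix and filters (an assumption-based illustration, not the package's actual implementation):

```python
import re

def discover(namespace, prefix='bench_', filters=None):
    # Accept either a dict-like object with .items() (e.g. globals())
    # or an object with a __dict__ attribute (e.g. a module or class)
    if hasattr(namespace, 'items'):
        items = namespace.items()
    else:
        items = vars(namespace).items()
    found = []
    for name, obj in items:
        if not (callable(obj) and name.startswith(prefix)):
            continue
        # Filter expressions are OR-joined, substring-style matches
        if filters and not any(re.search(f, name) for f in filters):
            continue
        found.append(obj)
    return found
```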
`micro_benchmark.configure(iterations=None, name=None)`
> Provide additional configuration for a benchmark function.
>
> *iterations* can be set to override the default for this function
(which is 20).
>
> *name* can be given to provide a more verbose name for the function.
The name is used by pyperf when generating output and for recording the
results in the JSON results file. It defaults to the function's name.
# Development
## Preparing the venv
In order to prepare the virtual env needed for the package to run, edit the `Makefile` to your liking and then run:
```
make install-venv
source env.sh # for bash
source env.csh # for C-shell
make install-packages
```
(or use any other virtual env tool you like :-))
## Create a release
- Make sure you update the version number in `micro_benchmark/__init__.py`
- Create a distribution and upload to TestPyPI
```
make create-dist
make test-upload
```
- Check the release on TestPyPI and try downloading the package from there
  - Special attention should be paid to the contents of the .tar.gz file
  - This should contain all necessary files to build the package
- Publish to PyPI:
```
make prod-upload
```
- Send out release emails
## Roadmap
- [x] Turn into a package
- [x] Release as a PyPI package
- [ ] Add more documentation and convert to MkDocs
- [ ] Add a whole set of micro benchmarks (e.g. the ones from pybench)
  - May be better to do this as a separate package
# License
(c) Copyright 2024, eGenix.com Software, Skills and Services GmbH, Germany.
This software is licensed under the Apache License, Version 2.0.
Please see the LICENSE file for details.
# Contact
For inquiries related to the package, please write to info@egenix.com.