nnbench 0.3.0

- Summary: A small framework for benchmarking machine learning models.
- Requires Python: >=3.9
- License: Apache-2.0
- Keywords: benchmarking, machine learning
- Uploaded: 2024-03-27 15:15:47
# nnbench: A small framework for benchmarking machine learning models

Welcome to nnbench, a framework for benchmarking machine learning models.
The main goals of this project are

1. To provide a portable, easy-to-use solution for model evaluation that leads to better ML experiment organization, and
2. To integrate with experiment and metadata tracking solutions for easy adoption.

At a high level, you can think of nnbench as "pytest for ML models" - you define benchmarks similarly to test cases, collect them, and selectively run them based on model type, markers, and environment info.
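
The collect-and-run pattern behind this analogy can be sketched in a few lines of plain Python. Note that this is an illustration of the idea only, not nnbench's actual implementation; the `benchmark` decorator, registry, and `run` function below are hypothetical stand-ins:

```python
# Illustration of the collect-and-run idea in plain Python; this is NOT
# nnbench's actual implementation, just a sketch of the pattern.
from typing import Any, Callable

_REGISTRY: list[dict[str, Any]] = []


def benchmark(*, tags: tuple[str, ...] = ()) -> Callable:
    """Register a function as a benchmark, much like a pytest test case."""

    def decorator(fn: Callable) -> Callable:
        _REGISTRY.append({"fn": fn, "name": fn.__name__, "tags": tags})
        return fn

    return decorator


@benchmark(tags=("arithmetic",))
def product(a: int, b: int) -> int:
    return a * b


@benchmark(tags=("power",))
def power(a: int, b: int) -> int:
    return a**b


def run(params: dict[str, Any], tags: tuple[str, ...] = ()) -> list[dict[str, Any]]:
    """Run every registered benchmark, optionally filtered by tag."""
    results = []
    for bm in _REGISTRY:
        if tags and not set(tags) & set(bm["tags"]):
            continue  # selective running, akin to pytest markers
        results.append({"name": bm["name"], "value": bm["fn"](**params)})
    return results


print(run({"a": 2, "b": 10}, tags=("arithmetic",)))
# prints [{'name': 'product', 'value': 20}]
```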

What's new is that upon completion, you can stream the resulting data to any sink of your choice (including multiple at the same time), which allows easy integration with experiment trackers and metadata stores.
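
Streaming to several sinks at once amounts to fanning each finished record out to a list of writers. Here is a minimal sketch of that idea; the `stream` function and `Sink` type are hypothetical illustrations, since nnbench ships its own reporter and sink interfaces:

```python
# Sketch: fanning a finished benchmark record out to multiple sinks at once.
# Hypothetical illustration only; nnbench provides its own reporter interfaces.
import json
from typing import Any, Callable

Record = dict[str, Any]
Sink = Callable[[Record], None]


def stream(record: Record, sinks: list[Sink]) -> None:
    """Send one finished benchmark record to every configured sink."""
    for sink in sinks:
        sink(record)


# two example sinks: an in-memory list and JSON lines on stdout
collected: list[Record] = []
record = {"name": "product", "value": 20, "parameters": {"a": 2, "b": 10}}
stream(record, sinks=[collected.append, lambda r: print(json.dumps(r))])
```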

See the [quickstart](https://aai-institute.github.io/nnbench/latest/quickstart/) for a lightning-quick demo, or the [examples](https://aai-institute.github.io/nnbench/latest/tutorials/) for more advanced usages.

## Installation

⚠️ nnbench is an experimental project - expect bugs and sharp edges.

Install it from PyPI using either `pip` or `poetry`:

```shell
pip install nnbench
# or
poetry add nnbench
```

## A ⚡️-quick demo

To understand how nnbench works, you can run the following in your Python interpreter:

```python
# example.py
import nnbench


@nnbench.benchmark
def product(a: int, b: int) -> int:
    return a * b


@nnbench.benchmark
def power(a: int, b: int) -> int:
    return a ** b


runner = nnbench.BenchmarkRunner()
# run the above benchmarks with the parameters `a=2, b=10`...
record = runner.run("__main__", params={"a": 2, "b": 10})
rep = nnbench.BenchmarkReporter()
rep.display(record)  # ...and print the results to the terminal.

# results in a table look like the following:
# name     function    date                 parameters         value    time_ns
# -------  ----------  -------------------  -----------------  -------  ---------
# product  product     2024-03-08T18:03:48  {'a': 2, 'b': 10}       20       1000
# power    power       2024-03-08T18:03:48  {'a': 2, 'b': 10}     1024        750
```

For a more realistic example of how to evaluate a trained model with a benchmark suite, check the [Quickstart](https://aai-institute.github.io/nnbench/latest/quickstart/).
For even more advanced usages of the library, you can check out the [Examples](https://aai-institute.github.io/nnbench/latest/tutorials/) in the documentation.

## Contributing

We encourage and welcome contributions from the community to enhance the project.
Please check [discussions](https://github.com/aai-institute/nnbench/discussions) or raise an [issue](https://github.com/aai-institute/nnbench/issues) on GitHub for any problems you encounter with the library.

For information on the general development workflow, see the [contribution guide](CONTRIBUTING.md).

## License

The nnbench library is distributed under the [Apache-2.0 license](LICENSE).

            
