<div align="center" id="top">
<img src="https://github.com/BasedLabs/aibenchmark/raw/4f774ab8ad881724103b69ecd328a1eb80a94d3b/media/aibenchmark-logo.png" width="250px" alt="aibenchmark" />
 
<!-- <a href="https://aibenchmark.netlify.app">Demo</a> -->
</div>
<h1 align="center">AIBenchmark</h1>
<h2 align="center">Benchmark your model against other models</h2>
<p align="center">
<img alt="Github top language" src="https://img.shields.io/github/languages/top/BasedLabs/aibenchmark?color=56BEB8">
<img alt="Github language count" src="https://img.shields.io/github/languages/count/BasedLabs/aibenchmark?color=56BEB8">
<img alt="Repository size" src="https://img.shields.io/github/repo-size/BasedLabs/aibenchmark?color=56BEB8">
<img alt="License" src="https://img.shields.io/github/license/BasedLabs/aibenchmark?color=56BEB8">
<!-- <img alt="Github issues" src="https://img.shields.io/github/issues/BasedLabs/aibenchmark?color=56BEB8" /> -->
<!-- <img alt="Github forks" src="https://img.shields.io/github/forks/BasedLabs/aibenchmark?color=56BEB8" /> -->
<!-- <img alt="Github stars" src="https://img.shields.io/github/stars/BasedLabs/aibenchmark?color=56BEB8" /> -->
</p>
<!-- Status -->
<!-- <h4 align="center">
🚧 NoLabs 🚀 Under construction... 🚧
</h4>
<hr> -->
<p align="center">
<a href="#about">About</a>   |  
<a href="#features">Features</a>   |  
<a href="#technologies">Technologies</a>   |  
<a href="#installation">Installation</a>   |  
<a href="#memo-license">License</a>   |  
<a href="https://github.com/BasedLabs" target="_blank">Author</a>
</p>
<br>
## Installation ##
Run the following command in your terminal:
```bash
$ pip install aibench
```
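Note that the package is published on PyPI as `aibench`, while the import name is `aibenchmark`, as in the examples below.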
## About ##
AIBenchmark is a package that lets you quickly benchmark your model on popular datasets and compare the results against existing leaderboards. It also ships a collection of metrics that you can easily import.
We currently support 14 text-based and 2 image-based datasets for auto-benchmarking regression and classification tasks. The available datasets are listed in the `aibenchmark/dataset.py` file, or can be printed with the following code:
```python
from aibenchmark.dataset import DatasetsList
print(list(DatasetsList.get_available_datasets()))
```
Code example for benchmarking:
```python
import torch

from aibenchmark.benchmark import Benchmark
from aibenchmark.dataset import DatasetInfo, DatasetsList

# Load the benchmark dataset and inspect its metadata
benchmark = Benchmark(DatasetsList.Texts.SST)
dataset_info: DatasetInfo = benchmark.dataset_info
print(dataset_info)

test_features = dataset_info.data['Texts']
model = torch.load(...)
# Implement your own inference here, depending on the type of model you use
# and your pre- and post-processing.
outputs = model.predict(test_features)

# Score your model's predictions against the dataset's ground truth
benchmark_results = benchmark.run(predictions=outputs, metrics=['accuracy', 'precision', 'recall', 'f1_score'])

# Computed metrics
print(benchmark_results)
# Existing leaderboard for this dataset
print(benchmark.get_existing_benchmarks())
```
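For a fully runnable end-to-end example, here is a minimal sketch that swaps the PyTorch model above for a small scikit-learn pipeline. The two-sentence training corpus is purely hypothetical (in practice you would train on your own labelled data), and the sketch assumes, as in the example above, that `benchmark.run` accepts any sequence of class predictions:
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

from aibenchmark.benchmark import Benchmark
from aibenchmark.dataset import DatasetsList

# Load the benchmark and grab its test texts
benchmark = Benchmark(DatasetsList.Texts.SST)
texts = benchmark.dataset_info.data['Texts']

# Hypothetical training data, for illustration only
train_texts = ["a great movie", "a terrible movie"]
train_labels = [1, 0]

# Fit a simple TF-IDF + logistic regression classifier
vectoriser = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectoriser.fit_transform(train_texts), train_labels)

# Predict on the benchmark texts and score the results
predictions = classifier.predict(vectoriser.transform(texts))
results = benchmark.run(predictions=predictions,
                        metrics=['accuracy', 'precision', 'recall', 'f1_score'])
print(results)
```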
## Features ##
1) Fast comparison of your model's metrics against other SOTA models on a particular dataset
2) Support for 16+ of the most popular datasets, with the list constantly growing; we plan to support more than 1,000 datasets soon
3) All metrics in one place, added in a standardised way (see the sketch below)
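For reference, the four metrics requested in the benchmarking example above can be reproduced with scikit-learn. This is only an illustration of what each metric computes, not AIBenchmark's internal implementation, and the labels are hypothetical:
```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1]  # hypothetical ground-truth labels
y_pred = [0, 1, 0, 0, 1]  # hypothetical model predictions

print('accuracy: ', accuracy_score(y_true, y_pred))   # (TP + TN) / total
print('precision:', precision_score(y_true, y_pred))  # TP / (TP + FP)
print('recall:   ', recall_score(y_true, y_pred))     # TP / (TP + FN)
print('f1_score: ', f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```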
## Technologies ##
The following tools were used in this project:
- [PyTorch](https://pytorch.org/)
- [Transformers](https://huggingface.co/transformers)
- [Scikit-learn](https://scikit-learn.org/stable/)
## :memo: License ##
This project is licensed under the MIT License. For more details, see the [LICENSE](LICENSE.md) file.
Made by <a href="https://github.com/jaktenstid" target="_blank">Igor</a> and <a href="https://github.com/timurishmuratov7" target="_blank">Tim</a>
 
<a href="#top">Back to top</a>