[AI Benchmark Alpha](http://ai-benchmark.com/alpha) is an open-source Python library for evaluating the AI performance of various hardware platforms, including CPUs, GPUs and TPUs. The benchmark relies on the [TensorFlow](https://www.tensorflow.org) machine learning library and provides a lightweight and accurate solution for assessing the inference and training speed of key Deep Learning models.</br></br>
In total, AI Benchmark consists of <b>42 tests</b> and <b>19 sections</b>, listed below:</br>
1. MobileNet-V2 `[classification]`
2. Inception-V3 `[classification]`
3. Inception-V4 `[classification]`
4. Inception-ResNet-V2 `[classification]`
5. ResNet-V2-50 `[classification]`
6. ResNet-V2-152 `[classification]`
7. VGG-16 `[classification]`
8. SRCNN 9-5-5 `[image-to-image mapping]`
9. VGG-19 `[image-to-image mapping]`
10. ResNet-SRGAN `[image-to-image mapping]`
11. ResNet-DPED `[image-to-image mapping]`
12. U-Net `[image-to-image mapping]`
13. Nvidia-SPADE `[image-to-image mapping]`
14. ICNet `[image segmentation]`
15. PSPNet `[image segmentation]`
16. DeepLab `[image segmentation]`
17. Pixel-RNN `[inpainting]`
18. LSTM `[sentence sentiment analysis]`
19. GNMT `[text translation]`
For more information and results, please visit the project website: [http://ai-benchmark.com/alpha](http://ai-benchmark.com/alpha)</br></br>
#### Installation Instructions </br>
The benchmark requires the TensorFlow machine learning library to be installed on your system.
On systems that <b>do not have Nvidia GPUs</b>, run the following commands to install AI Benchmark:
```bash
pip install tensorflow
pip install ai-benchmark
```
</br>
If you want to check the <b>performance of Nvidia graphics cards</b>, run the following commands:
```bash
pip install tensorflow-gpu
pip install ai-benchmark
```
<b>`Note 1:`</b> If TensorFlow is already installed on your system, you can skip the first command.
<b>`Note 2:`</b> To run the benchmark on Nvidia GPUs, the <b>`NVIDIA CUDA`</b> and <b>`cuDNN`</b> libraries should be installed first. Detailed installation instructions can be found [here](https://www.tensorflow.org/install/gpu).
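Before launching the benchmark, you may want to verify that TensorFlow actually detects your GPU. A minimal check, assuming a recent TensorFlow 2.x installation (on TensorFlow 1.x, `tf.test.is_gpu_available()` serves the same purpose):
```python
import tensorflow as tf

# Lists the GPUs visible to TensorFlow; an empty list usually means that
# CUDA/cuDNN are missing or incompatible with the installed TensorFlow build.
print("GPUs detected:", tf.config.list_physical_devices("GPU"))
```
</br></br>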
#### Getting Started </br>
To run AI Benchmark, use the following code:
```python
from ai_benchmark import AIBenchmark
benchmark = AIBenchmark()
results = benchmark.run()
```
Alternatively, on Linux systems you can run `ai-benchmark` from the command line to start the tests.
To run inference or training only, use `benchmark.run_inference()` or `benchmark.run_training()`, as in the sketch below.
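For example, an inference-only run built from the calls above:
```python
from ai_benchmark import AIBenchmark

# Measure inference speed only; benchmark.run_training() runs the training tests only.
benchmark = AIBenchmark()
inference_results = benchmark.run_inference()
```
</br></br>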
#### Advanced settings </br>
```python
AIBenchmark(use_CPU=None, verbose_level=1)
```
> use_CPU=`{True, False, None}`: whether to run the tests on CPUs (if tensorflow-gpu is installed)
> verbose_level=`{0, 1, 2, 3}`: run tests silently | with short summary | with information about each run | with TF logs
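For instance, a sketch that forces the tests onto the CPU and prints per-run details:
```python
from ai_benchmark import AIBenchmark

# use_CPU=True runs the tests on the CPU even when tensorflow-gpu is installed;
# verbose_level=2 prints information about each individual run.
benchmark = AIBenchmark(use_CPU=True, verbose_level=2)
results = benchmark.run()
```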
```python
benchmark.run(precision="normal")
```
> precision=`{"normal", "high"}`: if `high` is selected, the benchmark will execute 10 times more runs for each test.
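Continuing the example above, a high-precision run would look like this:
```python
# precision="high" executes 10 times more runs per test, trading speed for more stable results.
results = benchmark.run(precision="high")
```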
</br>
### Additional Notes and Requirements </br>
A GPU with at least 2GB of memory is required to run the inference tests, and at least 4GB for the training tests.
The benchmark is compatible with both `TensorFlow 1.x` and `2.x` versions. </br></br>
### Contacts </br>
Please contact `andrey@vision.ee.ethz.ch` for any feedback or information.