neurocorgi-sdk

Name: neurocorgi-sdk
Version: 2.0.2
Summary: NeuroCorgi-SDK to use the NeuroCorgi model in object detection, instance segmentation and image classification apps.
Upload time: 2024-01-29 13:31:36
Maintainers: Vincent Templier, Lilian Billod
Authors: Ivan Miro Panades, Vincent Lorrain, Lilian Billod, Inna Kucher, Vincent Templier
Requires Python: >=3.7
License: CECILL-2.1
Keywords: machine-learning, deep-learning, computer-vision, ML, DL, AI, NeuroCorgi, CEA
# NeuroCorgi SDK

[![NeuroCorgi SDK CI](https://github.com/CEA-LIST/neurocorgi_sdk/actions/workflows/ci.yaml/badge.svg)](https://github.com/CEA-LIST/neurocorgi_sdk/actions/workflows/ci.yaml)
[![License](https://img.shields.io/badge/license-CeCILL--C-blue.svg)](LICENSE)
[![PyPI version](https://badge.fury.io/py/neurocorgi_sdk.svg)](https://pypi.org/project/neurocorgi_sdk)

The NeuroCorgi-SDK is an SDK for using the NeuroCorgi model as a feature extractor in your object detection, instance segmentation and classification applications. <br>
This SDK is developed within the Andante project. For more information about Andante, see https://www.andante-ai.eu/.

The SDK provides software versions of the NeuroCorgi circuit that simulate the behaviour of the models on chip. Two versions have been designed from a MobileNetV1 trained and quantized to 4 bits with the [SAT](https://arxiv.org/abs/1912.10207) method in [N2D2](https://github.com/CEA-LIST/N2D2): one on the [ImageNet](https://www.image-net.org/index.php) dataset and one on the [COCO](https://cocodataset.org/#home) dataset.

<div align="center">
    <a href="https://ai4di.automotive.oth-aw.de/images/EAI-PDF/2022-09-19_EAI_S2_P2-CEA_IvanMiro-Panades.pdf#page=17">
    <img src="https://github.com/CEA-LIST/neurocorgi_sdk/raw/master/docs/_static/NeuroCorgi_design.png" width="80%" alt="NeuroCorgi ASIC">
    </a>
</div>
<div align="center">
    <i>NeuroCorgi ASIC</i>
</div>
<br>

The NeuroCorgi ASIC extracts features from HD images (1280x720) at 30 FPS while consuming less than 100 mW.

<div align="center">
    <a href="https://ai4di.automotive.oth-aw.de/images/EAI-PDF/2022-09-19_EAI_S2_P2-CEA_IvanMiro-Panades.pdf#page=10">
    <img src="https://github.com/CEA-LIST/neurocorgi_sdk/raw/master/docs/_static/NeuroCorgi_performance.png" width="80%" alt="NeuroCorgi performance">
    </a>
</div>
<div align="center">
    <i>NeuroCorgi performance target</i>
</div>
<br>

For more information about the NeuroCorgi ASIC, check the [presentation](https://ai4di.automotive.oth-aw.de/images/EAI-PDF/2022-09-19_EAI_S2_P2-CEA_IvanMiro-Panades.pdf) of [Ivan Miro-Panades](https://www.linkedin.com/in/ivanmiro/) at the International Workshop on Embedded Artificial Intelligence Devices, Systems, and Industrial Applications (EAI).


## Installation

Before installing the SDK package, make sure you have the weight files of the NeuroCorgi models, since they are required to use the SDK. Weights exist for both the ImageNet and COCO versions of the circuit, in two formats each. <br>
Choose the files you need:

| | For PyTorch integration | For N2D2 integration |
|:-:|:-:|:-:|
| ImageNet chip | `neurocorginet_imagenet.safetensors` | `imagenet_weights.zip` |
| COCO chip | `neurocorginet_coco.safetensors` | `coco_weights.zip` |

Please send an email to [Ivan Miro-Panades](mailto:ivan.miro-panades@cea.fr) or [Vincent Templier](mailto:vincent.templier@cea.fr) to get the files.

### Via PyPI

Install the SDK package and all its requirements with pip in a [**Python>=3.7**](https://www.python.org/) environment with [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/):
```
pip install neurocorgi_sdk
```
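
To quickly check that the installation worked, a simple sanity check is to import the model class shown in the Getting Started section below:
```
python -c "from neurocorgi_sdk import NeuroCorgiNet; print('neurocorgi_sdk imported successfully')"
```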


### From Source

To install the SDK from source, run the following in your [**Python>=3.7**](https://www.python.org/) environment:
```
git clone https://github.com/CEA-LIST/neurocorgi_sdk
cd neurocorgi_sdk
pip install .
```
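
If you plan to modify the SDK itself, a standard pip editable install can be used instead of the last step (this is generic pip behaviour, not an SDK-specific requirement):
```
pip install -e .
```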

### With Docker

If you wish to work inside a Docker container, you need to build the image first. <br>
To do so, run:
```
docker build --pull --rm -f "docker/Dockerfile" -t neurocorgi:neurocorgi_sdk "."
```

After building the image, start a container:
```
docker run --name myContainer --gpus=all -it neurocorgi:neurocorgi_sdk
```
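
Since the weight files are obtained separately (see above), you may want them available inside the container. A possible invocation, assuming the weights sit in a local `weights/` directory on the host and using `/workspace/weights` as an arbitrary target path inside the container:
```
docker run --name myContainer --gpus=all -v "$(pwd)/weights:/workspace/weights" -it neurocorgi:neurocorgi_sdk
```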


## Getting Started

Use `NeuroCorgiNet` directly in a Python environment:
```python
import requests
from PIL import Image

from neurocorgi_sdk import NeuroCorgiNet
from neurocorgi_sdk.transforms import ToNeuroCorgiChip

# Load a model
model = NeuroCorgiNet("neurocorginet_imagenet.safetensors")

# Download and open an example image (use the raw file URL so PIL receives actual image data)
url = "https://github.com/CEA-LIST/neurocorgi_sdk/raw/master/neurocorgi_sdk/assets/corgi.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Convert the image to the 8-bit (0-255) format expected by the chip
img = ToNeuroCorgiChip()(image)

# Use the model: it returns four feature maps
div4, div8, div16, div32 = model(img)
```
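
The four returned tensors are intermediate feature maps; judging by their names, they should correspond to downscaling factors of 4, 8, 16 and 32 relative to the input resolution (an assumption based on the naming, not a documented guarantee). A quick way to inspect them:
```python
# Print the shape of each feature map returned by the model
for name, feat in zip(("div4", "div8", "div16", "div32"), (div4, div8, div16, div32)):
    print(name, tuple(feat.shape))
```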

## About the NeuroCorgi model

The NeuroCorgi circuit embeds a version of MobileNetV1 that has been trained and quantized to 4 bits. It requires unsigned 8-bit inputs, so **inputs provided to `NeuroCorgiNet` must be between 0 and 255**. <br>
You can use the `ToNeuroCorgiChip` transform to convert your images to the correct format. (Do not use it with the fake-quantized version `NeuroCorgiNet_fakequant`, whose inputs must be between 0 and 1.)

Moreover, since the model is fixed on chip, its parameters cannot be modified. The NeuroCorgi models therefore cannot be trained, but you can train additional models that plug into `NeuroCorgiNet` for your own applications, as sketched below.
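
As an illustration of this idea, the following minimal sketch trains a small classification head on top of the frozen backbone. The weight file name and the four outputs come from the snippets above; the use of the `div32` feature map, the assumed 1024-channel width, the 100-class output and the optimizer settings are arbitrary choices for the example.
```python
import torch
import torch.nn as nn
from neurocorgi_sdk import NeuroCorgiNet

# Frozen feature extractor (its parameters are fixed, as on the chip)
backbone = NeuroCorgiNet("neurocorginet_imagenet.safetensors")

# Small trainable head on top of the deepest feature map.
# 1024 input channels is an assumption (standard MobileNetV1 width);
# adjust it to the actual channel count of the div32 output.
num_classes = 100  # e.g. CIFAR-100
head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(1024, num_classes),
)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One optimization step; `images` must be batched and in the 0-255 range."""
    with torch.no_grad():                 # the backbone stays frozen
        div4, div8, div16, div32 = backbone(images)
    logits = head(div32)                  # only the head receives gradients
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```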


## Examples

You can find [examples](./examples/) of how to integrate the models into your applications.

For instance:
- [ImageNet](./examples/classification/Imagenet.ipynb): use `NeuroCorgiNet` to run ImageNet inference.
    You need a local installation of the ImageNet database for this notebook to work; once it is installed,
    set the `ILSVRC2012_root` variable to the correct directory.
- [Cifar100](./examples/classification/Cifar100.ipynb): use `NeuroCorgiNet` for transfer learning on the CIFAR-100 classification challenge.

To evaluate NeuroCorgi's performance on other tasks, use these scripts as a starting point.


## The Team

The NeuroCorgi-SDK is a project that brought together several skilled engineers and researchers.

This SDK is currently maintained by [Lilian Billod](https://fr.linkedin.com/in/lilian-billod-3737b6177) and [Vincent Templier](http://www.linkedin.com/in/vincent-templier).
A huge thank you to the people who contributed to the creation of this SDK: [Ivan Miro-Panades](https://www.linkedin.com/in/ivanmiro/), [Vincent Lorrain](https://fr.linkedin.com/in/vincent-lorrain-71510583), [Inna Kucher](https://fr.linkedin.com/in/inna-kucher-phd-14528169), [David Briand](https://fr.linkedin.com/in/david-briand-a0b1524a), [Johannes Thiele](https://ch.linkedin.com/in/johannes-thiele-51b795130), [Cyril Moineau](https://fr.linkedin.com/in/cmoineau), [Nermine Ali](https://fr.linkedin.com/in/nermineali) and [Olivier Bichler](https://fr.linkedin.com/in/olivierbichler).


## License

This SDK is distributed under a CeCILL-style license; see the [LICENSE](LICENSE) file.
