<div align="center">
<p>
<a align="center" href="" target="_blank">
<img
width="850"
src="https://media.roboflow.com/open-source/autodistill/autodistill-banner.png"
>
</a>
</p>
</div>
# Autodistill Azure Vision Module
This repository contains the code supporting the Azure Vision module for use with [Autodistill](https://github.com/autodistill/autodistill).
The [Azure Vision Image Analysis 4.0 API](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image-40?tabs=csharp&pivots=programming-language-rest-api) enables you to detect objects in images.
With this package, you can use the API to automatically label data for use in training a fine-tuned computer vision model.
This is ideal if you want to train a model that you own on a custom dataset.
You can then use your trained model on your computer using Autodistill, or at the edge or in the cloud by deploying with [Roboflow Inference](https://inference.roboflow.com).
Read the full [Autodistill documentation](https://autodistill.github.io/autodistill/).
Read the [Autodistill Azure Vision documentation](https://autodistill.github.io/autodistill/base_models/azure_vision/).
## Installation
> [!NOTE]
> Using this project will incur billing charges for API calls to the [Azure Vision Image Analysis 4.0 API](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image-40?tabs=csharp&pivots=programming-language-rest-api).
> Refer to the [Azure Computer Vision pricing](https://azure.microsoft.com/en-gb/pricing/details/cognitive-services/computer-vision/) for more information and to calculate your expected pricing. This package makes one API call per image you want to label.
To use Azure Vision with Autodistill, you need to install the following dependency:
```bash
pip install autodistill-azure-vision
```
Next, you will need to [create an Azure account](https://azure.microsoft.com/en-gb/get-started/azure-portal). Once you have an Azure account, create a "Computer Vision" resource from the "Azure AI services" dashboard in Azure.
This resource provides two API keys and an endpoint URL. You will need one of the API keys and the endpoint URL to use this Autodistill module.
Set your API key and endpoint URL in your environment:
```bash
export AZURE_VISION_SUBSCRIPTION_KEY=<your api key>
export AZURE_VISION_ENDPOINT=<your endpoint>
```
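Before running the quickstart, you can confirm both variables are visible to Python. This is a minimal sanity check, not part of the package; the variable names match the `export` commands above:

```python
import os

# environment variables this module reads (as set in the export commands above)
REQUIRED_VARS = ["AZURE_VISION_SUBSCRIPTION_KEY", "AZURE_VISION_ENDPOINT"]

def missing_vars(env=None):
    """Return the names of any required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# report anything still missing before calling the API
print(missing_vars())
```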
Use the quickstart below to start labeling images.
## Quickstart
### Label a Single Image
```python
import os

import cv2
import supervision as sv
from autodistill.detection import CaptionOntology
from autodistill_azure_vision import AzureVision

# define an ontology to map prompts to class names for Azure Vision
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = AzureVision(
    ontology=CaptionOntology(
        {
            "animal": "animal",
            "a forklift": "forklift"
        }
    ),
    endpoint=os.environ["AZURE_VISION_ENDPOINT"],
    subscription_key=os.environ["AZURE_VISION_SUBSCRIPTION_KEY"]
)

# run inference on a single image
detections = base_model.predict("image.jpeg")

print(detections)

# annotate predictions on the image
classes = base_model.ontology.classes()

box_annotator = sv.BoxAnnotator()

labels = [
    f"{classes[class_id]} {confidence:0.2f}"
    for _, _, confidence, class_id, _
    in detections
]

image = cv2.imread("image.jpeg")

annotated_frame = box_annotator.annotate(
    scene=image.copy(),
    detections=detections,
    labels=labels
)

sv.plot_image(image=annotated_frame, size=(16, 16))
```
### Label a Folder of Images
```python
import os

from autodistill.detection import CaptionOntology
from autodistill_azure_vision import AzureVision

# define an ontology to map prompts to class names for Azure Vision
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = AzureVision(
    ontology=CaptionOntology(
        {
            "animal": "animal",
            "a forklift": "forklift"
        }
    ),
    endpoint=os.environ["AZURE_VISION_ENDPOINT"],
    subscription_key=os.environ["AZURE_VISION_SUBSCRIPTION_KEY"]
)

# label every .jpeg image in the folder
base_model.label("./context_images", extension=".jpeg")
```
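After labeling completes, it can be useful to sanity-check the class balance of the generated annotations. The sketch below counts class occurrences in YOLO-format `.txt` label files; the `./dataset/train/labels` path is an assumption about where your labeled output landed, so adjust it to your folder layout.

```python
from collections import Counter
from pathlib import Path

def class_counts(labels_dir):
    """Count class-id occurrences across YOLO-format .txt label files.

    Each line of a YOLO label file is: <class_id> <cx> <cy> <w> <h>.
    """
    counts = Counter()
    for txt in Path(labels_dir).glob("*.txt"):
        for line in txt.read_text().splitlines():
            parts = line.split()
            if parts:
                counts[int(parts[0])] += 1
    return counts

# hypothetical output path: adjust to wherever your labels were written
labels_dir = Path("./dataset/train/labels")
if labels_dir.exists():
    print(class_counts(labels_dir))
```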
## License
This project is licensed under an [MIT license](LICENSE).
## 🏆 Contributing
We love your input! Please see the core Autodistill [contributing guide](https://github.com/autodistill/autodistill/blob/main/CONTRIBUTING.md) to get started. Thank you 🙏 to all our contributors!