# Azure Image Analysis client library for Python

The Image Analysis service provides AI algorithms for processing images and returning information about their content. In a single service call, you can extract one or more visual features from the image simultaneously, including getting a caption for the image, extracting text shown in the image (OCR) and detecting objects. For more information on the service and the supported visual features, see [Image Analysis overview](https://learn.microsoft.com/azure/ai-services/computer-vision/overview-image-analysis?tabs=4-0), and the [Concepts](https://learn.microsoft.com/azure/ai-services/computer-vision/concept-tag-images-40) page.

Use the Image Analysis client library to:
* Authenticate against the service
* Set what features you would like to extract
* Upload an image for analysis, or send an image URL
* Get the analysis result

[Product documentation](https://learn.microsoft.com/azure/ai-services/computer-vision/overview-image-analysis?tabs=4-0) 
| [Samples](https://aka.ms/azsdk/image-analysis/samples/python)
| [Vision Studio](https://aka.ms/vision-studio/image-analysis)
| [API reference documentation](https://aka.ms/azsdk/image-analysis/ref-docs/python)
| [Package (PyPI)](https://aka.ms/azsdk/image-analysis/package/pypi)
| [SDK source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/azure/ai/vision/imageanalysis)

## Getting started

### Prerequisites

* [Python 3.8](https://www.python.org/) or later installed, including [pip](https://pip.pypa.io/en/stable/).
* An [Azure subscription](https://azure.microsoft.com/free).
* A [Computer Vision resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) in your Azure subscription.
  * You will need the key and endpoint from this resource to authenticate against the service.
  * Note that in order to run Image Analysis with the `CAPTION` or `DENSE_CAPTIONS` features, the Azure resource needs to be from a GPU-supported region. See the note [here](https://learn.microsoft.com/azure/ai-services/computer-vision/concept-describe-images-40) for a list of supported regions.

### Install the Image Analysis package

```bash
pip install azure-ai-vision-imageanalysis
```

### Set environment variables

To authenticate the `ImageAnalysisClient`, you will need the endpoint and key from your Azure Computer Vision resource in the [Azure Portal](https://portal.azure.com). The code snippet below assumes these values are stored in environment variables:

* Set the environment variable `VISION_ENDPOINT` to the endpoint URL. It has the form `https://your-resource-name.cognitiveservices.azure.com`, where `your-resource-name` is your unique Azure Computer Vision resource name.

* Set the environment variable `VISION_KEY` to the key. The key is a 32-character hexadecimal number.

Note that the client library does not read these environment variables directly at run time; the endpoint and key must be passed to the constructor of `ImageAnalysisClient` in your code. The code snippet below reads them from the environment to promote the practice of not hard-coding secrets in your source code.
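
For example, in a bash shell (substitute your own resource values):

```bash
export VISION_ENDPOINT="https://your-resource-name.cognitiveservices.azure.com"
export VISION_KEY="<your-32-character-key>"
```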

### Create and authenticate the client

Once you define the environment variables, this Python code will create and authenticate a synchronous `ImageAnalysisClient`:

<!-- SNIPPET:sample_caption_image_file.create_client -->

```python
import os
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential

# Set the values of your computer vision endpoint and computer vision key
# as environment variables:
try:
    endpoint = os.environ["VISION_ENDPOINT"]
    key = os.environ["VISION_KEY"]
except KeyError:
    print("Missing environment variable 'VISION_ENDPOINT' or 'VISION_KEY'")
    print("Set them before running this sample.")
    exit()

# Create an Image Analysis client for synchronous operations
client = ImageAnalysisClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key)
)
```

<!-- END SNIPPET -->

A synchronous client supports synchronous analysis methods, which block until the service responds with analysis results. The code snippets below all use synchronous methods because they are simpler for a getting-started guide. The SDK offers equivalent asynchronous APIs, which are often preferred in production code. To create an asynchronous client, do the following (a minimal end-to-end sketch follows this list):

* Update the above code to import `ImageAnalysisClient` from the `aio` namespace:
    ```python
    from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient
    ```
* Install the additional package [aiohttp](https://pypi.org/project/aiohttp/):
    ```bash
    pip install aiohttp
    ```
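
Putting these together, here is a minimal asynchronous sketch. It assumes the same `VISION_ENDPOINT`/`VISION_KEY` environment variables as above, and that the `aio` client mirrors the synchronous surface (including `analyze_from_url`), as Azure SDK async clients typically do:

```python
import asyncio
import os

from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential


async def main():
    # Use 'async with' so the client's connections are closed deterministically
    async with ImageAnalysisClient(
        endpoint=os.environ["VISION_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["VISION_KEY"]),
    ) as client:
        result = await client.analyze_from_url(
            image_url="https://aka.ms/azsdk/image-analysis/sample.jpg",
            visual_features=[VisualFeatures.CAPTION],
        )
    if result.caption is not None:
        print(f"Caption: '{result.caption.text}'")


asyncio.run(main())
```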

## Key concepts

### Visual features

Once you've initialized an `ImageAnalysisClient`, you need to select one or more visual features to analyze. The options are specified by the enum class `VisualFeatures`. The following features are supported:

1. `VisualFeatures.CAPTION` ([Examples](#generate-an-image-caption-for-an-image-file) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples)): Generate a human-readable sentence that describes the content of an image.
1. `VisualFeatures.READ` ([Examples](#extract-text-from-an-image-file) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples)): Also known as Optical Character Recognition (OCR). Extract printed or handwritten text from images. **Note**: For extracting text from PDF, Office, and HTML documents and document images, use the Document Intelligence service with the [Read model](https://learn.microsoft.com/azure/ai-services/document-intelligence/concept-read?view=doc-intel-4.0.0). This model is optimized for text-heavy digital and scanned documents with an asynchronous REST API that makes it easy to power your intelligent document processing scenarios. This service is separate from the Image Analysis service and has its own SDK.
1. `VisualFeatures.DENSE_CAPTIONS` ([Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples)): Dense Captions provides more details by generating one-sentence captions for up to 10 different regions in the image, including one for the whole image. 
1. `VisualFeatures.TAGS` ([Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples)): Extract content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images.
1. `VisualFeatures.OBJECTS` ([Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples)): Object detection. This is similar to tagging, but focused on detecting physical objects in the image and returning their location.
1. `VisualFeatures.SMART_CROPS` ([Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples)): Used to find a representative sub-region of the image for thumbnail generation, with priority given to including faces.
1. `VisualFeatures.PEOPLE` ([Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples)): Detect people in the image and return their location.

For more information about these features, see [Image Analysis overview](https://learn.microsoft.com/azure/ai-services/computer-vision/overview-image-analysis?tabs=4-0), and the [Concepts](https://learn.microsoft.com/azure/ai-services/computer-vision/concept-tag-images-40) page.
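
You can request several of these features in a single `analyze` call by passing more than one `VisualFeatures` value. A minimal sketch, assuming the synchronous `client` created above and a local `sample.jpg`:

```python
# Load the image to analyze into a 'bytes' object
with open("sample.jpg", "rb") as f:
    image_data = f.read()

# One service call, two visual features
result = client.analyze(
    image_data=image_data,
    visual_features=[VisualFeatures.CAPTION, VisualFeatures.READ],
)

if result.caption is not None:
    print(f"Caption: '{result.caption.text}'")
if result.read is not None and result.read.blocks and result.read.blocks[0].lines:
    print(f"First text line: '{result.read.blocks[0].lines[0].text}'")
```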

### Analyze from image buffer or URL

The `ImageAnalysisClient` has two overloads for the method `analyze`:
* Analyze an image from an input [bytes](https://docs.python.org/3/library/stdtypes.html#bytes-objects) object. The client will upload the image to the service as part of the REST request.
* Analyze an image from a publicly-accessible URL. The client will send the image URL to the service. The service will fetch the image.

The examples below show how to do both. The `analyze` from an input `bytes` object examples populate the `bytes` object by loading an image from a file on disk.

### Supported image formats

Image Analysis works on images that meet the following requirements:
* The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
* The file size of the image must be less than 20 megabytes (MB)
* The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
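
As an illustration, a rough client-side pre-check of these limits could look like the sketch below. It uses [Pillow](https://pypi.org/project/pillow/), which is not a dependency of this SDK, and the service remains the authority on what it accepts (this sketch does not validate the format list above):

```python
import os
from PIL import Image  # third-party: pip install Pillow


def passes_size_limits(path: str) -> bool:
    """Rough local check of the Image Analysis size limits listed above."""
    # File size must be less than 20 megabytes
    if os.path.getsize(path) >= 20 * 1024 * 1024:
        return False
    with Image.open(path) as img:
        width, height = img.size
    # Dimensions must be greater than 50 x 50 and less than 16,000 x 16,000 pixels
    return 50 < width < 16000 and 50 < height < 16000
```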


## Examples

The following sections provide code snippets covering these common Image Analysis scenarios:

* [Generate an image caption for an image file](#generate-an-image-caption-for-an-image-file)
* [Generate an image caption for an image URL](#generate-an-image-caption-for-an-image-url)
* [Extract text (OCR) from an image file](#extract-text-from-an-image-file)
* [Extract text (OCR) from an image URL](#extract-text-from-an-image-url)

These snippets use the synchronous `client` from [Create and authenticate the client](#create-and-authenticate-the-client).

See the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples) folder for fully working samples for all visual features, including asynchronous clients.

### Generate an image caption for an image file

This example demonstrates how to generate a one-sentence caption for the image file `sample.jpg` using the `ImageAnalysisClient`. The synchronous (blocking) `analyze` method call returns an `ImageAnalysisResult` object with a `caption` property of type `CaptionResult`. It contains the generated caption and its confidence score in the range [0, 1]. By default, the caption may contain gender terms such as "man", "woman", "boy", or "girl". You have the option to request gender-neutral terms such as "person" or "child" by setting `gender_neutral_caption = True` when calling `analyze`.

Notes:
* Caption is only available in some Azure regions. See [Prerequisites](#prerequisites).
* Caption is only supported in English at the moment.

<!-- SNIPPET:sample_caption_image_file.caption -->

```python
# Load image to analyze into a 'bytes' object
with open("sample.jpg", "rb") as f:
    image_data = f.read()

# Get a caption for the image. This will be a synchronous (blocking) call.
result = client.analyze(
    image_data=image_data,
    visual_features=[VisualFeatures.CAPTION],
    gender_neutral_caption=True,  # Optional (default is False)
)

# Print caption results to the console
print("Image analysis results:")
print(" Caption:")
if result.caption is not None:
    print(f"   '{result.caption.text}', Confidence {result.caption.confidence:.4f}")
```

<!-- END SNIPPET -->

To generate captions for additional images, simply call `analyze` multiple times. You can reuse the same `ImageAnalysisClient` for multiple analysis calls.
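
For example, a simple loop that reuses one client over several local files (the file names here are hypothetical):

```python
for path in ["photo1.jpg", "photo2.jpg"]:
    with open(path, "rb") as f:
        image_data = f.read()
    result = client.analyze(
        image_data=image_data,
        visual_features=[VisualFeatures.CAPTION],
    )
    if result.caption is not None:
        print(f"{path}: '{result.caption.text}'")
```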

### Generate an image caption for an image URL

This example is similar to the one above, except it calls the `analyze_from_url` method and provides a [publicly accessible image URL](https://aka.ms/azsdk/image-analysis/sample.jpg) instead of image data loaded from a file.

<!-- SNIPPET:sample_caption_image_url.caption -->

```python
# Get a caption for the image. This will be a synchronous (blocking) call.
result = client.analyze_from_url(
    image_url="https://aka.ms/azsdk/image-analysis/sample.jpg",
    visual_features=[VisualFeatures.CAPTION],
    gender_neutral_caption=True,  # Optional (default is False)
)

# Print caption results to the console
print("Image analysis results:")
print(" Caption:")
if result.caption is not None:
    print(f"   '{result.caption.text}', Confidence {result.caption.confidence:.4f}")
```

<!-- END SNIPPET -->

### Extract text from an image file

This example demonstrates how to extract printed or handwritten text from the image file `sample.jpg` using the `ImageAnalysisClient`. The synchronous (blocking) `analyze` method call returns an `ImageAnalysisResult` object with a `read` property of type `ReadResult`. It includes a list of text lines and a bounding polygon surrounding each text line. For each line, it also returns a list of words in the text line and a bounding polygon surrounding each word.

<!-- SNIPPET:sample_ocr_image_file.read -->

```python
# Load image to analyze into a 'bytes' object
with open("sample.jpg", "rb") as f:
    image_data = f.read()

# Extract text (OCR) from an image buffer. This will be a synchronous (blocking) call.
result = client.analyze(
    image_data=image_data,
    visual_features=[VisualFeatures.READ]
)

# Print text (OCR) analysis results to the console
print("Image analysis results:")
print(" Read:")
if result.read is not None:
    for line in result.read.blocks[0].lines:
        print(f"   Line: '{line.text}', Bounding box {line.bounding_polygon}")
        for word in line.words:
            print(f"     Word: '{word.text}', Bounding polygon {word.bounding_polygon}, Confidence {word.confidence:.4f}")
```

<!-- END SNIPPET -->

To extract text from additional images, simply call `analyze` multiple times. You can reuse the same `ImageAnalysisClient` for multiple analysis calls.

**Note**: For extracting text from PDF, Office, and HTML documents and document images, use the Document Intelligence service with the [Read model](https://learn.microsoft.com/azure/ai-services/document-intelligence/concept-read?view=doc-intel-4.0.0). This model is optimized for text-heavy digital and scanned documents with an asynchronous REST API that makes it easy to power your intelligent document processing scenarios. This service is separate from the Image Analysis service and has its own SDK.

### Extract text from an image URL

This example is similar to the one above, except it calls the `analyze_from_url` method and provides a [publicly accessible image URL](https://aka.ms/azsdk/image-analysis/sample.jpg) instead of image data loaded from a file.

<!-- SNIPPET:sample_ocr_image_url.read -->

```python
# Extract text (OCR) from a publicly accessible image URL. This will be a synchronous (blocking) call.
result = client.analyze_from_url(
    image_url="https://aka.ms/azsdk/image-analysis/sample.jpg",
    visual_features=[VisualFeatures.READ]
)

# Print text (OCR) analysis results to the console
print("Image analysis results:")
print(" Read:")
if result.read is not None:
    for line in result.read.blocks[0].lines:
        print(f"   Line: '{line.text}', Bounding box {line.bounding_polygon}")
        for word in line.words:
            print(f"     Word: '{word.text}', Bounding polygon {word.bounding_polygon}, Confidence {word.confidence:.4f}")
```

<!-- END SNIPPET -->


## Troubleshooting

### Exceptions

The `analyze` methods raise an [HttpResponseError](https://learn.microsoft.com/python/api/azure-core/azure.core.exceptions.httpresponseerror) exception for a non-success HTTP status code response from the service. The exception's `status_code` will be the HTTP response status code. The exception's `error.message` contains a detailed message that will allow you to diagnose the issue:

```python
from azure.core.exceptions import HttpResponseError

try:
    result = client.analyze( ... )
except HttpResponseError as e:
    print(f"Status code: {e.status_code}")
    print(f"Reason: {e.reason}")
    print(f"Message: {e.error.message}")
```

For example, when you provide a wrong authentication key:
```
Status code: 401
Reason: PermissionDenied
Message: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
```

Or when you provide an image URL that does not exist or is not accessible:
```
Status code: 400
Reason: Bad Request
Message: The provided image url is not accessible.
```

### Logging

The client uses the standard [Python logging library](https://docs.python.org/3/library/logging.html). The SDK logs HTTP request and response details, which may be useful in troubleshooting. To log to stdout, add the following:

<!-- SNIPPET:sample_analyze_all_image_file.logging -->

```python
import sys
import logging

# Acquire the logger for this client library. Use 'azure' to affect both
# 'azure.core' and 'azure.ai.vision.imageanalysis' libraries.
logger = logging.getLogger("azure")

# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
logger.setLevel(logging.INFO)

# Direct logging output to stdout (the default):
handler = logging.StreamHandler(stream=sys.stdout)
# Or direct logging output to a file:
# handler = logging.FileHandler(filename = 'sample.log')
logger.addHandler(handler)

# Optional: change the default logging format. Here we add a timestamp.
formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s")
handler.setFormatter(formatter)
```

<!-- END SNIPPET -->

By default, logs redact the values of URL query strings, the values of some HTTP request and response headers (including `Ocp-Apim-Subscription-Key`, which holds the key), and the request and response payloads. To create logs without redaction, set the method argument `logging_enable = True` when you create the `ImageAnalysisClient`, or when you call `analyze` on the client.

<!-- SNIPPET:sample_analyze_all_image_file.create_client_with_logging -->

```python
# Create an Image Analysis client with non-redacted logging
client = ImageAnalysisClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
    logging_enable=True
)
```

<!-- END SNIPPET -->

Non-redacted logs are generated at log level `logging.DEBUG` only. Be sure to protect non-redacted logs to avoid compromising security. For more information, see [Configure logging in the Azure libraries for Python](https://aka.ms/azsdk/python/logging).


## Next steps

* Have a look at the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/vision/azure-ai-vision-imageanalysis/samples) folder, containing fully runnable Python code for Image Analysis (all visual features, synchronous and asynchronous clients, from image file or URL).

## Contributing

This project welcomes contributions and suggestions. Most contributions require
you to agree to a Contributor License Agreement (CLA) declaring that you have
the right to, and actually do, grant us the rights to use your contribution.
For details, visit [https://cla.microsoft.com](https://cla.microsoft.com).

When you submit a pull request, a CLA-bot will automatically determine whether
you need to provide a CLA and decorate the PR appropriately (e.g., label,
comment). Simply follow the instructions provided by the bot. You will only
need to do this once across all repos using our CLA.

This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct). For more information,
see the Code of Conduct FAQ or contact opencode@microsoft.com with any
additional questions or comments.


<!-- Note: I did not use LINKS section here with a list of `[link-label](link-url)` because these
links don't work in the Sphinx generated documentation. The index.html page of these docs
include this README, but with broken links.-->

            
