azure-ai-inference

Name: azure-ai-inference
Version: 1.0.0b6
Home page: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference
Summary: Microsoft Azure AI Inference Client Library for Python
Upload time: 2024-11-12 21:11:15
Author: Microsoft Corporation
Requires Python: >=3.8
License: MIT License
Keywords: azure, azure sdk

# Azure AI Inference client library for Python

Use the Inference client library (in preview) to:

* Authenticate against the service
* Get information about the AI model
* Do chat completions
* Get text embeddings
<!-- * Get image embeddings -->

The Inference client library supports AI models deployed to the following services:

* [GitHub Models](https://github.com/marketplace/models) - Free-tier endpoint for AI models from different providers
* Serverless API endpoints and Managed Compute endpoints - AI models from different providers deployed from [Azure AI Studio](https://ai.azure.com). See [Overview: Deploy models, flows, and web apps with Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview).
* Azure OpenAI Service - OpenAI models deployed from [Azure OpenAI Studio](https://oai.azure.com/). See [What is Azure OpenAI Service?](https://learn.microsoft.com/azure/ai-services/openai/overview). Although we recommend you use the official [OpenAI client library](https://pypi.org/project/openai/) in your production code for this service, you can use the Azure AI Inference client library to easily compare the performance of OpenAI models to other models, using the same client library and Python code.

The Inference client library makes service calls using REST API version `2024-05-01-preview`, as documented in [Azure AI Model Inference API](https://aka.ms/azureai/modelinference).

[Product documentation](https://aka.ms/azureai/modelinference)
| [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples)
| [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference)
| [Package (PyPI)](https://aka.ms/azsdk/azure-ai-inference/python/package)
| [SDK source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/azure/ai/inference)

## Getting started

### Prerequisites

* [Python 3.8](https://www.python.org/) or later installed, including [pip](https://pip.pypa.io/en/stable/).
* For GitHub Models
  * The AI model name, such as "gpt-4o" or "mistral-large"
  * A GitHub personal access token. [Create one here](https://github.com/settings/tokens). You do not need to give any permissions to the token. The token is a string that starts with `github_pat_`.
* For Serverless API endpoints or Managed Compute endpoints
  * An [Azure subscription](https://azure.microsoft.com/free).
  * An [AI Model from the catalog](https://ai.azure.com/explore/models) deployed through Azure AI Studio.
  * The endpoint URL of your model, in the form `https://<your-host-name>.<your-azure-region>.models.ai.azure.com`, where `your-host-name` is your unique model deployment host name and `your-azure-region` is the Azure region where the model is deployed (e.g. `eastus2`).
  * Depending on your authentication preference, you either need an API key to authenticate against the service, or Entra ID credentials. The API key is a 32-character string.
* For Azure OpenAI (AOAI) service
  * An [Azure subscription](https://azure.microsoft.com/free).
  * An [OpenAI Model from the catalog](https://oai.azure.com/resource/models) deployed through Azure OpenAI Studio.
  * The endpoint URL of your model, in the form `https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>`, where `your-resource-name` is your globally unique AOAI resource name, and `your-deployment-name` is your AI Model deployment name.
  * Depending on your authentication preference, you either need an API key to authenticate against the service, or Entra ID credentials. The API key is a 32-character string.
  * An api-version: the latest preview or GA version listed in the `Data plane - inference` row of [the API Specs table](https://aka.ms/azsdk/azure-ai-inference/azure-openai-api-versions). At the time of writing, the latest GA version was "2024-06-01".

### Install the package

To install the Azure AI Inference package, use the following command:

```bash
pip install azure-ai-inference
```

To update an existing installation of the package, use:

```bash
pip install --upgrade azure-ai-inference
```

If you want to install the Azure AI Inference package with support for OpenTelemetry-based tracing, use the following command:

```bash
pip install azure-ai-inference[opentelemetry]
```

## Key concepts

### Create and authenticate a client directly, using API key or GitHub token

The package includes two clients, `ChatCompletionsClient` and `EmbeddingsClient`<!-- and `ImageGenerationClients`-->. Both can be created in a similar manner. For example, assuming `endpoint`, `key` and `github_token` are strings holding your endpoint URL, API key, or GitHub token, this Python code will create and authenticate a synchronous `ChatCompletionsClient`:

```python
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# For GitHub models
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(github_token),
    model="mistral-large" # Update as needed. Alternatively, you can include this is the `complete` call.
)

# For Serverless API or Managed Compute endpoints
client = ChatCompletionsClient(
    endpoint=endpoint,  # Of the form https://<your-host-name>.<your-azure-region>.models.ai.azure.com
    credential=AzureKeyCredential(key)
)

# For Azure OpenAI endpoint
client = ChatCompletionsClient(
    endpoint=endpoint,  # Of the form https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>
    credential=AzureKeyCredential(key),
    api_version="2024-06-01",  # Azure OpenAI api-version. See https://aka.ms/azsdk/azure-ai-inference/azure-openai-api-versions
)
```
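
The code snippets in this README assume `endpoint`, `key` and `github_token` are already defined. One common approach, used by the package samples, is to read them from environment variables. A minimal sketch — the variable names below are illustrative, not mandated by the library:

```python
import os

# Hypothetical environment variable names; pick whatever fits your deployment.
endpoint = os.environ["AZURE_AI_CHAT_ENDPOINT"]
key = os.environ["AZURE_AI_CHAT_KEY"]
github_token = os.environ["GITHUB_TOKEN"]
```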

A synchronous client supports synchronous inference methods, meaning they will block until the service responds with inference results. For simplicity, the code snippets below all use synchronous methods. The client offers equivalent asynchronous methods, which are more commonly used in production.

To create an asynchronous client, install the additional package [aiohttp](https://pypi.org/project/aiohttp/):

```bash
pip install aiohttp
```

and update the code above to import `asyncio`, and import `ChatCompletionsClient` from the `azure.ai.inference.aio` namespace instead of `azure.ai.inference`. For example:

```python
import asyncio
from azure.ai.inference.aio import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# For Serverless API or Managed Compute endpoints
client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key)
)
```
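
Once created, the asynchronous client is used with `await`. A minimal sketch, assuming `endpoint` and `key` are defined as above (the client also supports `async with` for deterministic cleanup):

```python
import asyncio
from azure.ai.inference.aio import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

async def main():
    # "async with" closes the underlying HTTP session when the block exits.
    async with ChatCompletionsClient(
        endpoint=endpoint,
        credential=AzureKeyCredential(key),
    ) as client:
        response = await client.complete(
            messages=[UserMessage(content="How many feet are in a mile?")]
        )
        print(response.choices[0].message.content)

asyncio.run(main())
```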

### Create and authenticate a client directly, using Entra ID

_Note: At the time of writing, only Managed Compute endpoints and Azure OpenAI endpoints support Entra ID authentication._

To use an Entra ID token credential, first install the [azure-identity](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity) package:

```bash
pip install azure-identity
```

You will need to provide the desired credential type obtained from that package. A common selection is [DefaultAzureCredential](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#defaultazurecredential), which can be used as follows:

```python
from azure.ai.inference import ChatCompletionsClient
from azure.identity import DefaultAzureCredential

# For Managed Compute endpoints
client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False)
)

# For Azure OpenAI endpoint
client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False),
    credential_scopes=["https://cognitiveservices.azure.com/.default"],
    api_version="2024-06-01",  # Azure OpenAI api-version. See https://aka.ms/azsdk/azure-ai-inference/azure-openai-api-versions
)
```

During application development, you would typically set up the environment for authentication using Entra ID by first [installing the Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli), then running `az login` in your console window and entering your credentials in the browser window that opens. The call to `DefaultAzureCredential()` will then succeed. Setting `exclude_interactive_browser_credential=False` in that call enables launching a browser window if the user isn't already logged in.

### Defining default settings while creating the clients

You can define default chat completions or embeddings configurations while constructing the relevant client. These configurations will be applied to all future service calls.

For example, here we create a `ChatCompletionsClient` using API key authentication, and apply two settings, `temperature` and `max_tokens`:

```python
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# For Serverless API or Managed Compute endpoints
client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
    temperature=0.5,
    max_tokens=1000
)
```

Default settings can be overridden in individual service calls.
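
For example, a per-call `temperature` overrides the client-level default set above. A minimal sketch, reusing the `client` from the previous snippet:

```python
from azure.ai.inference.models import UserMessage

response = client.complete(
    messages=[UserMessage(content="How many feet are in a mile?")],
    temperature=0.9,  # Overrides the client-level default of 0.5 for this call only
)
```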

### Create and authenticate clients using `load_client`

If you are using Serverless API or Managed Compute endpoints, there is an alternative to creating a specific client directly. You can instead use the function `load_client` to return the relevant client (of type `ChatCompletionsClient` or `EmbeddingsClient`) based on the provided endpoint:

```python
from azure.ai.inference import load_client
from azure.core.credentials import AzureKeyCredential

# For Serverless API or Managed Compute endpoints only.
# This will not work on GitHub Models endpoint or Azure OpenAI endpoint.
client = load_client(
    endpoint=endpoint,
    credential=AzureKeyCredential(key)
)

print(f"Created client of type `{type(client).__name__}`.")
```

To load an asynchronous client, import the `load_client` function from `azure.ai.inference.aio` instead.

Entra ID authentication is also supported by the `load_client` function. For example, replace the key authentication above with `credential=DefaultAzureCredential(exclude_interactive_browser_credential=False)`, as in the sketch below.
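
A minimal sketch combining `load_client` with Entra ID authentication, including a type check on the returned client:

```python
from azure.ai.inference import load_client, ChatCompletionsClient
from azure.identity import DefaultAzureCredential

client = load_client(
    endpoint=endpoint,
    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False)
)

# `load_client` picks the client type based on the model information the endpoint reports.
if isinstance(client, ChatCompletionsClient):
    print("This endpoint serves chat completions.")
```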

### Get AI model information

If you are using Serverless API or Managed Compute endpoints, you can call the client method `get_model_info` to retrieve AI model information. This makes a REST call to the `/info` route on the provided endpoint, as documented in [the REST API reference](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-info). This call will not work for GitHub Models or Azure OpenAI endpoints.

<!-- SNIPPET:sample_get_model_info.get_model_info -->

```python
model_info = client.get_model_info()

print(f"Model name: {model_info.model_name}")
print(f"Model provider name: {model_info.model_provider_name}")
print(f"Model type: {model_info.model_type}")
```

<!-- END SNIPPET -->

AI model information is cached in the client, and further calls to `get_model_info` will access the cached value and will not result in a REST API call. Note that if you created the client using the `load_client` function, model information will already be cached in the client.

AI model information is displayed (if available) when you `print(client)`.

### Chat Completions

The `ChatCompletionsClient` has a method named `complete`. The method makes a REST API call to the `/chat/completions` route on the provided endpoint, as documented in [the REST API reference](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-chat-completions).

See simple chat completion examples below. More can be found in the [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder.

### Text Embeddings

The `EmbeddingsClient` has a method named `embed`. The method makes a REST API call to the `/embeddings` route on the provided endpoint, as documented in [the REST API reference](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-embeddings).

See simple text embedding example below. More can be found in the [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder.

<!--
### Image Embeddings

TODO: Add overview and link to explain image embeddings.

Embeddings operations target the URL route `images/embeddings` on the provided endpoint.
-->

## Examples

In the following sections you will find simple examples of:

* [Chat completions](#chat-completions-example)
* [Streaming chat completions](#streaming-chat-completions-example)
* [Chat completions with additional model-specific parameters](#chat-completions-with-additional-model-specific-parameters)
* [Text Embeddings](#text-embeddings-example)
<!-- * [Image Embeddings](#image-embeddings-example) -->

The examples create a synchronous client assuming a Serverless API or Managed Compute endpoint. Modify the client
construction code as described in [Key concepts](#key-concepts) to have it work with a GitHub Models endpoint or an Azure OpenAI
endpoint. Only mandatory input settings are shown for simplicity.

See the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder for full working samples for synchronous and asynchronous clients.

### Chat completions example

This example demonstrates how to generate a single chat completion, for a Serverless API or Managed Compute endpoint, with key authentication, assuming `endpoint` and `key` are already defined. For Entra ID authentication, a GitHub Models endpoint, or an Azure OpenAI endpoint, modify the code to create the client as specified in the sections above.

<!-- SNIPPET:sample_chat_completions.chat_completions -->

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many feet are in a mile?"),
    ]
)

print(response.choices[0].message.content)
```

<!-- END SNIPPET -->

The following types of messages are supported: `SystemMessage`, `UserMessage`, `AssistantMessage`, and `ToolMessage`. See also samples:

* [sample_chat_completions_with_tools.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_tools.py) for usage of `ToolMessage`.
* [sample_chat_completions_with_image_url.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_image_url.py) for usage of `UserMessage` that
includes sending an image URL.
* [sample_chat_completions_with_image_data.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_image_data.py) for usage of `UserMessage` that
includes sending image data read from a local file.

Alternatively, you can provide the messages as a dictionary instead of using the strongly typed classes like `SystemMessage` and `UserMessage`:

<!-- SNIPPET:sample_chat_completions_from_input_json.chat_completions -->

```python
response = client.complete(
    {
        "messages": [
            {
                "role": "system",
                "content": "You are an AI assistant that helps people find information. Your replies are short, no more than two sentences.",
            },
            {
                "role": "user",
                "content": "What year was construction of the International Space Station mostly done?",
            },
            {
                "role": "assistant",
                "content": "The main construction of the International Space Station (ISS) was completed between 1998 and 2011. During this period, more than 30 flights by US space shuttles and 40 by Russian rockets were conducted to transport components and modules to the station.",
            },
            {"role": "user", "content": "And what was the estimated cost to build it?"},
        ]
    }
)
```

<!-- END SNIPPET -->

To generate completions for additional messages, simply call `client.complete` multiple times using the same `client`.
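
For example, a minimal multi-turn sketch that feeds the model's previous answer back into the conversation as an `AssistantMessage`:

```python
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    UserMessage(content="How many feet are in a mile?"),
]

response = client.complete(messages=messages)
print(response.choices[0].message.content)

# Append the model's reply and a follow-up question, then call `complete` again.
messages.append(AssistantMessage(content=response.choices[0].message.content))
messages.append(UserMessage(content="And how many inches is that?"))

response = client.complete(messages=messages)
print(response.choices[0].message.content)
```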

### Streaming chat completions example

This example demonstrates how to generate a single chat completion with a streaming response, for a Serverless API or Managed Compute endpoint, with key authentication, assuming `endpoint` and `key` are already defined. You simply need to add `stream=True` to the `complete` call to enable streaming.

For Entra ID authentication, a GitHub Models endpoint, or an Azure OpenAI endpoint, modify the code to create the client as specified in the sections above.

<!-- SNIPPET:sample_chat_completions_streaming.chat_completions_streaming -->

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

response = client.complete(
    stream=True,
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Give me 5 good reasons why I should exercise every day."),
    ],
)

for update in response:
    print(update.choices[0].delta.content or "", end="", flush=True)

client.close()
```

<!-- END SNIPPET -->

In the above `for` loop that prints the results, you should see the answer progressively get longer as updates get streamed to the client.

To generate completions for additional messages, simply call `client.complete` multiple times using the same `client`.
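
With the asynchronous client from `azure.ai.inference.aio`, the same pattern becomes an `async for` loop. A minimal sketch, assuming `endpoint` and `key` are already defined:

```python
import asyncio
from azure.ai.inference.aio import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

async def main():
    async with ChatCompletionsClient(
        endpoint=endpoint, credential=AzureKeyCredential(key)
    ) as client:
        response = await client.complete(
            stream=True,
            messages=[
                SystemMessage(content="You are a helpful assistant."),
                UserMessage(content="Give me 5 good reasons why I should exercise every day."),
            ],
        )
        # The streaming response is consumed as an asynchronous iterator of updates.
        async for update in response:
            print(update.choices[0].delta.content or "", end="", flush=True)

asyncio.run(main())
```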

### Chat completions with additional model-specific parameters

In this example, extra JSON elements are inserted at the root of the request body by setting `model_extras` when calling the `complete` method. These are intended for AI models that require additional model-specific parameters beyond what is defined in the REST API [Request Body table](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-chat-completions#request-body).

<!-- SNIPPET:sample_chat_completions_with_model_extras.model_extras -->

```python
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many feet are in a mile?"),
    ],
    model_extras={"key1": "value1", "key2": "value2"},  # Optional. Additional parameters to pass to the model.
)
```

<!-- END SNIPPET -->

In the above example, this will be the JSON payload in the HTTP request:

```json
{
    "messages":
    [
        {"role":"system","content":"You are a helpful assistant."},
        {"role":"user","content":"How many feet are in a mile?"}
    ],
    "key1": "value1",
    "key2": "value2"
}
```

Note that, by default, the service will reject any request payload that includes extra parameters. To change this default service behaviour, when the `complete` method includes `model_extras` the client library automatically adds the HTTP request header `"extra-parameters": "pass-through"`.

### Text Embeddings example

This example demonstrates how to get text embeddings, for a Serverless API or Managed Compute endpoint, with key authentication, assuming `endpoint` and `key` are already defined. For Entra ID authentication, a GitHub Models endpoint, or an Azure OpenAI endpoint, modify the code to create the client as specified in the sections above.

<!-- SNIPPET:sample_embeddings.embeddings -->

```python
from azure.ai.inference import EmbeddingsClient
from azure.core.credentials import AzureKeyCredential

client = EmbeddingsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

response = client.embed(input=["first phrase", "second phrase", "third phrase"])

for item in response.data:
    length = len(item.embedding)
    print(
        f"data[{item.index}]: length={length}, [{item.embedding[0]}, {item.embedding[1]}, "
        f"..., {item.embedding[length-2]}, {item.embedding[length-1]}]"
    )
```

<!-- END SNIPPET -->

The length of the embedding vector depends on the model, but you should see something like this:

```text
data[0]: length=1024, [0.0013399124, -0.01576233, ..., 0.007843018, 0.000238657]
data[1]: length=1024, [0.036590576, -0.0059547424, ..., 0.011405945, 0.004863739]
data[2]: length=1024, [0.04196167, 0.029083252, ..., -0.0027484894, 0.0073127747]
```

To generate embeddings for additional phrases, simply call `client.embed` multiple times using the same `client`.
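
The embedding vectors are plain lists of floats, so you can post-process them with standard Python. For example, a small sketch (not part of the client library) computing cosine similarity between the first two embeddings of the response above:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(response.data[0].embedding, response.data[1].embedding))
```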

<!--
### Image Embeddings example

This example demonstrates how to get image embeddings.

 <! -- SNIPPET:sample_image_embeddings.image_embeddings -- >

```python
from azure.ai.inference import ImageEmbeddingsClient
from azure.ai.inference.models import EmbeddingInput
from azure.core.credentials import AzureKeyCredential

with open("sample1.png", "rb") as f:
    image1: str = base64.b64encode(f.read()).decode("utf-8")
with open("sample2.png", "rb") as f:
    image2: str = base64.b64encode(f.read()).decode("utf-8")

client = ImageEmbeddingsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

response = client.embed(input=[EmbeddingInput(image=image1), EmbeddingInput(image=image2)])

for item in response.data:
    length = len(item.embedding)
    print(
        f"data[{item.index}]: length={length}, [{item.embedding[0]}, {item.embedding[1]}, "
        f"..., {item.embedding[length-2]}, {item.embedding[length-1]}]"
    )
```

-- END SNIPPET --

The printed result of course depends on the model, but you should see something like this:

```txt
TBD
```

To generate embeddings for additional phrases, simply call `client.embed` multiple times using the same `client`.
-->

## Troubleshooting

### Exceptions

The `complete`, `embed` and `get_model_info` methods on the clients raise an [HttpResponseError](https://learn.microsoft.com/python/api/azure-core/azure.core.exceptions.httpresponseerror) exception for a non-success HTTP status code response from the service. The exception's `status_code` will hold the HTTP response status code (with `reason` showing the friendly name). The exception's `error.message` contains a detailed message that may be helpful in diagnosing the issue:

```python
from azure.core.exceptions import HttpResponseError

...

try:
    result = client.complete( ... )
except HttpResponseError as e:
    print(f"Status code: {e.status_code} ({e.reason})")
    print(e.message)
```

For example, when you provide an incorrect authentication key:

```text
Status code: 401 (Unauthorized)
Operation returned an invalid status 'Unauthorized'
```

Or when you create an `EmbeddingsClient` and call `embed` on the client, but the endpoint does not
support the `/embeddings` route:

```text
Status code: 405 (Method Not Allowed)
Operation returned an invalid status 'Method Not Allowed'
```

### Logging

The client uses the standard [Python logging library](https://docs.python.org/3/library/logging.html). The SDK logs HTTP request and response details, which may be useful in troubleshooting. To log to stdout, add the following:

```python
import sys
import logging

# Acquire the logger for this client library. Use 'azure' to affect both
# `azure.core` and `azure.ai.inference` libraries.
logger = logging.getLogger("azure")

# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
logger.setLevel(logging.DEBUG)

# Direct logging output to stdout:
handler = logging.StreamHandler(stream=sys.stdout)
# Or direct logging output to a file:
# handler = logging.FileHandler(filename="sample.log")
logger.addHandler(handler)

# Optional: change the default logging format. Here we add a timestamp.
formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s")
handler.setFormatter(formatter)
```

By default logs redact the values of URL query strings, the values of some HTTP request and response headers (including `Authorization` which holds the key or token), and the request and response payloads. To create logs without redaction, do these two things:

1. Set the method argument `logging_enable = True` when you construct the client, or when you call the client's `complete` or `embed` methods.
    ```python
    client = ChatCompletionsClient(
        endpoint=endpoint,
        credential=AzureKeyCredential(key),
        logging_enable=True
    )
    ```
1. Set the log level to `logging.DEBUG`. Logs will be redacted with any other log level.

Be sure to protect non-redacted logs to avoid compromising security.

For more information, see [Configure logging in the Azure libraries for Python](https://aka.ms/azsdk/python/logging).

### Reporting issues

To report issues with the client library, or to request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues).

## Observability with OpenTelemetry

The Azure AI Inference client library provides experimental support for tracing with OpenTelemetry.

You can capture prompt and completion contents by setting the `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` environment variable to `true` (case insensitive).
By default, prompts, completions, function names, parameters, and outputs are not recorded.
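
For example, to turn on content recording from within your script, set the variable before making any inference calls (a sketch; you can equally set it in your shell):

```python
import os

# Enables recording of prompt and completion contents in traces.
os.environ["AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED"] = "true"
```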

### Setup with Azure Monitor

When using the Azure AI Inference library with the [Azure Monitor OpenTelemetry Distro](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python),
distributed tracing for Azure AI Inference calls is enabled by default in the latest version of the distro.
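
A minimal sketch of wiring up the distro, assuming the `azure-monitor-opentelemetry` package is installed and an Application Insights connection string is available:

```python
from azure.monitor.opentelemetry import configure_azure_monitor

# By default this reads APPLICATIONINSIGHTS_CONNECTION_STRING from the environment;
# a connection_string argument can be passed explicitly instead.
configure_azure_monitor()
```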

### Setup with OpenTelemetry

Check out your observability vendor documentation on how to configure OpenTelemetry or refer to the [official OpenTelemetry documentation](https://opentelemetry.io/docs/languages/python/).

#### Installation

Make sure to install OpenTelemetry and the Azure SDK tracing plugin:

```bash
pip install opentelemetry
pip install azure-core-tracing-opentelemetry
```

You will also need an exporter to send telemetry to your observability backend. You can print traces to the console or use a local viewer such as [Aspire Dashboard](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash).

To connect to Aspire Dashboard or another OpenTelemetry-compatible backend, install the OTLP exporter:

```bash
pip install opentelemetry-exporter-otlp
```
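
A sketch of configuring a tracer provider with the OTLP exporter, assuming a collector such as Aspire Dashboard is listening on the default gRPC endpoint (`localhost:4317`):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Export spans in batches to the local OTLP gRPC endpoint.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)
```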

#### Configuration

To enable Azure SDK tracing, set the `AZURE_SDK_TRACING_IMPLEMENTATION` environment variable to `opentelemetry`.

Alternatively, configure it in code with the following snippet:

<!-- SNIPPET:sample_chat_completions_with_tracing.trace_setting -->

```python
from azure.core.settings import settings

settings.tracing_implementation = "opentelemetry"
```

<!-- END SNIPPET -->

Please refer to the [azure-core tracing documentation](https://learn.microsoft.com/python/api/overview/azure/core-tracing-opentelemetry-readme) for more information.

The final step is to enable Azure AI Inference instrumentation with the following code snippet:

<!-- SNIPPET:sample_chat_completions_with_tracing.instrument_inferencing -->

```python
from azure.ai.inference.tracing import AIInferenceInstrumentor

# Instrument AI Inference API
AIInferenceInstrumentor().instrument()
```

<!-- END SNIPPET -->


It is also possible to uninstrument the Azure AI Inference API with the `uninstrument` call. After this call, traces will no longer be emitted by the Azure AI Inference API until `instrument` is called again.

<!-- SNIPPET:sample_chat_completions_with_tracing.uninstrument_inferencing -->

```python
AIInferenceInstrumentor().uninstrument()
```

<!-- END SNIPPET -->

### Tracing Your Own Functions

The `@tracer.start_as_current_span` decorator can be used to trace your own functions. This will trace the function parameters and their values. You can also add further attributes to the span in the function implementation, as demonstrated below. Note that you will have to set up the tracer in your code before using the decorator. More information is available [here](https://opentelemetry.io/docs/languages/python/).

<!-- SNIPPET:sample_chat_completions_with_tracing.trace_function -->

```python
from opentelemetry import trace
from opentelemetry.trace import get_tracer

tracer = get_tracer(__name__)


# The tracer.start_as_current_span decorator will trace the function call and enable adding additional attributes
# to the span in the function implementation. Note that this will trace the function parameters and their values.
@tracer.start_as_current_span("get_temperature")  # type: ignore
def get_temperature(city: str) -> str:

    # Adding attributes to the current span
    span = trace.get_current_span()
    span.set_attribute("requested_city", city)

    if city == "Seattle":
        return "75"
    elif city == "New York City":
        return "80"
    else:
        return "Unavailable"
```

<!-- END SNIPPET -->

## Next steps

* Have a look at the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder, containing fully runnable Python code for doing inference using synchronous and asynchronous clients.

## Contributing

This project welcomes contributions and suggestions. Most contributions require
you to agree to a Contributor License Agreement (CLA) declaring that you have
the right to, and actually do, grant us the rights to use your contribution.
For details, visit [https://cla.microsoft.com](https://cla.microsoft.com).

When you submit a pull request, a CLA-bot will automatically determine whether
you need to provide a CLA and decorate the PR appropriately (e.g., label,
comment). Simply follow the instructions provided by the bot. You will only
need to do this once across all repos using our CLA.

This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct). For more information,
see the Code of Conduct FAQ or contact opencode@microsoft.com with any
additional questions or comments.


<!-- Note: I did not use LINKS section here with a list of `[link-label](link-url)` because these
links don't work in the Sphinx generated documentation. The index.html page of these docs
include this README, but with broken links.-->

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference",
    "name": "azure-ai-inference",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "azure, azure sdk",
    "author": "Microsoft Corporation",
    "author_email": "azpysdkhelp@microsoft.com",
    "download_url": "https://files.pythonhosted.org/packages/2a/c9/264ae0ef0460dbd7c7efe1d3a093ad6a00fb2823d341ac457459396df2d6/azure_ai_inference-1.0.0b6.tar.gz",
    "platform": null,
    "description": "# Azure AI Inference client library for Python\n\nUse the Inference client library (in preview) to:\n\n* Authenticate against the service\n* Get information about the AI model\n* Do chat completions\n* Get text embeddings\n<!-- * Get image embeddings -->\n\nThe Inference client library supports AI models deployed to the following services:\n\n* [GitHub Models](https://github.com/marketplace/models) - Free-tier endpoint for AI models from different providers\n* Serverless API endpoints and Managed Compute endpoints - AI models from different providers deployed from [Azure AI Studio](https://ai.azure.com). See [Overview: Deploy models, flows, and web apps with Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview).\n* Azure OpenAI Service - OpenAI models deployed from [Azure OpenAI Studio](https://oai.azure.com/). See [What is Azure OpenAI Service?](https://learn.microsoft.com/azure/ai-services/openai/overview). Although we recommend you use the official [OpenAI client library](https://pypi.org/project/openai/) in your production code for this service, you can use the Azure AI Inference client library to easily compare the performance of OpenAI models to other models, using the same client library and Python code.\n\nThe Inference client library makes services calls using REST API version `2024-05-01-preview`, as documented in [Azure AI Model Inference API](https://aka.ms/azureai/modelinference).\n\n[Product documentation](https://aka.ms/azureai/modelinference)\n| [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples)\n| [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference)\n| [Package (Pypi)](https://aka.ms/azsdk/azure-ai-inference/python/package)\n| [SDK source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/azure/ai/inference)\n\n## Getting started\n\n### Prerequisites\n\n* [Python 3.8](https://www.python.org/) or later installed, including [pip](https://pip.pypa.io/en/stable/).\nStudio.\n* For GitHub models\n  * The AI model name, such as \"gpt-4o\" or \"mistral-large\"\n  * A GitHub personal access token. [Create one here](https://github.com/settings/tokens). You do not need to give any permissions to the token. The token is a string that starts with `github_pat_`.\n* For Serverless API endpoints or Managed Compute endpoints\n  * An [Azure subscription](https://azure.microsoft.com/free).\n  * An [AI Model from the catalog](https://ai.azure.com/explore/models) deployed through Azure AI Studio.\n  * The endpoint URL of your model, in of the form `https://<your-host-name>.<your-azure-region>.models.ai.azure.com`, where `your-host-name` is your unique model deployment host name and `your-azure-region` is the Azure region where the model is deployed (e.g. `eastus2`).\n  * Depending on your authentication preference, you either need an API key to authenticate against the service, or Entra ID credentials. 
The API key is a 32-character string.\n* For Azure OpenAI (AOAI) service\n  * An [Azure subscription](https://azure.microsoft.com/free).\n  * An [OpenAI Model from the catalog](https://oai.azure.com/resource/models) deployed through Azure OpenAI Studio.\n  * The endpoint URL of your model, in the form `https://<your-resouce-name>.openai.azure.com/openai/deployments/<your-deployment-name>`, where `your-resource-name` is your globally unique AOAI resource name, and `your-deployment-name` is your AI Model deployment name.\n  * Depending on your authentication preference, you either need an API key to authenticate against the service, or Entra ID credentials. The API key is a 32-character string.\n  * An api-version. Latest preview or GA version listed in the `Data plane - inference` row in [the API Specs table](https://aka.ms/azsdk/azure-ai-inference/azure-openai-api-versions). At the time of writing, latest GA version was \"2024-06-01\".\n\n### Install the package\n\nTo install the Azure AI Inferencing package use the following command:\n\n```bash\npip install azure-ai-inference\n```\n\nTo update an existing installation of the package, use:\n\n```bash\npip install --upgrade azure-ai-inference\n```\n\nIf you want to install Azure AI Inferencing package with support for OpenTelemetry based tracing, use the following command:\n\n```bash\npip install azure-ai-inference[opentelemetry]\n```\n\n## Key concepts\n\n### Create and authenticate a client directly, using API key or GitHub token\n\nThe package includes two clients `ChatCompletionsClient` and `EmbeddingsClient`<!-- and `ImageGenerationClients`-->. Both can be created in the similar manner. For example, assuming `endpoint`, `key` and `github_token` are strings holding your endpoint URL, API key or GitHub token, this Python code will create and authenticate a synchronous `ChatCompletionsClient`:\n\n```python\nfrom azure.ai.inference import ChatCompletionsClient\nfrom azure.core.credentials import AzureKeyCredential\n\n# For GitHub models\nclient = ChatCompletionsClient(\n    endpoint=\"https://models.inference.ai.azure.com\",\n    credential=AzureKeyCredential(github_token),\n    model=\"mistral-large\" # Update as needed. Alternatively, you can include this is the `complete` call.\n)\n\n# For Serverless API or Managed Compute endpoints\nclient = ChatCompletionsClient(\n    endpoint=endpoint,  # Of the form https://<your-host-name>.<your-azure-region>.models.ai.azure.com\n    credential=AzureKeyCredential(key)\n)\n\n# For Azure OpenAI endpoint\nclient = ChatCompletionsClient(\n    endpoint=endpoint,  # Of the form https://<your-resouce-name>.openai.azure.com/openai/deployments/<your-deployment-name>\n    credential=AzureKeyCredential(key),\n    api_version=\"2024-06-01\",  # Azure OpenAI api-version. See https://aka.ms/azsdk/azure-ai-inference/azure-openai-api-versions\n)\n```\n\nA synchronous client supports synchronous inference methods, meaning they will block until the service responds with inference results. For simplicity the code snippets below all use synchronous methods. The client offers equivalent asynchronous methods which are more commonly used in production.\n\nTo create an asynchronous client, Install the additional package [aiohttp](https://pypi.org/project/aiohttp/):\n\n```bash\npip install aiohttp\n```\n\nand update the code above to import `asyncio`, and import `ChatCompletionsClient` from the `azure.ai.inference.aio` namespace instead of `azure.ai.inference`. 
For example:\n\n```python\nimport asyncio\nfrom azure.ai.inference.aio import ChatCompletionsClient\nfrom azure.core.credentials import AzureKeyCredential\n\n# For Serverless API or Managed Compute endpoints\nclient = ChatCompletionsClient(\n    endpoint=endpoint,\n    credential=AzureKeyCredential(key)\n)\n```\n\n### Create and authenticate a client directly, using Entra ID\n\n_Note: At the time of writing, only Managed Compute endpoints and Azure OpenAI endpoints support Entra ID authentication.\n\nTo use an Entra ID token credential, first install the [azure-identity](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity) package:\n\n```python\npip install azure.identity\n```\n\nYou will need to provide the desired credential type obtained from that package. A common selection is [DefaultAzureCredential](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#defaultazurecredential) and it can be used as follows:\n\n```python\nfrom azure.ai.inference import ChatCompletionsClient\nfrom azure.identity import DefaultAzureCredential\n\n# For Managed Compute endpoints\nclient = ChatCompletionsClient(\n    endpoint=endpoint,\n    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False)\n)\n\n# For Azure OpenAI endpoint\nclient = ChatCompletionsClient(\n    endpoint=endpoint,\n    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False),\n    credential_scopes=[\"https://cognitiveservices.azure.com/.default\"],\n    api_version=\"2024-06-01\",  # Azure OpenAI api-version. See https://aka.ms/azsdk/azure-ai-inference/azure-openai-api-versions\n)\n```\n\nDuring application development, you would typically set up the environment for authentication using Entra ID by first [Installing the Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli), running `az login` in your console window, then entering your credentials in the browser window that was opened. The call to `DefaultAzureCredential()` will then succeed. Setting `exclude_interactive_browser_credential=False` in that call will enable launching a browser window if the user isn't already logged in.\n\n### Defining default settings while creating the clients\n\nYou can define default chat completions or embeddings configurations while constructing the relevant client. These configurations will be applied to all future service calls.\n\nFor example, here we create a `ChatCompletionsClient` using API key authentication, and apply two settings, `temperature` and `max_tokens`:\n\n```python\nfrom azure.ai.inference import ChatCompletionsClient\nfrom azure.core.credentials import AzureKeyCredential\n\n# For Serverless API or Managed Compute endpoints\nclient = ChatCompletionsClient(\n    endpoint=endpoint,\n    credential=AzureKeyCredential(key)\n    temperature=0.5,\n    max_tokens=1000\n)\n```\n\nDefault settings can be overridden in individual service calls.\n\n### Create and authenticate clients using `load_client`\n\nIf you are using Serverless API or Managed Compute endpoints, there is an alternative to creating a specific client directly. 
You can instead use the function `load_client` to return the relevant client (of types `ChatCompletionsClient` or `EmbeddingsClient`) based on the provided endpoint:\n\n```python\nfrom azure.ai.inference import load_client\nfrom azure.core.credentials import AzureKeyCredential\n\n# For Serverless API or Managed Compute endpoints only.\n# This will not work on GitHub Models endpoint or Azure OpenAI endpoint.\nclient = load_client(\n    endpoint=endpoint,\n    credential=AzureKeyCredential(key)\n)\n\nprint(f\"Created client of type `{type(client).__name__}`.\")\n```\n\nTo load an asynchronous client, import the `load_client` function from `azure.ai.inference.aio` instead.\n\nEntra ID authentication is also supported by the `load_client` function. Replace the key authentication above with `credential=DefaultAzureCredential(exclude_interactive_browser_credential=False)` for example.\n\n### Get AI model information\n\nIf you are using Serverless API or Managed Compute endpoints, you can call the client method `get_model_info` to retrive AI model information. This makes a REST call to the `/info` route on the provided endpoint, as documented in [the REST API reference](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-info). This call will not work for GitHub Models or Azure OpenAI endpoints.\n\n<!-- SNIPPET:sample_get_model_info.get_model_info -->\n\n```python\nmodel_info = client.get_model_info()\n\nprint(f\"Model name: {model_info.model_name}\")\nprint(f\"Model provider name: {model_info.model_provider_name}\")\nprint(f\"Model type: {model_info.model_type}\")\n```\n\n<!-- END SNIPPET -->\n\nAI model information is cached in the client, and futher calls to `get_model_info` will access the cached value and wil not result in a REST API call. Note that if you created the client using `load_client` function, model information will already be cached in the client.\n\nAI model information is displayed (if available) when you `print(client)`.\n\n### Chat Completions\n\nThe `ChatCompletionsClient` has a method named `complete`. The method makes a REST API call to the `/chat/completions` route on the provided endpoint, as documented in [the REST API reference](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-chat-completions).\n\nSee simple chat completion examples below. More can be found in the [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder.\n\n### Text Embeddings\n\nThe `EmbeddingsClient` has a method named `embedding`. The method makes a REST API call to the `/embeddings` route on the provided endpoint, as documented in [the REST API reference](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-embeddings).\n\nSee simple text embedding example below. 
More can be found in the [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder.\n\n<!--\n### Image Embeddings\n\nTODO: Add overview and link to explain image embeddings.\n\nEmbeddings operations target the URL route `images/embeddings` on the provided endpoint.\n-->\n\n## Examples\n\nIn the following sections you will find simple examples of:\n\n* [Chat completions](#chat-completions-example)\n* [Streaming chat completions](#streaming-chat-completions-example)\n* [Chat completions with additional model-specific parameters](#chat-completions-with-additional-model-specific-parameters)\n* [Text Embeddings](#text-embeddings-example)\n<!-- * [Image Embeddings](#image-embeddings-example) -->\n\nThe examples create a synchronous client assuming a Serverless API or Managed Compute endpoint. Modify client\nconstruction code as descirbed in [Key concepts](#key-concepts) to have it work with GitHub Models endpoint or Azure OpenAI\nendpoint. Only mandatory input settings are shown for simplicity.\n\nSee the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder for full working samples for synchronous and asynchronous clients.\n\n### Chat completions example\n\nThis example demonstrates how to generate a single chat completions, for a Serverless API or Managed Compute endpoint, with key authentication, assuming `endpoint` and `key` are already defined. For Entra ID authentication, GitHub models endpoint or Azure OpenAI endpoint, modify the code to create the client as specified in the above sections.\n\n<!-- SNIPPET:sample_chat_completions.chat_completions -->\n\n```python\nfrom azure.ai.inference import ChatCompletionsClient\nfrom azure.ai.inference.models import SystemMessage, UserMessage\nfrom azure.core.credentials import AzureKeyCredential\n\nclient = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(key))\n\nresponse = client.complete(\n    messages=[\n        SystemMessage(content=\"You are a helpful assistant.\"),\n        UserMessage(content=\"How many feet are in a mile?\"),\n    ]\n)\n\nprint(response.choices[0].message.content)\n```\n\n<!-- END SNIPPET -->\n\nThe following types or messages are supported: `SystemMessage`,`UserMessage`, `AssistantMessage`, `ToolMessage`. See also samples:\n\n* [sample_chat_completions_with_tools.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_tools.py) for usage of `ToolMessage`.\n* [sample_chat_completions_with_image_url.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_image_url.py) for usage of `UserMessage` that\nincludes sending an image URL.\n* [sample_chat_completions_with_image_data.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_image_data.py) for usage of `UserMessage` that\nincludes sending image data read from a local file.\n\nAlternatively, you can provide the messages as dictionary instead of using the strongly typed classes like `SystemMessage` and `UserMessage`:\n\n<!-- SNIPPET:sample_chat_completions_from_input_json.chat_completions -->\n\n```python\nresponse = client.complete(\n    {\n        \"messages\": [\n            {\n                \"role\": \"system\",\n                \"content\": \"You are an AI assistant that helps people find information. 
Your replies are short, no more than two sentences.\",\n            },\n            {\n                \"role\": \"user\",\n                \"content\": \"What year was construction of the International Space Station mostly done?\",\n            },\n            {\n                \"role\": \"assistant\",\n                \"content\": \"The main construction of the International Space Station (ISS) was completed between 1998 and 2011. During this period, more than 30 flights by US space shuttles and 40 by Russian rockets were conducted to transport components and modules to the station.\",\n            },\n            {\"role\": \"user\", \"content\": \"And what was the estimated cost to build it?\"},\n        ]\n    }\n)\n```\n\n<!-- END SNIPPET -->\n\nTo generate completions for additional messages, simply call `client.complete` multiple times using the same `client`.\n\n### Streaming chat completions example\n\nThis example demonstrates how to generate a single chat completions with streaming response, for a Serverless API or Managed Compute endpoint, with key authentication, assuming `endpoint` and `key` are already defined. You simply need to add `stream=True` to the `complete` call to enable streaming.\n\nFor Entra ID authentication, GitHub models endpoint or Azure OpenAI endpoint, modify the code to create the client as specified in the above sections.\n\n<!-- SNIPPET:sample_chat_completions_streaming.chat_completions_streaming -->\n\n```python\nfrom azure.ai.inference import ChatCompletionsClient\nfrom azure.ai.inference.models import SystemMessage, UserMessage\nfrom azure.core.credentials import AzureKeyCredential\n\nclient = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(key))\n\nresponse = client.complete(\n    stream=True,\n    messages=[\n        SystemMessage(content=\"You are a helpful assistant.\"),\n        UserMessage(content=\"Give me 5 good reasons why I should exercise every day.\"),\n    ],\n)\n\nfor update in response:\n    print(update.choices[0].delta.content or \"\", end=\"\", flush=True)\n\nclient.close()\n```\n\n<!-- END SNIPPET -->\n\nIn the above `for` loop that prints the results you should see the answer progressively get longer as updates get streamed to the client.\n\nTo generate completions for additional messages, simply call `client.complete` multiple times using the same `client`.\n\n### Chat completions with additional model-specific parameters\n\nIn this example, extra JSON elements are inserted at the root of the request body by setting `model_extras` when calling the `complete` method. These are intended for AI models that require additional model-specific parameters beyond what is defined in the REST API [Request Body table](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-chat-completions#request-body).\n\n<!-- SNIPPET:sample_chat_completions_with_model_extras.model_extras -->\n\n```python\nresponse = client.complete(\n    messages=[\n        SystemMessage(content=\"You are a helpful assistant.\"),\n        UserMessage(content=\"How many feet are in a mile?\"),\n    ],\n    model_extras={\"key1\": \"value1\", \"key2\": \"value2\"},  # Optional. 
Additional parameters to pass to the model.\n)\n```\n\n<!-- END SNIPPET -->\nIn the above example, this will be the JSON payload in the HTTP request:\n\n```json\n{\n    \"messages\":\n    [\n        {\"role\":\"system\",\"content\":\"You are a helpful assistant.\"},\n        {\"role\":\"user\",\"content\":\"How many feet are in a mile?\"}\n    ],\n    \"key1\": \"value1\",\n    \"key2\": \"value2\"\n}\n```\n\nNote that by default, the service will reject any request payload that includes extra parameters. In order to change the default service behaviour, when the `complete` method includes `model_extras`, the client library will automatically add the HTTP request header `\"extra-parameters\": \"pass-through\"`.\n\n### Text Embeddings example\n\nThis example demonstrates how to get text embeddings, for a Serverless API or Managed Compute endpoint, with key authentication, assuming `endpoint` and `key` are already defined. For Entra ID authentication, GitHub models endpoint or Azure OpenAI endpoint, modify the code to create the client as specified in the above sections.\n\n<!-- SNIPPET:sample_embeddings.embeddings -->\n\n```python\nfrom azure.ai.inference import EmbeddingsClient\nfrom azure.core.credentials import AzureKeyCredential\n\nclient = EmbeddingsClient(endpoint=endpoint, credential=AzureKeyCredential(key))\n\nresponse = client.embed(input=[\"first phrase\", \"second phrase\", \"third phrase\"])\n\nfor item in response.data:\n    length = len(item.embedding)\n    print(\n        f\"data[{item.index}]: length={length}, [{item.embedding[0]}, {item.embedding[1]}, \"\n        f\"..., {item.embedding[length-2]}, {item.embedding[length-1]}]\"\n    )\n```\n\n<!-- END SNIPPET -->\n\nThe length of the embedding vector depends on the model, but you should see something like this:\n\n```text\ndata[0]: length=1024, [0.0013399124, -0.01576233, ..., 0.007843018, 0.000238657]\ndata[1]: length=1024, [0.036590576, -0.0059547424, ..., 0.011405945, 0.004863739]\ndata[2]: length=1024, [0.04196167, 0.029083252, ..., -0.0027484894, 0.0073127747]\n```\n\nTo generate embeddings for additional phrases, simply call `client.embed` multiple times using the same `client`.\n\n<!--\n### Image Embeddings example\n\nThis example demonstrates how to get image embeddings.\n\n <! 
-- SNIPPET:sample_image_embeddings.image_embeddings -- >\n\n```python\nfrom azure.ai.inference import ImageEmbeddingsClient\nfrom azure.ai.inference.models import EmbeddingInput\nfrom azure.core.credentials import AzureKeyCredential\n\nwith open(\"sample1.png\", \"rb\") as f:\n    image1: str = base64.b64encode(f.read()).decode(\"utf-8\")\nwith open(\"sample2.png\", \"rb\") as f:\n    image2: str = base64.b64encode(f.read()).decode(\"utf-8\")\n\nclient = ImageEmbeddingsClient(endpoint=endpoint, credential=AzureKeyCredential(key))\n\nresponse = client.embed(input=[EmbeddingInput(image=image1), EmbeddingInput(image=image2)])\n\nfor item in response.data:\n    length = len(item.embedding)\n    print(\n        f\"data[{item.index}]: length={length}, [{item.embedding[0]}, {item.embedding[1]}, \"\n        f\"..., {item.embedding[length-2]}, {item.embedding[length-1]}]\"\n    )\n```\n\n-- END SNIPPET --\n\nThe printed result of course depends on the model, but you should see something like this:\n\n```txt\nTBD\n```\n\nTo generate embeddings for additional phrases, simply call `client.embed` multiple times using the same `client`.\n-->\n\n## Troubleshooting\n\n### Exceptions\n\nThe `complete`, `embed` and `get_model_info` methods on the clients raise an [HttpResponseError](https://learn.microsoft.com/python/api/azure-core/azure.core.exceptions.httpresponseerror) exception for a non-success HTTP status code response from the service. The exception's `status_code` will hold the HTTP response status code (with `reason` showing the friendly name). The exception's `error.message` contains a detailed message that may be helpful in diagnosing the issue:\n\n```python\nfrom azure.core.exceptions import HttpResponseError\n\n...\n\ntry:\n    result = client.complete( ... )\nexcept HttpResponseError as e:\n    print(f\"Status code: {e.status_code} ({e.reason})\")\n    print(e.message)\n```\n\nFor example, when you provide a wrong authentication key:\n\n```text\nStatus code: 401 (Unauthorized)\nOperation returned an invalid status 'Unauthorized'\n```\n\nOr when you create an `EmbeddingsClient` and call `embed` on the client, but the endpoint does not\nsupport the `/embeddings` route:\n\n```text\nStatus code: 405 (Method Not Allowed)\nOperation returned an invalid status 'Method Not Allowed'\n```\n\n### Logging\n\nThe client uses the standard [Python logging library](https://docs.python.org/3/library/logging.html). The SDK logs HTTP request and response details, which may be useful in troubleshooting. To log to stdout, add the following:\n\n```python\nimport sys\nimport logging\n\n# Acquire the logger for this client library. Use 'azure' to affect both\n# 'azure.core` and `azure.ai.inference' libraries.\nlogger = logging.getLogger(\"azure\")\n\n# Set the desired logging level. logging.INFO or logging.DEBUG are good options.\nlogger.setLevel(logging.DEBUG)\n\n# Direct logging output to stdout:\nhandler = logging.StreamHandler(stream=sys.stdout)\n# Or direct logging output to a file:\n# handler = logging.FileHandler(filename=\"sample.log\")\nlogger.addHandler(handler)\n\n# Optional: change the default logging format. Here we add a timestamp.\nformatter = logging.Formatter(\"%(asctime)s:%(levelname)s:%(name)s:%(message)s\")\nhandler.setFormatter(formatter)\n```\n\nBy default logs redact the values of URL query strings, the values of some HTTP request and response headers (including `Authorization` which holds the key or token), and the request and response payloads. 
To create logs without redaction, do these two things:\n\n1. Set the method argument `logging_enable = True` when you construct the client library, or when you call the client's `complete` or `embed`  methods.\n    ```python\n    client = ChatCompletionsClient(\n        endpoint=endpoint,\n        credential=AzureKeyCredential(key),\n        logging_enable=True\n    )\n    ```\n1. Set the log level to `logging.DEBUG`. Logs will be redacted with any other log level.\n\nBe sure to protect non redacted logs to avoid compromising security.\n\nFor more information, see [Configure logging in the Azure libraries for Python](https://aka.ms/azsdk/python/logging)\n\n### Reporting issues\n\nTo report issues with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues)\n\n## Observability With OpenTelemetry\n\nThe Azure AI Inference client library provides experimental support for tracing with OpenTelemetry.\n\nYou can capture prompt and completion contents by setting `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` environment to `true` (case insensitive).\nBy default prompts, completions, function name, parameters or outputs are not recorded.\n\n### Setup with Azure Monitor\n\nWhen using Azure AI Inference library with [Azure Monitor OpenTelemetry Distro](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python),\ndistributed tracing for Azure AI Inference calls is enabled by default when using latest version of the distro.\n\n### Setup with OpenTelemetry\n\nCheck out your observability vendor documentation on how to configure OpenTelemetry or refer to the [official OpenTelemetry documentation](https://opentelemetry.io/docs/languages/python/).\n\n#### Installation\n\nMake sure to install OpenTelemetry and the Azure SDK tracing plugin via\n\n```bash\npip install opentelemetry\npip install azure-core-tracing-opentelemetry\n```\n\nYou will also need an exporter to send telemetry to your observability backend. You can print traces to the console or use a local viewer such as [Aspire Dashboard](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash).\n\nTo connect to Aspire Dashboard or another OpenTelemetry compatible backend, install OTLP exporter:\n\n```bash\npip install opentelemetry-exporter-otlp\n```\n\n#### Configuration\n\nTo enable Azure SDK tracing set `AZURE_SDK_TRACING_IMPLEMENTATION` environment variable to `opentelemetry`.\n\nOr configure it in the code with the following snippet:\n\n<!-- SNIPPET:sample_chat_completions_with_tracing.trace_setting -->\n\n```python\nfrom azure.core.settings import settings\n\nsettings.tracing_implementation = \"opentelemetry\"\n```\n\n<!-- END SNIPPET -->\n\nPlease refer to [azure-core-tracing-documentation](https://learn.microsoft.com/python/api/overview/azure/core-tracing-opentelemetry-readme) for more information.\n\nThe final step is to enable Azure AI Inference instrumentation with the following code snippet:\n\n<!-- SNIPPET:sample_chat_completions_with_tracing.instrument_inferencing -->\n\n```python\nfrom azure.ai.inference.tracing import AIInferenceInstrumentor\n\n# Instrument AI Inference API\nAIInferenceInstrumentor().instrument()\n```\n\n<!-- END SNIPPET -->\n\n\nIt is also possible to uninstrument the Azure AI Inferencing API by using the uninstrument call. 
#### Configuration

To enable Azure SDK tracing, set the `AZURE_SDK_TRACING_IMPLEMENTATION` environment variable to `opentelemetry`.

Alternatively, configure it in code with the following snippet:

<!-- SNIPPET:sample_chat_completions_with_tracing.trace_setting -->

```python
from azure.core.settings import settings

settings.tracing_implementation = "opentelemetry"
```

<!-- END SNIPPET -->

Please refer to the [azure-core-tracing documentation](https://learn.microsoft.com/python/api/overview/azure/core-tracing-opentelemetry-readme) for more information.

The final step is to enable Azure AI Inference instrumentation with the following code snippet:

<!-- SNIPPET:sample_chat_completions_with_tracing.instrument_inferencing -->

```python
from azure.ai.inference.tracing import AIInferenceInstrumentor

# Instrument AI Inference API
AIInferenceInstrumentor().instrument()
```

<!-- END SNIPPET -->

It is also possible to uninstrument the Azure AI Inferencing API with the `uninstrument` call. After this call, traces will no longer be emitted by the Azure AI Inferencing API until `instrument` is called again.

<!-- SNIPPET:sample_chat_completions_with_tracing.uninstrument_inferencing -->

```python
AIInferenceInstrumentor().uninstrument()
```

<!-- END SNIPPET -->

### Tracing Your Own Functions

The `@tracer.start_as_current_span` decorator can be used to trace your own functions. This will trace the function parameters and their values. You can also add further attributes to the span in the function implementation, as demonstrated below. Note that you will have to set up the tracer in your code before using the decorator. More information is available [here](https://opentelemetry.io/docs/languages/python/).

<!-- SNIPPET:sample_chat_completions_with_tracing.trace_function -->

```python
from opentelemetry import trace
from opentelemetry.trace import get_tracer

tracer = get_tracer(__name__)


# The tracer.start_as_current_span decorator will trace the function call and enable adding additional attributes
# to the span in the function implementation. Note that this will trace the function parameters and their values.
@tracer.start_as_current_span("get_temperature")  # type: ignore
def get_temperature(city: str) -> str:

    # Adding attributes to the current span
    span = trace.get_current_span()
    span.set_attribute("requested_city", city)

    if city == "Seattle":
        return "75"
    elif city == "New York City":
        return "80"
    else:
        return "Unavailable"
```

<!-- END SNIPPET -->

## Next steps

* Have a look at the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder, containing fully runnable Python code for doing inference using synchronous and asynchronous clients.

## Contributing

This project welcomes contributions and suggestions. Most contributions require
you to agree to a Contributor License Agreement (CLA) declaring that you have
the right to, and actually do, grant us the rights to use your contribution.
For details, visit [https://cla.microsoft.com](https://cla.microsoft.com).

When you submit a pull request, a CLA-bot will automatically determine whether
you need to provide a CLA and decorate the PR appropriately (e.g., label,
comment). Simply follow the instructions provided by the bot. You will only
need to do this once across all repos using our CLA.

This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct). For more information,
see the Code of Conduct FAQ or contact opencode@microsoft.com with any
additional questions or comments.


<!-- Note: I did not use a LINKS section here with a list of `[link-label](link-url)` because these
links don't work in the Sphinx-generated documentation. The index.html page of these docs
includes this README, but with broken links. -->
    "bugtrack_url": null,
    "license": "MIT License",
    "summary": "Microsoft Azure AI Inference Client Library for Python",
    "version": "1.0.0b6",
    "project_urls": {
        "Homepage": "https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference"
    },
    "split_keywords": [
        "azure",
        " azure sdk"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "5aaa47459ab2e67c55ff98dbb9694c47cf98e484ce1ae1acb244d28b25a8c1c1",
                "md5": "f27e6638819ce3a09dece4c386785aec",
                "sha256": "5699ad78d70ec2d227a5eff2c1bafc845018f6624edc5b03589dfff861c54958"
            },
            "downloads": -1,
            "filename": "azure_ai_inference-1.0.0b6-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "f27e6638819ce3a09dece4c386785aec",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 115312,
            "upload_time": "2024-11-12T21:11:17",
            "upload_time_iso_8601": "2024-11-12T21:11:17.584919Z",
            "url": "https://files.pythonhosted.org/packages/5a/aa/47459ab2e67c55ff98dbb9694c47cf98e484ce1ae1acb244d28b25a8c1c1/azure_ai_inference-1.0.0b6-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "2ac9264ae0ef0460dbd7c7efe1d3a093ad6a00fb2823d341ac457459396df2d6",
                "md5": "df340e5665bd078d7c8896b907bf3d4b",
                "sha256": "b8ac941de1e69151bad464191e18856d4e74f962ae03235da137a9a326143676"
            },
            "downloads": -1,
            "filename": "azure_ai_inference-1.0.0b6.tar.gz",
            "has_sig": false,
            "md5_digest": "df340e5665bd078d7c8896b907bf3d4b",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 145414,
            "upload_time": "2024-11-12T21:11:15",
            "upload_time_iso_8601": "2024-11-12T21:11:15.336318Z",
            "url": "https://files.pythonhosted.org/packages/2a/c9/264ae0ef0460dbd7c7efe1d3a093ad6a00fb2823d341ac457459396df2d6/azure_ai_inference-1.0.0b6.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-11-12 21:11:15",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Azure",
    "github_project": "azure-sdk-for-python",
    "travis_ci": false,
    "coveralls": true,
    "github_actions": true,
    "lcname": "azure-ai-inference"
}
        