llama-index-llms-azure-openai

- Version: 0.2.2
- Summary: llama-index llms azure openai integration
- Requires Python: >=3.8.1, <4.0
- License: MIT
- Uploaded: 2024-10-08 22:22:11
# LlamaIndex LLMs Integration: Azure OpenAI

### Installation

```bash
pip install llama-index-llms-azure-openai
pip install llama-index
```

In a notebook, prefix these commands with `%pip` or `!pip`.

### Prerequisites

Follow this guide to set up your Azure account: [Set up an Azure account](https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai/#prerequisites)
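
The examples below authenticate with an API key. If your Azure resource uses Microsoft Entra ID (formerly Azure AD) instead, the integration can acquire tokens for you. A hedged sketch, assuming `azure-identity` is installed and that your installed release supports the `use_azure_ad` flag:

```py
# Hedged sketch: token-based auth instead of an API key.
# Assumes `pip install azure-identity` and a release of
# llama-index-llms-azure-openai that supports use_azure_ad.
from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    engine="my-deployment",  # hypothetical deployment name
    model="gpt-35-turbo-16k",
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",
    api_version="2023-07-01-preview",
    use_azure_ad=True,  # tokens acquired via azure-identity credentials
)
```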

### Set the environment variables

```py
import os

os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource-name>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
```

### Use your LLM

```py
from llama_index.llms.azure_openai import AzureOpenAI

# Unlike the regular OpenAI class, you must pass an engine argument in addition to model.
# The engine is the name of the model deployment you created in Azure OpenAI Studio.

llm = AzureOpenAI(
    engine="simon-llm", model="gpt-35-turbo-16k", temperature=0.0
)

# Alternatively, you can skip the environment variables and pass the parameters directly to the constructor.
llm = AzureOpenAI(
    engine="my-custom-llm",
    model="gpt-35-turbo-16k",
    temperature=0.0,
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2023-07-01-preview",
)

# Use the complete endpoint for text completion
response = llm.complete("The sky is a beautiful blue and")
print(response)

# Expected Output:
# the sun is shining brightly. Fluffy white clouds float lazily across the sky,
# creating a picturesque scene. The vibrant blue color of the sky brings a sense
# of calm and tranquility...
```
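
Every LlamaIndex LLM also ships async counterparts of the sync methods used in this README (`acomplete`, `achat`, `astream_complete`, `astream_chat`). A minimal sketch of the async completion calls, reusing the placeholder constructor arguments from above:

```py
import asyncio

from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    engine="my-custom-llm",  # your deployment name
    model="gpt-35-turbo-16k",
    temperature=0.0,
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2023-07-01-preview",
)


async def main() -> None:
    # Async text completion
    response = await llm.acomplete("The sky is a beautiful blue and")
    print(response)

    # Async streaming: astream_complete resolves to an async generator of deltas
    gen = await llm.astream_complete("The sky is a beautiful blue and")
    async for r in gen:
        print(r.delta, end="")


asyncio.run(main())
```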

### Streaming completion

```py
response = llm.stream_complete("The sky is a beautiful blue and")
for r in response:
    print(r.delta, end="")

# Expected Output (Stream):
# the sun is shining brightly. Fluffy white clouds float lazily across the sky,
# creating a picturesque scene. The vibrant blue color of the sky brings a sense
# of calm and tranquility...
```

### Chat

```py
# Use the chat endpoint for conversation
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="Hello"),
]

response = llm.chat(messages)
print(response)

# Expected Output:
# assistant: Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger,
# the most colorful pirate ye ever did lay eyes on! What brings ye to me ship?
```
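
The chat endpoint is stateless, so you build multi-turn conversations by carrying earlier messages forward yourself. A small sketch continuing from the example above (the follow-up question is hypothetical); it builds a new list instead of mutating `messages`, so the streaming example below still sees the original history:

```py
# response.message holds the assistant's reply as a ChatMessage.
followup_messages = [
    *messages,
    response.message,
    ChatMessage(role="user", content="What be yer ship's name?"),
]

followup = llm.chat(followup_messages)
print(followup)
```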

### Streaming chat

```py
response = llm.stream_chat(messages)
for r in response:
    print(r.delta, end="")

# Expected Output (Stream):
# Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger,
# the most colorful pirate ye ever did lay eyes on! What brings ye to me ship?
```

### Per-instance parameters

```py
# Rather than adding the same parameters to each chat or completion call,
# you can set them once per instance with additional_kwargs.
llm = AzureOpenAI(
    engine="simon-llm",
    model="gpt-35-turbo-16k",
    temperature=0.0,
    additional_kwargs={"user": "your_user_id"},
)
```
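
To make this instance the default across LlamaIndex (indexes, query engines, chat engines), register it on the global `Settings` object from `llama-index-core`. A minimal sketch; note that embeddings are configured separately:

```py
from llama_index.core import Settings

# All components built without an explicit llm argument now call Azure OpenAI.
# Embeddings have their own knob (Settings.embed_model) and are unaffected.
Settings.llm = llm
```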

### LLM Implementation example

For a complete walkthrough, see the Azure OpenAI example notebook: https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai/

            
