| Field | Value |
| --- | --- |
| Name | llama-index-llms-azure-openai |
| Version | 0.3.0 |
| Summary | llama-index llms azure openai integration |
| Author | Your Name |
| Maintainer | None |
| Home page | None |
| Upload time | 2024-11-18 01:06:37 |
| Requires Python | <4.0,>=3.9 |
| License | MIT |
| Requirements | No requirements were recorded. |

# LlamaIndex LLMs Integration: Azure OpenAI
### Installation
```bash
%pip install llama-index-llms-azure-openai
!pip install llama-index
```

The `%pip`/`!pip` prefixes are for Jupyter notebooks; in a plain shell, use `pip install` without them.
### Prerequisites
Follow this guide to set up your Azure account: [Setup Azure account](https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai/#prerequisites)
### Set the environment variables
```py
import os

os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource-name>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"

# Use your LLM
from llama_index.llms.azure_openai import AzureOpenAI

# Unlike the regular OpenAI integration, you must pass an engine argument in
# addition to model: the engine is the name of the model deployment you
# selected in Azure OpenAI Studio.
llm = AzureOpenAI(
engine="simon-llm", model="gpt-35-turbo-16k", temperature=0.0
)
# Alternatively, skip the environment variables and pass the parameters
# directly to the constructor.
llm = AzureOpenAI(
engine="my-custom-llm",
model="gpt-35-turbo-16k",
temperature=0.0,
azure_endpoint="https://<your-resource-name>.openai.azure.com/",
api_key="<your-api-key>",
api_version="2023-07-01-preview",
)
# Use the complete endpoint for text completion
response = llm.complete("The sky is a beautiful blue and")
print(response)
# Expected Output:
# the sun is shining brightly. Fluffy white clouds float lazily across the sky,
# creating a picturesque scene. The vibrant blue color of the sky brings a sense
# of calm and tranquility...
```
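### Set a global default (optional)

Rather than passing the LLM to every component, you can register it as the process-wide default. A minimal sketch, assuming a recent `llama-index-core` where the `Settings` singleton is the configuration entry point and `llm` is the instance constructed above:

```py
from llama_index.core import Settings

# Every LlamaIndex component (query engines, chat engines, agents, ...)
# that is not given an explicit LLM will now fall back to this one.
Settings.llm = llm
```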
### Streaming completion
```py
response = llm.stream_complete("The sky is a beautiful blue and")
for r in response:
print(r.delta, end="")
# Expected Output (Stream):
# the sun is shining brightly. Fluffy white clouds float lazily across the sky,
# creating a picturesque scene. The vibrant blue color of the sky brings a sense
# of calm and tranquility...
```

### Chat

Use the chat endpoint for conversation:

```py
from llama_index.core.llms import ChatMessage
messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality."
),
ChatMessage(role="user", content="Hello"),
]
response = llm.chat(messages)
print(response)
# Expected Output:
# assistant: Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger,
# the most colorful pirate ye ever did lay eyes on! What brings ye to me ship?
```
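### Async usage

The standard LlamaIndex LLM interface also exposes async counterparts of these endpoints. A minimal sketch, assuming the `llm` and `messages` defined above:

```py
import asyncio


async def main():
    # Async text completion
    print(await llm.acomplete("The sky is a beautiful blue and"))
    # Async chat
    print(await llm.achat(messages))


asyncio.run(main())
```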
### Streaming chat
```py
response = llm.stream_chat(messages)
for r in response:
print(r.delta, end="")
# Expected Output (Stream):
# Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger,
# the most colorful pirate ye ever did lay eyes on! What brings ye to me ship?
```

### Per-instance parameters

Rather than adding the same parameters to each chat or completion call, you can set them at a per-instance level with `additional_kwargs`:

```py
llm = AzureOpenAI(
engine="simon-llm",
model="gpt-35-turbo-16k",
temperature=0.0,
additional_kwargs={"user": "your_user_id"},
)
```
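### Authenticating with Microsoft Entra ID

If your Azure OpenAI resource uses Microsoft Entra ID (formerly Azure AD) instead of API keys, the constructor can take a token provider. A hedged sketch, assuming the `azure-identity` package is installed and that your installed version of this integration exposes the `use_azure_ad` and `azure_ad_token_provider` parameters:

```py
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

from llama_index.llms.azure_openai import AzureOpenAI

# Exchange the ambient Azure credential (CLI login, managed identity, ...)
# for bearer tokens scoped to the Cognitive Services resource.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

llm = AzureOpenAI(
    engine="my-custom-llm",  # your deployment name
    model="gpt-35-turbo-16k",
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",
    api_version="2023-07-01-preview",
    use_azure_ad=True,
    azure_ad_token_provider=token_provider,
)
```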
### LLM implementation example

See the full worked example in the LlamaIndex documentation: https://docs.llamaindex.ai/en/stable/examples/llm/azure_openai/
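To see how the pieces fit together, here is a hedged end-to-end sketch wiring the LLM into one of LlamaIndex's higher-level abstractions (`SimpleChatEngine` ships with `llama-index-core`; `llm` is the instance from the examples above):

```py
from llama_index.core.chat_engine import SimpleChatEngine

# A chat engine keeps the conversation history between calls.
chat_engine = SimpleChatEngine.from_defaults(llm=llm)

print(chat_engine.chat("Hello, who are you?"))
print(chat_engine.chat("What did I just ask you?"))  # uses the stored history
```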