| Field | Value |
| --- | --- |
| Name | llama-index-llms-deepinfra |
| Version | 0.3.0 |
| Summary | llama-index llms deepinfra integration |
| Author | Oguz Vuruskaner |
| License | MIT |
| Requires Python | <4.0,>=3.9 |
| Upload time | 2024-11-18 01:28:03 |
| Requirements | None recorded |
# LlamaIndex Llms Integration: DeepInfra
## Installation
First, install the necessary package:
```bash
pip install llama-index-llms-deepinfra
```
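If you'd rather keep the key out of your source code, you can set it in the environment before constructing the client. The `DEEPINFRA_API_TOKEN` name below is an assumption based on DeepInfra's usual convention; if your installed version of the integration expects a different variable, pass `api_key` explicitly as shown in the next section.

```python
import os

# Assumed variable name -- verify against the integration's docs
# for your installed version before relying on it.
os.environ["DEEPINFRA_API_TOKEN"] = "your-deepinfra-api-key"
```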
## Initialization
Set up the `DeepInfraLLM` class with your API key and desired parameters:
```python
from llama_index.llms.deepinfra import DeepInfraLLM
import asyncio
llm = DeepInfraLLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",  # Default model name
    api_key="your-deepinfra-api-key",  # Replace with your DeepInfra API key
    temperature=0.5,
    max_tokens=50,
    additional_kwargs={"top_p": 0.9},
)
```
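The `model` argument accepts any model id from the DeepInfra catalog. As a sketch, the id below is illustrative; confirm its availability in the catalog before using it:

```python
# Illustrative model id -- check the DeepInfra catalog for availability.
llama3 = DeepInfraLLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    api_key="your-deepinfra-api-key",
    temperature=0.2,
)
```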
## Synchronous Complete
Generate a text completion synchronously using the `complete` method:
```python
response = llm.complete("Hello World!")
print(response.text)
```
## Synchronous Stream Complete
Generate a streaming text completion synchronously using the `stream_complete` method:
```python
content = ""
for completion in llm.stream_complete("Once upon a time"):
    content += completion.delta
    print(completion.delta, end="")
```
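Since `stream_complete` yields from an ordinary generator, you can also stop consuming it early once you have enough text. A minimal sketch (the 200-character cutoff is arbitrary):

```python
content = ""
for completion in llm.stream_complete("Once upon a time"):
    content += completion.delta
    if len(content) > 200:  # arbitrary cutoff; stop reading the stream early
        break
```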
## Synchronous Chat
Generate a chat response synchronously using the `chat` method:
```python
from llama_index.core.base.llms.types import ChatMessage
messages = [
    ChatMessage(role="user", content="Tell me a joke."),
]
chat_response = llm.chat(messages)
print(chat_response.message.content)
```
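`chat` is stateless, so multi-turn conversations are built by appending each reply to the message list yourself. A minimal sketch:

```python
history = [ChatMessage(role="user", content="Tell me a joke.")]
first = llm.chat(history)

# Feed the assistant's reply back in, then ask a follow-up question.
history.append(ChatMessage(role="assistant", content=first.message.content))
history.append(ChatMessage(role="user", content="Now explain why it's funny."))
followup = llm.chat(history)
print(followup.message.content)
```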
## Synchronous Stream Chat
Generate a streaming chat response synchronously using the `stream_chat` method:
```python
messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Tell me a story."),
]
content = ""
for chat_response in llm.stream_chat(messages):
    # Each streamed ChatResponse carries the newest chunk in `delta`.
    content += chat_response.delta
    print(chat_response.delta, end="")
```
## Asynchronous Complete
Generate a text completion asynchronously using the `acomplete` method:
```python
async def async_complete():
    response = await llm.acomplete("Hello Async World!")
    print(response.text)


asyncio.run(async_complete())
```
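Note that `asyncio.run` starts a fresh event loop, so it raises a `RuntimeError` in environments that already run one (such as Jupyter notebooks); there, use `await async_complete()` directly instead.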
## Asynchronous Stream Complete
Generate a streaming text completion asynchronously using the `astream_complete` method:
```python
async def async_stream_complete():
    content = ""
    response = await llm.astream_complete("Once upon an async time")
    async for completion in response:
        content += completion.delta
        print(completion.delta, end="")


asyncio.run(async_stream_complete())
```
## Asynchronous Chat
Generate a chat response asynchronously using the `achat` method:
```python
async def async_chat():
    messages = [
        ChatMessage(role="user", content="Tell me an async joke."),
    ]
    chat_response = await llm.achat(messages)
    print(chat_response.message.content)


asyncio.run(async_chat())
```
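Because these methods are coroutines, independent requests can run concurrently with standard `asyncio` tooling. A minimal sketch using `asyncio.gather` (the prompts are illustrative):

```python
async def concurrent_chats():
    # Fire both requests at once and wait for both replies.
    joke, haiku = await asyncio.gather(
        llm.achat([ChatMessage(role="user", content="Tell me a joke.")]),
        llm.achat([ChatMessage(role="user", content="Write a haiku.")]),
    )
    print(joke.message.content)
    print(haiku.message.content)


asyncio.run(concurrent_chats())
```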
## Asynchronous Stream Chat
Generate a streaming chat response asynchronously using the `astream_chat` method:
```python
async def async_stream_chat():
    messages = [
        ChatMessage(role="system", content="You are a helpful assistant."),
        ChatMessage(role="user", content="Tell me an async story."),
    ]
    content = ""
    response = await llm.astream_chat(messages)
    async for chat_response in response:
        # As in the synchronous case, the newest chunk lives on `delta`.
        content += chat_response.delta
        print(chat_response.delta, end="")


asyncio.run(async_stream_chat())
```
---
For any questions or feedback, please contact us at [feedback@deepinfra.com](mailto:feedback@deepinfra.com).