| Field | Value |
| --- | --- |
| Name | llama-index-llms-ai21 |
| Version | 0.4.0 |
| Summary | llama-index llms ai21 integration |
| Upload time | 2024-11-18 00:21:43 |
| Author | Your Name |
| Maintainer | None |
| Home page | None |
| Docs URL | None |
| Requires Python | <4.0,>=3.9 |
| License | MIT |
| Keywords | None |
| Requirements | No requirements were recorded. |
# LlamaIndex LLMs Integration: AI21 Labs
## Installation
Install the package with pip:
```bash
pip install llama-index-llms-ai21
```
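You will also need an AI21 API key. The examples below pass it explicitly via `api_key`; as an assumption to verify against your installed version, the underlying AI21 SDK can also read it from an environment variable, conventionally `AI21_API_KEY`:

```bash
# Assumption: AI21_API_KEY is the variable the AI21 SDK reads by default.
export AI21_API_KEY="your_api_key"
```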
## Usage
The examples below show how to use the `AI21` class for chat interactions and text completions.

### Initializing the AI21 Client

Initialize the client with the model name and your API key:
```python
from llama_index.llms.ai21 import AI21
api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
```
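Beyond `model` and `api_key`, the constructor accepts the usual LlamaIndex generation settings. The exact parameter set depends on the installed version, so treat the following as a sketch and check the `AI21` signature before relying on it:

```python
# Hedged sketch: temperature and max_tokens are common LlamaIndex LLM
# constructor parameters; confirm they exist on your version of AI21.
llm = AI21(
    model="jamba-1.5-mini",
    api_key=api_key,
    temperature=0.3,  # lower values give more deterministic output
    max_tokens=256,  # cap on tokens generated per response
)
```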
### Chat Completions
```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage
api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
```
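`chat` takes a full message list, so you can prepend a system prompt or earlier turns using the same `ChatMessage` type. A minimal sketch:

```python
messages = [
    ChatMessage(role="system", content="You are a concise assistant."),
    ChatMessage(role="user", content="What is the meaning of life?"),
]
response = llm.chat(messages)
print(response.message.content)
```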
### Chat Streaming
```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage
api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
messages = [ChatMessage(role="user", content="What is the meaning of life?")]
for chunk in llm.stream_chat(messages):
    print(chunk.message.content)
```
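Each streamed chunk is a `ChatResponse` in which `chunk.message.content` holds the text accumulated so far, while `chunk.delta` holds only the newly generated piece. To print the reply incrementally without repetition:

```python
for chunk in llm.stream_chat(messages):
    # delta is just the new text in this chunk, not the running total
    print(chunk.delta, end="", flush=True)
```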
### Text Completion
```python
from llama_index.llms.ai21 import AI21
api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
```
### Stream Text Completion
```python
from llama_index.llms.ai21 import AI21
api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
response = llm.stream_complete(prompt="What is the meaning of life?")
for chunk in response:
    print(chunk.text)
```
## Other Supported Models
Other AI21 model families are also supported, for example `j2-chat`, `j2-ultra`, and `j2-mid`.
These models support only the `chat` and `complete` methods.
### Chat
```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage
api_key = "your_api_key"
llm = AI21(model="j2-chat", api_key=api_key)
messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
```
### Complete
```python
from llama_index.llms.ai21 import AI21
api_key = "your_api_key"
llm = AI21(model="j2-ultra", api_key=api_key)
response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
```
## Tokenizer
The tokenizer type is determined by the model name:
```python
from llama_index.llms.ai21 import AI21
api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
tokenizer = llm.tokenizer
tokens = tokenizer.encode("What is the meaning of life?")
print(tokens)
text = tokenizer.decode(tokens)
print(text)
```
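This makes it straightforward to budget prompts before sending them. A small illustrative helper (the `count_tokens` name is ours, not part of the package):

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)


def count_tokens(text: str) -> int:
    # Encode with the model-matched tokenizer and count the token IDs.
    return len(llm.tokenizer.encode(text))


print(count_tokens("What is the meaning of life?"))
```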
## Async Support
You can also use the async counterparts of the chat methods:
### async chat
```python
import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-1.5-mini", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]
    response = await llm.achat(messages)
    print(response.message.content)


asyncio.run(main())
```
### async stream_chat
```python
import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-1.5-mini", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]
    response = await llm.astream_chat(messages)

    async for chunk in response:
        print(chunk.message.content)


asyncio.run(main())
```
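Because these are ordinary coroutines, several requests can run concurrently with standard asyncio tooling. A minimal sketch (the `ask` helper is illustrative, not part of the package):

```python
import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def ask(llm: AI21, question: str) -> str:
    # One chat round trip; returns only the reply text.
    response = await llm.achat([ChatMessage(role="user", content=question)])
    return response.message.content


async def main():
    llm = AI21(model="jamba-1.5-mini", api_key="your_api_key")
    answers = await asyncio.gather(
        ask(llm, "What is the meaning of life?"),
        ask(llm, "Summarize asyncio in one sentence."),
    )
    for answer in answers:
        print(answer)


asyncio.run(main())
```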
## Tool Calling
```python
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.llms.ai21 import AI21
from llama_index.core.tools import FunctionTool
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


def subtract(a: int, b: int) -> int:
    """Subtract two integers and return the result."""
    return a - b


def divide(a: int, b: int) -> float:
    """Divide two integers and return the result."""
    return a / b


def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
subtract_tool = FunctionTool.from_defaults(fn=subtract)
divide_tool = FunctionTool.from_defaults(fn=divide)

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool, subtract_tool, divide_tool],
    llm=llm,
    verbose=True,
    allow_parallel_tool_calls=True,
)
agent = agent_worker.as_agent()

response = agent.chat(
    "My friend Moses had 10 apples. He ate 5 apples in the morning. "
    "Then he found a box with 25 apples. He divided all his apples "
    "between his 5 friends. How many apples did each friend get?"
)
print(response)
```
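For reference, the expected arithmetic is (10 - 5 + 25) / 5 = 6 apples per friend, which the agent should reach by chaining the `subtract`, `add`, and `divide` tools.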