llama-index-llms-ai21

Name: llama-index-llms-ai21
Version: 0.3.4
Summary: llama-index llms ai21 integration
Upload time: 2024-09-13 19:55:52
Author: Your Name
Requires Python: <4.0,>=3.8.1
License: MIT
# LlamaIndex LLMs Integration: AI21 Labs

## Installation

First, install the package using pip:

```bash
pip install llama-index-llms-ai21
```

## Usage

Below are basic examples of using the `AI21` class to generate text completions and handle chat interactions.

### Initializing the AI21 Client

You need to initialize the AI21 client with the appropriate model and API key.

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
```

### Chat Completions

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
```

### Chat Streaming

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]

for chunk in llm.stream_chat(messages):
    print(chunk.message.content)
```

### Text Completion

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
```

### Stream Text Completion

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

response = llm.stream_complete(prompt="What is the meaning of life?")

for chunk in response:
    print(chunk.text)
```
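
Each chunk in a streaming response carries both the newly generated text and the cumulative text so far. Here is a minimal sketch of accumulating incremental deltas into the full response, using a hypothetical `fake_stream` generator as a stand-in for a live model:

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    delta: str  # newly generated text in this chunk
    text: str   # cumulative text so far


def fake_stream(parts):
    """Stand-in for a streaming LLM call: yields cumulative chunks."""
    so_far = ""
    for part in parts:
        so_far += part
        yield Chunk(delta=part, text=so_far)


# Accumulate the deltas into the full completion.
full = ""
for chunk in fake_stream(["The meaning ", "of life ", "is 42."]):
    full += chunk.delta

print(full)  # The meaning of life is 42.
```

Printing `chunk.delta` as it arrives gives a live "typing" effect, while `chunk.text` always holds the complete text generated so far.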

## Other Models Support

You can also use other models, such as the Jurassic-2 family (`j2-ultra`, `j2-mid`, and `j2-chat`).

These models support only the `chat` and `complete` methods.

### Chat

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="j2-chat", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
```

### Complete

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="j2-ultra", api_key=api_key)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
```

## Tokenizer

The tokenizer type is determined by the model name:

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
tokenizer = llm.tokenizer

tokens = tokenizer.encode("What is the meaning of life?")
print(tokens)

text = tokenizer.decode(tokens)
print(text)
```

## Async Support

You can also use the async methods:

### async chat

```python
import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-1.5-mini", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]
    response = await llm.achat(messages)
    print(response.message.content)


asyncio.run(main())
```

### async stream_chat

```python
import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-1.5-mini", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]
    response = await llm.astream_chat(messages)

    async for chunk in response:
        print(chunk.message.content)


asyncio.run(main())
```
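
`astream_chat` returns an async generator, so the chunks must be consumed with `async for` inside a running event loop. A minimal self-contained sketch of this consumption pattern, using a hypothetical `fake_astream` generator in place of a live model:

```python
import asyncio


async def fake_astream(parts):
    """Stand-in for llm.astream_chat: yields chunks asynchronously."""
    for part in parts:
        yield part


async def main():
    # Collect the streamed chunks and join them into the full reply.
    collected = []
    async for chunk in fake_astream(["Hello", ", ", "world"]):
        collected.append(chunk)
    return "".join(collected)


result = asyncio.run(main())
print(result)  # Hello, world
```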

## Tool Calling

```python
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.llms.ai21 import AI21
from llama_index.core.tools import FunctionTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


def subtract(a: int, b: int) -> int:
    """Subtract b from a and return the result."""
    return a - b


def divide(a: int, b: int) -> float:
    """Divide a by b and return the result as a float."""
    return a / b


def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
subtract_tool = FunctionTool.from_defaults(fn=subtract)
divide_tool = FunctionTool.from_defaults(fn=divide)

api_key = "your_api_key"

llm = AI21(model="jamba-1.5-mini", api_key=api_key)

agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool, subtract_tool, divide_tool],
    llm=llm,
    verbose=True,
    allow_parallel_tool_calls=True,
)
agent = agent_worker.as_agent()

response = agent.chat(
    "My friend Moses had 10 apples. He ate 5 apples in the morning. Then he found a box with 25 apples. "
    "He divided all his apples between his 5 friends. How many apples did each friend get?"
)
```
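
As a sanity check, the answer the agent should reach can be computed directly with the tool functions (with `divide` performing true division): (10 - 5 + 25) / 5 = 6.

```python
def subtract(a: int, b: int) -> int:
    return a - b


def add(a: int, b: int) -> int:
    return a + b


def divide(a: int, b: int) -> float:
    return a / b


# Moses: 10 apples, eats 5, finds 25 more, splits among 5 friends.
remaining = subtract(10, 5)    # 5
total = add(remaining, 25)     # 30
per_friend = divide(total, 5)  # 6.0
print(per_friend)              # 6.0
```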

            
