llama-index-llms-bedrock-converse

- Version: 0.3.4
- Summary: llama-index llms bedrock converse integration
- License: MIT
- Requires Python: <4.0,>=3.8.1
- Uploaded: 2024-10-25 19:42:55
# LlamaIndex Llms Integration: Bedrock Converse

### Installation

```bash
pip install llama-index-llms-bedrock-converse
pip install llama-index
```

### Usage

```py
from llama_index.llms.bedrock_converse import BedrockConverse

# Set your AWS profile name
profile_name = "Your AWS profile name"

# Simple completion call
resp = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
).complete("Paul Graham is ")
print(resp)
```
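
`complete` returns a LlamaIndex `CompletionResponse`; a small follow-up sketch (hedged on the standard LlamaIndex response interface) that reuses one configured instance instead of constructing the model inline:

```py
# Reusing one configured instance avoids rebuilding the client per call.
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)
resp = llm.complete("Paul Graham is ")
print(resp.text)  # the generated string itself; print(resp) uses __str__
```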

### Call chat with a list of messages

```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
).chat(messages)
print(resp)
```
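
For chat calls the return value is a `ChatResponse`; assuming the standard LlamaIndex interface, the assistant reply rides on its `.message` attribute:

```py
# The reply is a ChatMessage on the response object.
print(resp.message.role)     # MessageRole.ASSISTANT
print(resp.message.content)  # the story text itself
```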

### Streaming

```py
# Both streaming endpoints can share one configured instance
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)

# Using the stream_complete endpoint
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Using the stream_chat endpoint
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```
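
Each streamed item carries the new chunk in `.delta`; per the standard LlamaIndex streaming interface (a hedged note, not specific to Bedrock Converse), `.text` accumulates everything generated so far:

```py
# The last yielded item should carry the full completion in .text.
chunks = []
for r in llm.stream_complete("Paul Graham is "):
    chunks.append(r.delta)
assert "".join(chunks) == r.text  # .delta is incremental, .text cumulative
```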

### Configure Model

```py
from llama_index.llms.bedrock_converse import BedrockConverse

# temperature and max_tokens are standard LlamaIndex LLM constructor
# parameters; the values here are only illustrative.
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    temperature=0.5,
    max_tokens=512,
    profile_name=profile_name,
)
resp = llm.complete("Paul Graham is ")
print(resp)
```

### Connect to Bedrock with Access Keys

```py
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)

resp = llm.complete("Paul Graham is ")
print(resp)
```
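
Explicit keys are optional. The client is built on boto3, so, assuming the default boto3 credential chain, the same connection can be configured through the standard environment variables (placeholder values below):

```py
import os

from llama_index.llms.bedrock_converse import BedrockConverse

# Placeholder values; boto3 picks these standard variables up automatically.
os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key-id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-access-key"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"

# No key arguments needed; credentials resolve from the environment.
llm = BedrockConverse(model="anthropic.claude-3-haiku-20240307-v1:0")
resp = llm.complete("Paul Graham is ")
print(resp)
```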

### Function Calling

```py
# Claude, Command, and Mistral Large models support native function calling
# through the AWS Bedrock Converse API. LlamaIndex tools plug in directly via
# the LLM's predict_and_call method.

from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.core.tools import FunctionTool


# Define some functions
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result"""
    return a * b


def mystery(a: int, b: int) -> int:
    """Mystery function on two integers."""
    return a * b + a + b


# Create tools from functions
mystery_tool = FunctionTool.from_defaults(fn=mystery)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

# Instantiate the BedrockConverse model
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)

# Use function tools with the LLM
response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg="What happens if I run the mystery function on 5 and 7",
)
print(str(response))

response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg=(
        """What happens if I run the mystery function on the following pairs of numbers?
        Generate a separate result for each row:
        - 1 and 2
        - 8 and 4
        - 100 and 20

        NOTE: you need to run the mystery function for all of the pairs above at the same time"""
    ),
    allow_parallel_tool_calls=True,
)
print(str(response))

for s in response.sources:
    print(f"Name: {s.tool_name}, Input: {s.raw_input}, Output: {str(s)}")
```
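
As a quick sanity check on what the tool calls should return, the tool functions can be exercised directly in plain Python, no LLM involved:

```py
# Expected tool outputs, computed straight from the functions above.
assert multiply(5, 7) == 35
assert mystery(5, 7) == 47  # 5 * 7 + 5 + 7
assert [mystery(a, b) for a, b in [(1, 2), (8, 4), (100, 20)]] == [5, 44, 2120]
```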

### Async usage

```py
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)

# Use async complete (await requires an async context, e.g. a notebook cell)
resp = await llm.acomplete("Paul Graham is ")
print(resp)
```
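
The async side mirrors the sync endpoints. A minimal streaming sketch, assuming the standard LlamaIndex async interface where `astream_complete` returns an async generator:

```py
# Async streaming: await the call, then iterate with `async for`.
resp = await llm.astream_complete("Paul Graham is ")
async for r in resp:
    print(r.delta, end="")
```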

### LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/bedrock_converse/