llama-index-llms-bedrock-converse


Name: llama-index-llms-bedrock-converse
Version: 0.9.1
Summary: llama-index llms bedrock converse integration
Upload time: 2025-09-07 03:54:52
Requires Python: <4.0,>=3.9
# LlamaIndex Llms Integration: Bedrock Converse

### Installation

```bash
pip install llama-index-llms-bedrock-converse
pip install llama-index
```

### Usage

```py
from llama_index.llms.bedrock_converse import BedrockConverse

# Set your AWS profile name
profile_name = "Your aws profile name"

# Simple completion call
resp = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
).complete("Paul Graham is ")
print(resp)
```

### Call chat with a list of messages

```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
).chat(messages)
print(resp)
```

### Streaming

```py
# Using the stream_complete endpoint
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Using the stream_chat endpoint with the same llm instance
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```

### Configure Model

```py
from llama_index.llms.bedrock_converse import BedrockConverse

# Generation parameters such as temperature and max_tokens
# can be set on the constructor
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
    temperature=0.5,
    max_tokens=512,
)
resp = llm.complete("Paul Graham is ")
print(resp)
```

### Connect to Bedrock with Access Keys

```py
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)

resp = llm.complete("Paul Graham is ")
print(resp)
```
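
If you omit explicit credentials, the underlying boto3 session typically falls back to the standard AWS credential chain (environment variables, shared `~/.aws` config, or an instance/role profile). A minimal sketch under that assumption:

```py
from llama_index.llms.bedrock_converse import BedrockConverse

# Assumes credentials are resolved by the default AWS credential chain
# (e.g. AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables
# or a shared config profile); only the region is set explicitly.
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    region_name="us-east-1",
)

resp = llm.complete("Paul Graham is ")
print(resp)
```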

### Use an Application Inference Profile

AWS Bedrock supports Application Inference Profiles, which act as a provisioned proxy to Bedrock LLMs.

Since these profile ARNs are account-specific, BedrockConverse handles them through a dedicated argument rather than the `model` field.

When an application inference profile is created as an AWS resource, it references an existing Bedrock foundation model or a cross-region inference profile. The referenced model must be provided to the BedrockConverse initializer as the `model` argument, and the ARN of the application inference profile must be provided as the `application_inference_profile_arn` argument.

**Important:** BedrockConverse does not validate that the `model` argument in fact matches the underlying model referenced by the application inference profile provided. The caller is responsible for making sure they match. Behavior when they do not match is undefined.

```py
# Assumes the existence of a provisioned application inference profile
# that references a foundation model or cross-region inference profile.

from llama_index.llms.bedrock_converse import BedrockConverse


# Instantiate the BedrockConverse model
# with the model and application inference profile
# Make sure the model is the one that the
# application inference profile refers to in AWS
llm = BedrockConverse(
    model="us.anthropic.claude-3-5-sonnet-20240620-v1:0",  # this is the referenced model/profile
    application_inference_profile_arn="arn:aws:bedrock:us-east-1:012345678901:application-inference-profile/fake-profile-name",
)
```
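
Once constructed, the client is used like any other BedrockConverse instance, for example:

```py
resp = llm.complete("Paul Graham is ")
print(resp)
```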

### Function Calling

Claude, Command, and Mistral Large models support native function calling through AWS Bedrock Converse. LlamaIndex tools integrate seamlessly through the `predict_and_call` function on the LLM.

```py
from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.core.tools import FunctionTool


# Define some functions
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result"""
    return a * b


def mystery(a: int, b: int) -> int:
    """Mystery function on two integers."""
    return a * b + a + b


# Create tools from functions
mystery_tool = FunctionTool.from_defaults(fn=mystery)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

# Instantiate the BedrockConverse model
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)

# Use function tools with the LLM
response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg="What happens if I run the mystery function on 5 and 7",
)
print(str(response))

response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg=(
        """What happens if I run the mystery function on the following pairs of numbers?
        Generate a separate result for each row:
        - 1 and 2
        - 8 and 4
        - 100 and 20

        NOTE: you need to run the mystery function for all of the pairs above at the same time"""
    ),
    allow_parallel_tool_calls=True,
)
print(str(response))

for s in response.sources:
    print(f"Name: {s.tool_name}, Input: {s.raw_input}, Output: {str(s)}")
```

### Async usage

```py
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)

# Use async complete
resp = await llm.acomplete("Paul Graham is ")
print(resp)
```
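
The other endpoints have async counterparts as well. A short sketch, assuming the standard llama-index async LLM interface (`achat`, `astream_complete`):

```py
from llama_index.core.llms import ChatMessage

# Async chat
messages = [ChatMessage(role="user", content="Tell me a story")]
resp = await llm.achat(messages)
print(resp)

# Async streaming completion
resp = await llm.astream_complete("Paul Graham is ")
async for r in resp:
    print(r.delta, end="")
```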

### LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/bedrock_converse/

            
