| Field | Value |
| --- | --- |
| Name | llama-index-llms-bedrock |
| Version | 0.3.1 |
| Summary | llama-index llms bedrock integration |
| Author | Your Name |
| Requires Python | <4.0,>=3.9 |
| License | MIT |
| Upload time | 2024-11-18 20:47:12 |
# LlamaIndex Llms Integration: Bedrock
### Installation

```bash
pip install llama-index-llms-bedrock
pip install llama-index
```
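
The examples below assume AWS credentials with access to Amazon Bedrock are already configured. The integration resolves credentials through boto3, so any standard source works (a shared-config profile, environment variables, or an IAM role). A minimal sketch using environment variables, with placeholder profile and region values:

```py
import os

# Placeholder values: point boto3 at a profile and region where Bedrock is enabled.
os.environ["AWS_PROFILE"] = "my-bedrock-profile"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
```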
### Basic Usage

```py
from llama_index.llms.bedrock import Bedrock

# Set your AWS profile name
profile_name = "your AWS profile name"

# Simple completion call
resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).complete("Paul Graham is ")
print(resp)

# Expected output:
# Paul Graham is a computer scientist and entrepreneur, best known for co-founding
# the Silicon Valley startup incubator Y Combinator. He is also a prominent writer
# and speaker on technology and business topics...
```
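
`complete()` returns a `CompletionResponse`; printing it shows the generated text, which can also be read directly from the `.text` attribute:

```py
# The completion string itself, without the response wrapper.
print(resp.text)
```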
### Call chat with a list of messages
```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).chat(messages)
print(resp)
# Expected output:
# assistant: Alright, matey! Here's a story for you: Once upon a time, there was a pirate
# named Captain Jack Sparrow who sailed the seas in search of his next adventure...
```
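
`chat()` returns a `ChatResponse`; the assistant reply is carried on its `.message` field if you need it without the `assistant:` prefix that `print(resp)` adds:

```py
# The assistant reply as a plain string.
print(resp.message.content)
```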
### Streaming
#### Using stream_complete endpoint
```py
from llama_index.llms.bedrock import Bedrock
llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
# Expected Output (Stream):
# Paul Graham is a computer programmer, entrepreneur, investor, and writer, best known
# for co-founding the internet firm Y Combinator...
```
### Streaming chat
```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
# Expected Output (Stream):
# Once upon a time, there was a pirate with a colorful personality who sailed the
# high seas in search of adventure...
```
### Configure Model
```py
from llama_index.llms.bedrock import Bedrock

# The `model` argument selects which Bedrock model ID is invoked
llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.complete("Paul Graham is ")
print(resp)
# Expected Output:
# Paul Graham is a computer scientist, entrepreneur, investor, and writer. He co-founded
# Viaweb, the first commercial web browser...
```
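
Generation settings can be tuned through constructor arguments as well. A minimal sketch, assuming the common LlamaIndex LLM parameters (`temperature`, `max_tokens`, `context_size`) are supported by the installed version of the integration:

```py
from llama_index.llms.bedrock import Bedrock

# Parameter names are assumptions based on the shared LlamaIndex LLM interface;
# check your installed llama-index-llms-bedrock release if any are rejected.
llm = Bedrock(
    model="amazon.titan-text-express-v1",
    profile_name=profile_name,  # same AWS profile as the earlier examples
    temperature=0.1,            # lower values make output more deterministic
    max_tokens=512,             # cap on tokens generated per call
    context_size=8192,          # assumed prompt window for this model
)
print(llm.complete("Paul Graham is "))
```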
### Connect to Bedrock with Access Keys
```py
from llama_index.llms.bedrock import Bedrock
llm = Bedrock(
    model="amazon.titan-text-express-v1",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)

resp = llm.complete("Paul Graham is ")
print(resp)
# Expected Output:
# Paul Graham is an American computer scientist, entrepreneur, investor, and author,
# best known for co-founding Viaweb, the first commercial web browser...
```
### LLM Implementation example
https://docs.llamaindex.ai/en/stable/examples/llm/bedrock/
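
As a rough sketch of how the LLM slots into the rest of LlamaIndex (the profile name and document text below are placeholders; `SummaryIndex` is used here because it needs no embedding model):

```py
from llama_index.core import Document, Settings, SummaryIndex
from llama_index.llms.bedrock import Bedrock

# Hypothetical profile name; reuse whatever credential setup you used above.
Settings.llm = Bedrock(
    model="amazon.titan-text-express-v1", profile_name="my-bedrock-profile"
)

# SummaryIndex only calls the LLM, so no embedding model is required.
index = SummaryIndex.from_documents(
    [Document(text="Paul Graham co-founded the startup accelerator Y Combinator.")]
)
print(index.as_query_engine().query("Who co-founded Y Combinator?"))
```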
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "llama-index-llms-bedrock",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.9",
"maintainer_email": null,
"keywords": null,
"author": "Your Name",
"author_email": "you@example.com",
"download_url": "https://files.pythonhosted.org/packages/ac/10/b658fc6ea6179cf799f12bdd6ae72420110aacd86172ae8532e13b10df8f/llama_index_llms_bedrock-0.3.1.tar.gz",
"platform": null,
"description": "# LlamaIndex Llms Integration: Bedrock\n\n### Installation\n\n```bash\n%pip install llama-index-llms-bedrock\n!pip install llama-index\n```\n\n### Basic Usage\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\n# Set your AWS profile name\nprofile_name = \"Your aws profile name\"\n\n# Simple completion call\nresp = Bedrock(\n model=\"amazon.titan-text-express-v1\", profile_name=profile_name\n).complete(\"Paul Graham is \")\nprint(resp)\n\n# Expected output:\n# Paul Graham is a computer scientist and entrepreneur, best known for co-founding\n# the Silicon Valley startup incubator Y Combinator. He is also a prominent writer\n# and speaker on technology and business topics...\n```\n\n### Call chat with a list of messages\n\n```py\nfrom llama_index.core.llms import ChatMessage\nfrom llama_index.llms.bedrock import Bedrock\n\nmessages = [\n ChatMessage(\n role=\"system\", content=\"You are a pirate with a colorful personality\"\n ),\n ChatMessage(role=\"user\", content=\"Tell me a story\"),\n]\n\nresp = Bedrock(\n model=\"amazon.titan-text-express-v1\", profile_name=profile_name\n).chat(messages)\nprint(resp)\n\n# Expected output:\n# assistant: Alright, matey! Here's a story for you: Once upon a time, there was a pirate\n# named Captain Jack Sparrow who sailed the seas in search of his next adventure...\n```\n\n### Streaming\n\n#### Using stream_complete endpoint\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\nllm = Bedrock(model=\"amazon.titan-text-express-v1\", profile_name=profile_name)\nresp = llm.stream_complete(\"Paul Graham is \")\nfor r in resp:\n print(r.delta, end=\"\")\n\n# Expected Output (Stream):\n# Paul Graham is a computer programmer, entrepreneur, investor, and writer, best known\n# for co-founding the internet firm Y Combinator...\n```\n\n### Streaming chat\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\nllm = Bedrock(model=\"amazon.titan-text-express-v1\", profile_name=profile_name)\nmessages = [\n ChatMessage(\n role=\"system\", content=\"You are a pirate with a colorful personality\"\n ),\n ChatMessage(role=\"user\", content=\"Tell me a story\"),\n]\nresp = llm.stream_chat(messages)\nfor r in resp:\n print(r.delta, end=\"\")\n\n# Expected Output (Stream):\n# Once upon a time, there was a pirate with a colorful personality who sailed the\n# high seas in search of adventure...\n```\n\n### Configure Model\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\nllm = Bedrock(model=\"amazon.titan-text-express-v1\", profile_name=profile_name)\nresp = llm.complete(\"Paul Graham is \")\nprint(resp)\n\n# Expected Output:\n# Paul Graham is a computer scientist, entrepreneur, investor, and writer. He co-founded\n# Viaweb, the first commercial web browser...\n```\n\n### Connect to Bedrock with Access Keys\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\nllm = Bedrock(\n model=\"amazon.titan-text-express-v1\",\n aws_access_key_id=\"AWS Access Key ID to use\",\n aws_secret_access_key=\"AWS Secret Access Key to use\",\n aws_session_token=\"AWS Session Token to use\",\n region_name=\"AWS Region to use, e.g. us-east-1\",\n)\n\nresp = llm.complete(\"Paul Graham is \")\nprint(resp)\n\n# Expected Output:\n# Paul Graham is an American computer scientist, entrepreneur, investor, and author,\n# best known for co-founding Viaweb, the first commercial web browser...\n```\n\n### LLM Implementation example\n\nhttps://docs.llamaindex.ai/en/stable/examples/llm/bedrock/\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "llama-index llms bedrock integration",
"version": "0.3.1",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "1e978794f0f523e1b2b6a88770ea01f4701e7398fd8a8b94ac0cbe0e3f30dc45",
"md5": "1cc44ff03fbf5924592f57362ccf8773",
"sha256": "613586b80594d47cef5633bea82fade84ead3ec59a92cc3d8a1ddabb28dd4349"
},
"downloads": -1,
"filename": "llama_index_llms_bedrock-0.3.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "1cc44ff03fbf5924592f57362ccf8773",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.9",
"size": 9699,
"upload_time": "2024-11-18T20:47:11",
"upload_time_iso_8601": "2024-11-18T20:47:11.366857Z",
"url": "https://files.pythonhosted.org/packages/1e/97/8794f0f523e1b2b6a88770ea01f4701e7398fd8a8b94ac0cbe0e3f30dc45/llama_index_llms_bedrock-0.3.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "ac10b658fc6ea6179cf799f12bdd6ae72420110aacd86172ae8532e13b10df8f",
"md5": "5b854ec044a63d59a1c4988f9498c904",
"sha256": "3f02da765305bf19272ebf0a66e95e4ea5e516782f8ca197c16eb2d3953b3a79"
},
"downloads": -1,
"filename": "llama_index_llms_bedrock-0.3.1.tar.gz",
"has_sig": false,
"md5_digest": "5b854ec044a63d59a1c4988f9498c904",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.9",
"size": 9177,
"upload_time": "2024-11-18T20:47:12",
"upload_time_iso_8601": "2024-11-18T20:47:12.976413Z",
"url": "https://files.pythonhosted.org/packages/ac/10/b658fc6ea6179cf799f12bdd6ae72420110aacd86172ae8532e13b10df8f/llama_index_llms_bedrock-0.3.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-11-18 20:47:12",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "llama-index-llms-bedrock"
}
```