| Field | Value |
| --- | --- |
| Name | llama-index-llms-bedrock |
| Version | 0.3.3 |
| Summary | llama-index llms bedrock integration |
| upload_time | 2024-12-17 22:42:11 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | Your Name |
| requires_python | <4.0,>=3.9 |
| license | MIT |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

# LlamaIndex Llms Integration: Bedrock
### Installation
```bash
%pip install llama-index-llms-bedrock
!pip install llama-index
```
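The `%pip` and `!pip` prefixes above are Jupyter notebook magics. In a plain shell, the equivalent commands (assuming a standard pip setup) are:

```bash
pip install llama-index-llms-bedrock
pip install llama-index
```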
### Basic Usage
```py
from llama_index.llms.bedrock import Bedrock
# Set your AWS profile name
profile_name = "Your aws profile name"
# Simple completion call
resp = Bedrock(
model="amazon.titan-text-express-v1", profile_name=profile_name
).complete("Paul Graham is ")
print(resp)
# Expected output:
# Paul Graham is a computer scientist and entrepreneur, best known for co-founding
# the Silicon Valley startup incubator Y Combinator. He is also a prominent writer
# and speaker on technology and business topics...
```
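`complete` returns a response object rather than a bare string; in the generic llama-index interface the generated text is exposed as a `text` attribute. A minimal sketch, assuming that interface and reusing `profile_name` from above:

```py
from llama_index.llms.bedrock import Bedrock

# Reuse one client instead of constructing it per call.
llm = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
)

resp = llm.complete("Paul Graham is ")
# The generated text lives on `.text` (assumed from the common
# llama-index CompletionResponse interface).
print(resp.text)
```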
### Call chat with a list of messages
```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock
messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality"
),
ChatMessage(role="user", content="Tell me a story"),
]
resp = Bedrock(
model="amazon.titan-text-express-v1", profile_name=profile_name
).chat(messages)
print(resp)
# Expected output:
# assistant: Alright, matey! Here's a story for you: Once upon a time, there was a pirate
# named Captain Jack Sparrow who sailed the seas in search of his next adventure...
```
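The value printed above is a chat response object; to get just the assistant's reply text, the standard llama-index chat interface exposes the underlying message. A sketch, reusing `messages` and `profile_name` from above and treating the attribute names as assumptions:

```py
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
)
resp = llm.chat(messages)

# `resp.message` is a ChatMessage; its `content` holds the reply text
# (assumed from the common llama-index ChatResponse interface).
print(resp.message.content)
```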
### Streaming
#### Using stream_complete endpoint
```py
from llama_index.llms.bedrock import Bedrock
llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
print(r.delta, end="")
# Expected Output (Stream):
# Paul Graham is a computer programmer, entrepreneur, investor, and writer, best known
# for co-founding the internet firm Y Combinator...
```
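If you also want the full completion once streaming finishes, one option is simply to accumulate the deltas as they arrive. A minimal sketch, reusing the `llm` from above:

```py
# Collect streamed chunks into a single string while printing them.
chunks = []
for r in llm.stream_complete("Paul Graham is "):
    # Guard against empty/None deltas, which some integrations emit.
    print(r.delta or "", end="")
    chunks.append(r.delta or "")

full_text = "".join(chunks)
```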
#### Using stream_chat endpoint
```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock
llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality"
),
ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
print(r.delta, end="")
# Expected Output (Stream):
# Once upon a time, there was a pirate with a colorful personality who sailed the
# high seas in search of adventure...
```
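Each item yielded by `stream_chat` carries the incremental `delta`; in the usual llama-index streaming convention the accumulated reply so far is also available on `.message`. A sketch that keeps the last yielded response to recover the full reply, treating those attribute names as assumptions for this integration:

```py
last = None
for r in llm.stream_chat(messages):
    print(r.delta, end="")
    last = r

# In the common llama-index convention, the final yielded response
# holds the fully accumulated assistant message (assumption).
if last is not None:
    full_reply = last.message.content
```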
### Configure Model
```py
from llama_index.llms.bedrock import Bedrock
llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.complete("Paul Graham is ")
print(resp)
# Expected Output:
# Paul Graham is a computer scientist, entrepreneur, investor, and writer. He co-founded
# Viaweb, the first commercial web browser...
```
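The block above only selects the model; the constructor also accepts generation settings. The exact keyword names can vary between versions, so treat `temperature`, `max_tokens`, and `context_size` below as assumptions to check against the installed `Bedrock` class signature:

```py
from llama_index.llms.bedrock import Bedrock

# A sketch of passing generation settings at construction time.
# `temperature`, `max_tokens`, and `context_size` are assumed keyword
# arguments; verify them against your installed version.
llm = Bedrock(
    model="amazon.titan-text-express-v1",
    profile_name=profile_name,
    temperature=0.2,
    max_tokens=256,
    context_size=8192,
)
resp = llm.complete("Paul Graham is ")
print(resp)
```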
### Connect to Bedrock with Access Keys
```py
from llama_index.llms.bedrock import Bedrock
llm = Bedrock(
model="amazon.titan-text-express-v1",
aws_access_key_id="AWS Access Key ID to use",
aws_secret_access_key="AWS Secret Access Key to use",
aws_session_token="AWS Session Token to use",
region_name="AWS Region to use, e.g. us-east-1",
)
resp = llm.complete("Paul Graham is ")
print(resp)
# Expected Output:
# Paul Graham is an American computer scientist, entrepreneur, investor, and author,
# best known for co-founding Viaweb, the first commercial web browser...
```
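If neither explicit keys nor a profile are supplied, the underlying AWS SDK typically falls back to its default credential chain (environment variables, the shared credentials file, or an instance/container role). A minimal sketch under that assumption, passing only the model and region:

```py
from llama_index.llms.bedrock import Bedrock

# Assumes AWS credentials with Bedrock access are already configured
# via the default credential chain (env vars, ~/.aws/credentials, or
# an attached IAM role).
llm = Bedrock(
    model="amazon.titan-text-express-v1",
    region_name="us-east-1",
)
print(llm.complete("Paul Graham is "))
```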
### LLM Implementation example
https://docs.llamaindex.ai/en/stable/examples/llm/bedrock/
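To use Bedrock as the default LLM across a LlamaIndex application, you can assign it to the global `Settings` object from `llama_index.core`. A sketch, reusing `profile_name` from above (embeddings are configured separately):

```py
from llama_index.core import Settings
from llama_index.llms.bedrock import Bedrock

# Components that read Settings.llm (query engines, chat engines,
# agents) will use this Bedrock instance by default.
Settings.llm = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
)
```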
### Raw data

```json
{
"_id": null,
"home_page": null,
"name": "llama-index-llms-bedrock",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.9",
"maintainer_email": null,
"keywords": null,
"author": "Your Name",
"author_email": "you@example.com",
"download_url": "https://files.pythonhosted.org/packages/fb/5e/5cb7a6ccf8362568d87b640a0477fe519c0c4b677d5a9406ab6a5c1820d1/llama_index_llms_bedrock-0.3.3.tar.gz",
"platform": null,
"description": "# LlamaIndex Llms Integration: Bedrock\n\n### Installation\n\n```bash\n%pip install llama-index-llms-bedrock\n!pip install llama-index\n```\n\n### Basic Usage\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\n# Set your AWS profile name\nprofile_name = \"Your aws profile name\"\n\n# Simple completion call\nresp = Bedrock(\n model=\"amazon.titan-text-express-v1\", profile_name=profile_name\n).complete(\"Paul Graham is \")\nprint(resp)\n\n# Expected output:\n# Paul Graham is a computer scientist and entrepreneur, best known for co-founding\n# the Silicon Valley startup incubator Y Combinator. He is also a prominent writer\n# and speaker on technology and business topics...\n```\n\n### Call chat with a list of messages\n\n```py\nfrom llama_index.core.llms import ChatMessage\nfrom llama_index.llms.bedrock import Bedrock\n\nmessages = [\n ChatMessage(\n role=\"system\", content=\"You are a pirate with a colorful personality\"\n ),\n ChatMessage(role=\"user\", content=\"Tell me a story\"),\n]\n\nresp = Bedrock(\n model=\"amazon.titan-text-express-v1\", profile_name=profile_name\n).chat(messages)\nprint(resp)\n\n# Expected output:\n# assistant: Alright, matey! Here's a story for you: Once upon a time, there was a pirate\n# named Captain Jack Sparrow who sailed the seas in search of his next adventure...\n```\n\n### Streaming\n\n#### Using stream_complete endpoint\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\nllm = Bedrock(model=\"amazon.titan-text-express-v1\", profile_name=profile_name)\nresp = llm.stream_complete(\"Paul Graham is \")\nfor r in resp:\n print(r.delta, end=\"\")\n\n# Expected Output (Stream):\n# Paul Graham is a computer programmer, entrepreneur, investor, and writer, best known\n# for co-founding the internet firm Y Combinator...\n```\n\n### Streaming chat\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\nllm = Bedrock(model=\"amazon.titan-text-express-v1\", profile_name=profile_name)\nmessages = [\n ChatMessage(\n role=\"system\", content=\"You are a pirate with a colorful personality\"\n ),\n ChatMessage(role=\"user\", content=\"Tell me a story\"),\n]\nresp = llm.stream_chat(messages)\nfor r in resp:\n print(r.delta, end=\"\")\n\n# Expected Output (Stream):\n# Once upon a time, there was a pirate with a colorful personality who sailed the\n# high seas in search of adventure...\n```\n\n### Configure Model\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\nllm = Bedrock(model=\"amazon.titan-text-express-v1\", profile_name=profile_name)\nresp = llm.complete(\"Paul Graham is \")\nprint(resp)\n\n# Expected Output:\n# Paul Graham is a computer scientist, entrepreneur, investor, and writer. He co-founded\n# Viaweb, the first commercial web browser...\n```\n\n### Connect to Bedrock with Access Keys\n\n```py\nfrom llama_index.llms.bedrock import Bedrock\n\nllm = Bedrock(\n model=\"amazon.titan-text-express-v1\",\n aws_access_key_id=\"AWS Access Key ID to use\",\n aws_secret_access_key=\"AWS Secret Access Key to use\",\n aws_session_token=\"AWS Session Token to use\",\n region_name=\"AWS Region to use, e.g. us-east-1\",\n)\n\nresp = llm.complete(\"Paul Graham is \")\nprint(resp)\n\n# Expected Output:\n# Paul Graham is an American computer scientist, entrepreneur, investor, and author,\n# best known for co-founding Viaweb, the first commercial web browser...\n```\n\n### LLM Implementation example\n\nhttps://docs.llamaindex.ai/en/stable/examples/llm/bedrock/\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "llama-index llms bedrock integration",
"version": "0.3.3",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "8063c7e1fe67fa2dc4a249c05bc56f0781251f18b91bdfb2ed0a484a219259e1",
"md5": "d8b24f3813e51d4280302b2fafabca1f",
"sha256": "ba3657c1ace61eb19474802dbf052a30b8cd0f184823d69ed3276f687193ddd4"
},
"downloads": -1,
"filename": "llama_index_llms_bedrock-0.3.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d8b24f3813e51d4280302b2fafabca1f",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.9",
"size": 10055,
"upload_time": "2024-12-17T22:42:09",
"upload_time_iso_8601": "2024-12-17T22:42:09.137091Z",
"url": "https://files.pythonhosted.org/packages/80/63/c7e1fe67fa2dc4a249c05bc56f0781251f18b91bdfb2ed0a484a219259e1/llama_index_llms_bedrock-0.3.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "fb5e5cb7a6ccf8362568d87b640a0477fe519c0c4b677d5a9406ab6a5c1820d1",
"md5": "4b8dd4cea01ad961a360ba9b7f269878",
"sha256": "b11c29eff88760522b6e9013459a5a4cc57a5dd5d66e57f855d6096b2fb0acd3"
},
"downloads": -1,
"filename": "llama_index_llms_bedrock-0.3.3.tar.gz",
"has_sig": false,
"md5_digest": "4b8dd4cea01ad961a360ba9b7f269878",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.9",
"size": 9498,
"upload_time": "2024-12-17T22:42:11",
"upload_time_iso_8601": "2024-12-17T22:42:11.312868Z",
"url": "https://files.pythonhosted.org/packages/fb/5e/5cb7a6ccf8362568d87b640a0477fe519c0c4b677d5a9406ab6a5c1820d1/llama_index_llms_bedrock-0.3.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-12-17 22:42:11",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "llama-index-llms-bedrock"
}
```