| Field | Value |
| --- | --- |
| Name | bedrock-llm |
| Version | 0.1.6 |
| Home page | https://github.com/Phicks-debug/bedrock_llm |
| Summary | A Python LLM framework for interacting with AWS Bedrock services, built on top of boto3. This library serves as a comprehensive tool for fast prototyping, building POCs, and deploying production-ready LLM applications with robust infrastructure support. |
| Upload time | 2024-11-16 17:41:34 |
| Author | Tran Quy An |
| Requires Python | >=3.9 |
| Keywords | aws, bedrock, llm, machine-learning, ai |
# Bedrock LLM
A Python library for building LLM applications on Amazon Bedrock with the boto3 library. It aims to provide best practices and production-ready solutions for a range of LLM models, including Anthropic, Llama, Amazon Titan, MistralAI, and AI21.
The library is structured into two main components:
1. `bedrock_be`: Infrastructure and services for deploying LLM applications.
2. `bedrock_llm`: LLM orchestration and interaction logic.
This structure allows for seamless integration of LLM capabilities with robust deployment and infrastructure management.
![Conceptual Architecture](/assests/bedrock_llm.png)
## Features
- Support for multiple LLM models through Amazon Bedrock
- Efficient LLM orchestration with `bedrock_llm`
- Infrastructure and deployment services with `bedrock_be`
- Enhanced Agent-based interactions with:
  - Robust tool validation and execution
  - Comprehensive error handling and logging
  - Configurable memory management
  - Type-safe responses with `AgentResponse`
  - Support for multiple LLM tool-calling conventions (Claude, Llama, Mistral, etc.)
- Asynchronous and synchronous function support
- Performance monitoring and logging functionality
- Support for Retrieval-Augmented Generation (RAG)
- Multi-Agent systems (in progress)
- Workflows, nodes, and event-based systems (coming soon)
- Image generation, speech-to-text (STT), and text-to-speech (TTS) support (coming soon)
## Installation
You can install the Bedrock LLM library using pip:
```bash
pip install bedrock-llm
```
This library requires Python 3.9 or later.
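To confirm the install, try importing the package (assuming it exposes a `__version__` attribute, which is common but not guaranteed here; a bare import still verifies the install):

```bash
python -c "import bedrock_llm; print(bedrock_llm.__version__)"
```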
## AWS Credentials Setup
Before using the library, make sure you have your AWS credentials properly configured:
1. Create or update your AWS credentials file at `~/.aws/credentials`:

   ```ini
   [bedrock]
   aws_access_key_id = YOUR_ACCESS_KEY
   aws_secret_access_key = YOUR_SECRET_KEY
   ```

2. Create or update your AWS config file at `~/.aws/config`:

   ```ini
   [profile bedrock]
   region = us-east-1
   ```

3. When initializing the client, specify the profile name:

   ```python
   from bedrock_llm import LLMClient, ModelName, ModelConfig

   # Create an LLM client with a specific AWS profile
   client = LLMClient(
       region_name="us-east-1",
       model_name=ModelName.MISTRAL_7B,
       profile_name="bedrock"  # Specify your AWS profile name
   )
   ```

You can verify your credentials by running:

```bash
aws bedrock list-foundation-models --profile bedrock
```
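Alternatively, the same check can be done from Python with plain boto3 (standard boto3 calls only; `bedrock` is the profile configured above):

```python
import boto3

# List the foundation models visible to the "bedrock" profile; this is
# the boto3 equivalent of the CLI check above.
session = boto3.Session(profile_name="bedrock", region_name="us-east-1")
bedrock = session.client("bedrock")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])
```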
## Usage
Here's a quick example of how to use the Bedrock LLM library:
### Simple text generation
```python
from termcolor import cprint

from bedrock_llm import LLMClient, ModelName, ModelConfig

# Create an LLM client
client = LLMClient(
    region_name="us-east-1",
    model_name=ModelName.MISTRAL_7B
)

# Create a configuration for inference parameters
config = ModelConfig(
    temperature=0.1,
    top_p=0.9,
    max_tokens=512
)

# Create a prompt
prompt = "Who are you?"

# Invoke the model and get the results
response, stop_reason = client.generate(config, prompt)

# Print out the results
cprint(response.content, "green")
cprint(stop_reason, "red")
```
### Simple tool calling
```python
from bedrock_llm import Agent, ModelName
from bedrock_llm.schema.tools import ToolMetadata, InputSchema, PropertyAttr

agent = Agent(
    region_name="us-east-1",
    model_name=ModelName.CLAUDE_3_5_HAIKU
)

# Define the tool description for the model
get_weather_tool = ToolMetadata(
    name="get_weather",
    description="Get the weather in a specific location",
    input_schema=InputSchema(
        type="object",
        properties={
            "location": PropertyAttr(
                type="string",
                description="Location to search for, example: New York, Washington DC, ..."
            )
        },
        required=["location"]
    )
)

# Define the tool
@Agent.tool(get_weather_tool)
async def get_weather(location: str):
    return f"{location} is 20°C"


async def main():
    prompt = input("User: ")

    async for token, stop_reason, response, tool_result in agent.generate_and_action_async(
        prompt=prompt,
        tools=["get_weather"]
    ):
        if token:
            print(token, end="", flush=True)
        if stop_reason:
            print(f"\n{stop_reason}")


if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
### Agent Features
The Agent class in `bedrock_llm` provides powerful capabilities for building LLM-powered applications:
#### Tool Management
```python
from bedrock_llm import Agent, ToolMetadata
from typing import Dict

# Define a tool with metadata
@Agent.tool(
    metadata=ToolMetadata(
        name="search",
        description="Search for information",
        input_schema={
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"}
            },
            "required": ["query"]
        }
    )
)
async def search(query: str) -> Dict:
    # Tool implementation goes here
    pass
```
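The feature list also mentions synchronous function support. Assuming `@Agent.tool` accepts plain callables as well, a synchronous tool would be registered the same way (a hypothetical sketch; `get_time` is not part of the library):

```python
from datetime import datetime, timezone

from bedrock_llm import Agent, ToolMetadata

# Hypothetical sketch: a synchronous tool registered with the same
# decorator, assuming @Agent.tool also accepts non-async callables.
@Agent.tool(
    metadata=ToolMetadata(
        name="get_time",
        description="Get the current UTC time",
        input_schema={"type": "object", "properties": {}, "required": []}
    )
)
def get_time() -> str:
    return datetime.now(timezone.utc).isoformat()
```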
#### Error Handling
The library provides comprehensive error handling with custom exceptions:
```python
try:
    result = await agent.generate_and_action_async(
        prompt="Search for Python tutorials",
        tools=["search"]
    )
except ToolExecutionError as e:
    print(f"Tool '{e.tool_name}' failed: {e.message}")
    if e.original_error:
        print(f"Original error: {e.original_error}")
```
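The snippet reads three attributes of `ToolExecutionError`. The library's actual definition is not shown in this README, but from that usage the exception carries roughly this shape (an inferred sketch, not the library's source):

```python
from typing import Optional

class ToolExecutionError(Exception):
    """Inferred sketch: carries the attributes the example above reads
    (tool_name, message, original_error). The real class lives inside
    bedrock_llm and may differ."""

    def __init__(self, tool_name: str, message: str,
                 original_error: Optional[Exception] = None):
        super().__init__(message)
        self.tool_name = tool_name
        self.message = message
        self.original_error = original_error
```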
#### Memory Management
Configure memory limits to manage conversation history:
```python
agent = Agent(
    region_name="us-west-2",
    model_name=ModelName.ANTHROPIC_CLAUDE_V2,
    memory_limit=100  # Keep the last 100 messages
)
```
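The README does not specify how the limit is enforced internally; a plausible reading of `memory_limit=100`, assuming a simple FIFO window over the message history, is sketched below (illustrative only, not bedrock_llm internals):

```python
from collections import deque

# Illustrative only: a FIFO window that keeps just the most recent
# `memory_limit` messages, matching "keep the last 100 messages".
memory_limit = 100
history = deque(maxlen=memory_limit)

for i in range(250):
    history.append({"role": "user", "content": f"message {i}"})

assert len(history) == memory_limit            # older messages dropped
assert history[0]["content"] == "message 150"  # oldest surviving message
```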
#### Type-Safe Responses
The library now provides type-safe responses using TypedDict:
```python
async for response in agent.generate_and_action_async(...):
    token: Optional[str] = response["token"]
    stop_reason: Optional[StopReason] = response["stop_reason"]
    message: Optional[MessageBlock] = response["message"]
    tool_results: Optional[List] = response["tool_results"]
```
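Based on the keys read above and the `AgentResponse` type named in the feature list, the response type presumably looks something like this (inferred from usage, not the library's source; the stub classes only keep the sketch self-contained):

```python
from typing import List, Optional, TypedDict

# Stubs standing in for bedrock_llm's own types.
class StopReason: ...
class MessageBlock: ...

# Inferred sketch of AgentResponse, based on the keys the loop above reads.
class AgentResponse(TypedDict):
    token: Optional[str]
    stop_reason: Optional[StopReason]
    message: Optional[MessageBlock]
    tool_results: Optional[List]
```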
#### Tool States
Support for different LLM tool-calling conventions:
- Claude/Llama style: uses `ToolUseBlock` for tool execution
- Mistral/Jamba style: uses `ToolCallBlock` for function calling
## Monitoring and Logging
Use the `monitor` decorators for performance monitoring:
```python
from bedrock_llm.monitor import Monitor
@Monitor.monitor_async
async def my_async_function():
    # Your async function code here
    ...

@Monitor.monitor_sync
def my_sync_function():
    # Your sync function code here
    ...
```
Use the `log` decorators for logging function calls:
```python
from bedrock_llm.monitor import Logging
@Logging.log_async
async def my_async_function():
    # Your async function code here
    ...

@Logging.log_sync
def my_sync_function():
    # Your sync function code here
    ...
```
These decorators are optimized for minimal performance impact on your application.
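Decorated functions are invoked like any others. A minimal, self-contained sketch (the workload is hypothetical, and the exact metrics emitted are defined by the library, not shown here):

```python
import asyncio

from bedrock_llm.monitor import Monitor

@Monitor.monitor_async
async def fetch_answer() -> str:
    # Hypothetical stand-in for a model call.
    await asyncio.sleep(0.1)
    return "done"

# The decorator reports on the call; its output format is up to the library.
print(asyncio.run(fetch_answer()))
```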
## Architecture
For a detailed overview of the library's architecture, please see [ARCHITECTURE.md](ARCHITECTURE.md).
## Examples
For more detailed usage instructions and API documentation, please refer to our [documentation](https://github.com/Phicks-debug/bedrock_llm/wiki).
You can also browse examples of how to use the library and build LLM flows:
- [basic](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/1_basic.py)
- [stream response](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/2_stream_response.py)
- [all supported LLMs](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/3_all_llm.py)
- [simple chat bot](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/4_chatbot.py)
- [tool calling](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/5_tool_calling.py)
- [agent](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/7_agent.py)
More examples are on the way; we are working on them. :)
## Requirements
```text
python>=3.9
pydantic>=2.0.0
boto3>=1.18.0
botocore>=1.21.0
jinja2>=3.1.2
psutil>=5.9.0
pytz>=2023.3
termcolor>=2.3.0
databases[postgresql]>=0.7.0
sqlalchemy>=2.0.0
asyncpg>=0.27.0  # PostgreSQL async driver
types-redis>=4.6.0
types-pytz
rx==3.2.0
```
## Contributing
We welcome contributions! Please see our [contributing guidelines](CONTRIBUTING.md) for more details.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.