Name | bedrock-llm
Version | 0.1.95b0
home_page | https://github.com/Phicks-debug/bedrock_llm
Summary | A Python LLM framework for interacting with AWS Bedrock services, built on top of boto3. This library serves as a comprehensive tool for fast prototyping, building POCs, and deploying production-ready LLM applications with robust infrastructure support.
upload_time | 2024-11-23 05:47:49
maintainer | None
docs_url | None
author | Tran Quy An
requires_python | >=3.9
license | None
keywords | aws, bedrock, llm, machine-learning, ai
bugtrack_url | None
requirements | No requirements were recorded.
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
# Bedrock LLM
A Python library for building LLM applications using the Amazon Bedrock provider and the boto3 library. It aims to provide best practices and production-ready solutions for a variety of LLM models, including Anthropic, Llama, Amazon Titan, MistralAI, and AI21.
The library is structured into two main components:
1. `bedrock_be`: Infrastructure and services for deploying LLM applications.
2. `bedrock_llm`: LLM orchestration and interaction logic.
This structure allows for seamless integration of LLM capabilities with robust deployment and infrastructure management.
![Conceptual Architecture](/assests/bedrock_llm.png)
## Features
- Support for multiple LLM models through Amazon Bedrock
- Efficient LLM orchestration with `bedrock_llm`
- Infrastructure and deployment services with `bedrock_be`
- Enhanced Agent-based interactions with:
  - Robust tool validation and execution
  - Comprehensive error handling and logging
  - Configurable memory management
  - Type-safe responses with `AgentResponse`
  - Support for multiple LLM tool-calling conventions (Claude, Llama, Mistral, etc.)
- Asynchronous and synchronous function support
- Performance monitoring and logging functionality
- Support for Retrieval-Augmented Generation (RAG)
- Optimized Pipeline System:
  - Modular node-based architecture
  - Batch processing with configurable parameters
  - In-memory caching with size management
  - Parallel processing with thread pools
  - Type-safe node connections
  - Event-driven data flow
  - Filter nodes for data validation
- Multi-Agent systems (in progress)
- Image generation, speech-to-text (STT), and text-to-speech (TTS) support (coming soon)
## Installation
You can install the Bedrock LLM library using pip:
```bash
pip install bedrock-llm
```
This library requires Python 3.9 or later.
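To confirm the install went through, you can try importing the package (this assumes it exposes a `__version__` attribute, which is a common but not guaranteed convention; a plain `import bedrock_llm` is otherwise enough):

```bash
python -c "import bedrock_llm; print(bedrock_llm.__version__)"
```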
## AWS Credentials Setup
Before using the library, make sure you have your AWS credentials properly configured:
1. Create or update your AWS credentials file at `~/.aws/credentials`:

   ```ini
   [bedrock]
   aws_access_key_id = YOUR_ACCESS_KEY
   aws_secret_access_key = YOUR_SECRET_KEY
   ```

2. Create or update your AWS config file at `~/.aws/config`:

   ```ini
   [profile bedrock]
   region = us-east-1
   ```

3. When initializing the client, specify the profile name:

   ```python
   from bedrock_llm import LLMClient, ModelName, ModelConfig

   # Create an LLM client with a specific AWS profile
   client = LLMClient(
       region_name="us-east-1",
       model_name=ModelName.MISTRAL_7B,
       profile_name="bedrock"  # Specify your AWS profile name
   )
   ```
You can verify your credentials by running:
```bash
aws bedrock list-foundation-models --profile bedrock
```
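Profile and region resolution follow standard boto3 behavior. As a rough sketch of the equivalent raw boto3 setup (illustration only, not the library's actual internals):

```python
import boto3

# Build a session from the named profile, then create Bedrock clients.
session = boto3.Session(profile_name="bedrock", region_name="us-east-1")

bedrock = session.client("bedrock")          # control plane: list/inspect models
runtime = session.client("bedrock-runtime")  # data plane: invoke models

# The same check as the CLI command above, done from Python.
for model in bedrock.list_foundation_models()["modelSummaries"][:5]:
    print(model["modelId"])
```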
## Quick Start
### Simple text generation
```python
from termcolor import cprint

from bedrock_llm import LLMClient, ModelName, ModelConfig

# Create an LLM client
client = LLMClient(
    region_name="us-east-1",
    model_name=ModelName.MISTRAL_7B
)

# Create a configuration for inference parameters
config = ModelConfig(
    temperature=0.1,
    top_p=0.9,
    max_tokens=512
)

# Create a prompt
prompt = "Who are you?"

# Invoke the model and get the results
response, stop_reason = client.generate(config, prompt)

# Print out the results
cprint(response.content, "green")
cprint(stop_reason, "red")
```
### Simple tool calling
```python
from bedrock_llm import Agent, ModelName
from bedrock_llm.schema.tools import ToolMetadata, InputSchema, PropertyAttr
agent = Agent(
    region_name="us-east-1",
    model_name=ModelName.CLAUDE_3_5_HAIKU
)

# Define the tool description for the model
get_weather_tool = ToolMetadata(
    name="get_weather",
    description="Get the weather in a specific location",
    input_schema=InputSchema(
        type="object",
        properties={
            "location": PropertyAttr(
                type="string",
                description="Location to search for, example: New York, Washington DC, ..."
            )
        },
        required=["location"]
    )
)

# Define the tool implementation
@Agent.tool(get_weather_tool)
async def get_weather(location: str):
    return f"{location} is 20°C"


async def main():
    prompt = input("User: ")

    async for token, stop_reason, response, tool_result in agent.generate_and_action_async(
        prompt=prompt,
        tools=["get_weather"]
    ):
        if token:
            print(token, end="", flush=True)
        if stop_reason:
            print(f"\n{stop_reason}")


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```
### Pipeline Usage
```python
from bedrock_llm.pipeline import Pipeline, BatchNode, CachedNode
# Create a pipeline for efficient text processing
pipeline = Pipeline("text-processor")

# Add optimized nodes; embed_batch_func and process_func are
# application-supplied callables (see the sketch after this block)
batch_node = BatchNode(
    "batch-embeddings",
    embed_batch_func,
    batch_size=32
)

cache_node = CachedNode(
    "cached-process",
    process_func,
    cache_size=1000
)

# Connect nodes
pipeline.add_node(batch_node)
pipeline.add_node(cache_node)
batch_node.connect(cache_node)

# Process data (run inside an async function)
result = await pipeline.execute(input_data)
```
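For completeness, here is a minimal sketch of the two callables the pipeline above assumes; the names and signatures are illustrative stand-ins, not part of the library:

```python
from typing import Any, List

def embed_batch_func(texts: List[str]) -> List[List[float]]:
    # In a real pipeline this would call an embedding model
    # (e.g. via Bedrock); dummy vectors keep the sketch runnable.
    return [[float(len(t))] for t in texts]

def process_func(item: Any) -> Any:
    # Any per-item transformation, e.g. cleanup or enrichment.
    return item
```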
### Agent Features
The Agent class in `bedrock_llm` provides powerful capabilities for building LLM-powered applications:
#### Tool Management
```python
from bedrock_llm import Agent, ToolMetadata
from typing import Dict

# Define a tool with metadata
@Agent.tool(
    metadata=ToolMetadata(
        name="search",
        description="Search for information",
        input_schema={
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"}
            },
            "required": ["query"]
        }
    )
)
async def search(query: str) -> Dict:
    # Tool implementation
    pass
```
#### Error Handling
The library provides comprehensive error handling with custom exceptions:
```python
try:
    result = await agent.generate_and_action_async(
        prompt="Search for Python tutorials",
        tools=["search"]
    )
except ToolExecutionError as e:  # one of the library's custom exceptions
    print(f"Tool '{e.tool_name}' failed: {e.message}")
    if e.original_error:
        print(f"Original error: {e.original_error}")
```
#### Memory Management
Configure memory limits to manage conversation history:
```python
agent = Agent(
    region_name="us-west-2",
    model_name=ModelName.ANTHROPIC_CLAUDE_V2,
    memory_limit=100  # Keep last 100 messages
)
```
#### Type-Safe Responses
The library now provides type-safe responses using TypedDict:
```python
async for response in agent.generate_and_action_async(...):
    token: Optional[str] = response["token"]
    stop_reason: Optional[StopReason] = response["stop_reason"]
    message: Optional[MessageBlock] = response["message"]
    tool_results: Optional[List] = response["tool_results"]
```
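The exact definition lives in the library; as a rough sketch of the shape implied by the loop above (field names come from the example, while `StopReason` and `MessageBlock` are the library's own types, left abstract here because their import paths are not shown in this README):

```python
from typing import List, Optional, TypedDict

class AgentResponse(TypedDict):
    token: Optional[str]                 # incremental text chunk while streaming
    stop_reason: Optional["StopReason"]  # set once generation finishes
    message: Optional["MessageBlock"]    # the assembled assistant message
    tool_results: Optional[List]         # outputs from any executed tools
```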
#### Tool States
Support for different LLM tool-calling conventions:
- Claude/Llama style: uses `ToolUseBlock` for tool execution
- Mistral/Jamba style: uses `ToolCallBlock` for function calling
## Monitoring and Logging
Use the `monitor` decorators for performance monitoring:
```python
from bedrock_llm.monitor import Monitor
@Monitor.monitor_async
async def my_async_function():
    # Your async function code here
    ...

@Monitor.monitor_sync
def my_sync_function():
    # Your sync function code here
    ...
```
Use the `log` decorators for logging function calls:
```python
from bedrock_llm.monitor import Logging
@Logging.log_async
async def my_async_function():
    # Your async function code here
    ...

@Logging.log_sync
def my_sync_function():
    # Your sync function code here
    ...
```
These decorators are optimized for minimal performance impact on your application.
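Both decorator families compose, so a single function can be monitored and logged at once. A small sketch using the names introduced above (the stacking order is a stylistic choice, not something the library mandates):

```python
from bedrock_llm.monitor import Logging, Monitor

@Monitor.monitor_async
@Logging.log_async
async def answer(prompt: str) -> str:
    # ... call your LLM client here ...
    return "response"
```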
## Architecture
The Bedrock LLM library is architected for scalability, reliability, and extensibility. Key architectural components include:
### Core Components
- **Client Layer**: Robust interfaces for Bedrock service interaction
  - Async/Sync clients with streaming support
  - Configurable retry logic
  - Memory management
  - Type-safe operations
- **Model Layer**: Flexible model implementation framework
  - Support for multiple LLM providers
  - Custom parameter optimization
  - Response formatting
- **Agent System**: Advanced autonomous capabilities
  - Tool management and execution
  - State preservation
  - Error handling
  - Type-safe responses
### Infrastructure (bedrock_be)
- AWS service integration
- Deployment automation
- Monitoring and scaling
- Security management
For a comprehensive architectural overview, see [ARCHITECTURE.md](ARCHITECTURE.md).
## Examples
For more detailed usage instructions and API documentation, please refer to our [documentation](https://github.com/Phicks-debug/bedrock_llm/blob/main/LIBRARY_DOCUMENTATION.md).
You can also see examples of how to use the library and build LLM flows:
- [basic](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/1_basic.py)
- [stream response](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/2_stream_response.py)
- [all support llm](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/3_all_llm.py)
- [simple chat bot](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/4_chatbot.py)
- [tool calling](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/5_tool_calling.py)
- [agent](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/7_agent.py)
and more are on the way; we are working on it :)
## More Documentation

For more detailed documentation, examples, and project insights, please refer to the following resources:

- **Documentation**: [https://github.com/Phicks-debug/bedrock_llm/blob/main/LIBRARY_DOCUMENTATION.md](https://github.com/Phicks-debug/bedrock_llm/blob/main/LIBRARY_DOCUMENTATION.md)
- **Examples**: [https://github.com/Phicks-debug/bedrock_llm/tree/main/examples](https://github.com/Phicks-debug/bedrock_llm/tree/main/examples)
- **Project Insights**: [https://github.com/Phicks-debug/bedrock_llm/tree/main/docs](https://github.com/Phicks-debug/bedrock_llm/tree/main/docs)
Feel free to reach out if you have any questions or need further assistance!
## Requirements
- python>=3.9
- pydantic>=2.0.0
- boto3>=1.18.0
- botocore>=1.21.0
- jinja2>=3.1.2
- psutil>=5.9.0
- pytz>=2023.3
- termcolor>=2.3.0
- databases[postgresql]>=0.7.0
- sqlalchemy>=2.0.0
- asyncpg>=0.27.0 # PostgreSQL async driver
- types-redis>=4.6.0
- types-pytz
- rx==3.2.0
## Contributing
We welcome contributions! Please see our [contributing guidelines](CONTRIBUTING.md) for more details.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Raw data
```json
{
    "_id": null,
    "home_page": "https://github.com/Phicks-debug/bedrock_llm",
    "name": "bedrock-llm",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": null,
    "keywords": "aws bedrock llm machine-learning ai",
    "author": "Tran Quy An",
    "author_email": "an.tq@techxcorp.com",
    "download_url": "https://files.pythonhosted.org/packages/90/49/7515f6d8ba351d39aad236544df0049e8c6071f24f8f973f62ea9993422e/bedrock_llm-0.1.95b0.tar.gz",
    "platform": null,
"description": "# Bedrock LLM\n\nA Python library for building LLM applications using Amazon Bedrock Provider and boto3 library. It aims to create best practices and production-ready solutions for various LLM models, including Anthropic, Llama, Amazon Titan, MistralAI, and AI21.\n\nThe library is structured into two main components:\n\n1. `bedrock_be`: Infrastructure and services for deploying LLM applications.\n2. `bedrock_llm`: LLM orchestration and interaction logic.\n\nThis structure allows for seamless integration of LLM capabilities with robust deployment and infrastructure management.\n\n![Conceptual Architecture](/assests/bedrock_llm.png)\n\n## Features\n\n- Support for multiple LLM models through Amazon Bedrock\n- Efficient LLM orchestration with `bedrock_llm`\n- Infrastructure and deployment services with `bedrock_be`\n- Enhanced Agent-based interactions with:\n - Robust tool validation and execution\n - Comprehensive error handling and logging\n - Configurable memory management\n - Type-safe responses with `AgentResponse`\n - Support for multiple LLM tool-calling conventions (Claude, Llama, Mistral, etc.)\n- Asynchronous and synchronous function support\n- Performance monitoring and logging functionality\n- Support for Retrieval-Augmented Generation (RAG)\n- Optimized Pipeline System:\n - Modular node-based architecture\n - Batch processing with configurable parameters\n - In-memory caching with size management\n - Parallel processing with thread pools\n - Type-safe node connections\n - Event-driven data flow\n - Filter nodes for data validation\n- Multi-Agent systems (in progress)\n- Image generation, speech-to-text (STT), and text-to-speech (TTS) support (coming soon)\n\n## Installation\n\nYou can install the Bedrock LLM library using pip:\n\n```bash\npip install bedrock-llm\n```\n\nThis library requires Python 3.9 or later.\n\n## AWS Credentials Setup\n\nBefore using the library, make sure you have your AWS credentials properly configured:\n\n1. Create or update your AWS credentials file at `~/.aws/credentials`:\n\n ```ini\n [bedrock]\n aws_access_key_id = YOUR_ACCESS_KEY\n aws_secret_access_key = YOUR_SECRET_KEY\n ```\n\n2. Create or update your AWS config file at `~/.aws/config`:\n\n ```ini\n [profile bedrock]\n region = us-east-1\n ```\n\n3. 
When initializing the client, specify the profile name:\n\n ```python\n from bedrock_llm import LLMClient, ModelName, ModelConfig\n\n # Create a LLM client with specific AWS profile\n client = LLMClient(\n region_name=\"us-east-1\",\n model_name=ModelName.MISTRAL_7B,\n profile_name=\"bedrock\" # Specify your AWS profile name\n )\n ```\n\n You can verify your credentials by running:\n\n ```bash\n aws bedrock list-foundation-models --profile bedrock\n ```\n\n## Quick Start\n\n### Simple text generation\n\n```python\nfrom bedrock_llm import LLMClient, ModelName, ModelConfig\n\n# Create a LLM client\nclient = LLMClient(\n region_name=\"us-east-1\",\n model_name=ModelName.MISTRAL_7B\n)\n\n# Create a configuration for inference parameters\nconfig = ModelConfig(\n temperature=0.1,\n top_p=0.9,\n max_tokens=512\n)\n\n# Create a prompt\nprompt = \"Who are you?\"\n\n# Invoke the model and get results\nresponse, stop_reason = client.generate(config, prompt)\n\n# Print out the results\ncprint(response.content, \"green\")\ncprint(stop_reason, \"red\")\n```\n\n### Simple tool calling\n\n```python\nfrom bedrock_llm import Agent, ModelName\nfrom bedrock_llm.schema.tools import ToolMetadata, InputSchema, PropertyAttr\n\nagent = Agent(\n region_name=\"us-east-1\",\n model_name=ModelName.CLAUDE_3_5_HAIKU\n)\n\n# Define the tool description for the model\nget_weather_tool = ToolMetadata(\n name=\"get_weather\",\n description=\"Get the weather in specific location\",\n input_schema=InputSchema(\n type=\"object\",\n properties={\n \"location\": PropertyAttr(\n type=\"string\",\n description=\"Location to search for, example: New York, WashingtonDC, ...\"\n )\n },\n required=[\"location\"]\n )\n)\n\n# Define the tool\n@Agent.tool(get_weather_tool)\nasync def get_weather(location: str):\n return f\"{location} is 20*C\"\n\n\nasync def main():\n prompt = input(\"User: \")\n\n async for token, stop_reason, response, tool_result in agent.generate_and_action_async(\n prompt=prompt,\n tools=[\"get_weather\"]\n ):\n if token:\n print(token, end=\"\", flush=True)\n if stop_reason:\n print(f\"\\n{stop_reason}\")\n\n\nif __name__ == \"__main__\":\n import asyncio\n asyncio.run(main())\n```\n\n### Pipeline Usage\n\n```python\nfrom bedrock_llm.pipeline import Pipeline, BatchNode, CachedNode\n\n# Create a pipeline for efficient text processing\npipeline = Pipeline(\"text-processor\")\n\n# Add optimized nodes\nbatch_node = BatchNode(\n \"batch-embeddings\",\n embed_batch_func,\n batch_size=32\n)\n\ncache_node = CachedNode(\n \"cached-process\",\n process_func,\n cache_size=1000\n)\n\n# Connect nodes\npipeline.add_node(batch_node)\npipeline.add_node(cache_node)\nbatch_node.connect(cache_node)\n\n# Process data\nresult = await pipeline.execute(input_data)\n```\n\n### Agent Features\n\nThe Agent class in `bedrock_llm` provides powerful capabilities for building LLM-powered applications:\n\n#### Tool Management\n\n```python\nfrom bedrock_llm import Agent, ToolMetadata\nfrom typing import Dict\n\n# Define a tool with metadata\n@Agent.tool(\n metadata=ToolMetadata(\n name=\"search\",\n description=\"Search for information\",\n input_schema={\n \"type\": \"object\",\n \"properties\": {\n \"query\": {\"type\": \"string\", \"description\": \"Search query\"}\n },\n \"required\": [\"query\"]\n }\n )\n)\nasync def search(query: str) -> Dict:\n # Tool implementation\n pass\n```\n\n#### Error Handling\n\nThe library provides comprehensive error handling with custom exceptions:\n\n```python\ntry:\n result = await 
agent.generate_and_action_async(\n prompt=\"Search for Python tutorials\",\n tools=[\"search\"]\n )\nexcept ToolExecutionError as e:\n print(f\"Tool '{e.tool_name}' failed: {e.message}\")\n if e.original_error:\n print(f\"Original error: {e.original_error}\")\n```\n\n#### Memory Management\n\nConfigure memory limits to manage conversation history:\n\n```python\nagent = Agent(\n region_name=\"us-west-2\",\n model_name=ModelName.ANTHROPIC_CLAUDE_V2,\n memory_limit=100 # Keep last 100 messages\n)\n```\n\n#### Type-Safe Responses\n\nThe library now provides type-safe responses using TypedDict:\n\n```python\nasync for response in agent.generate_and_action_async(...):\n token: Optional[str] = response[\"token\"]\n stop_reason: Optional[StopReason] = response[\"stop_reason\"]\n message: Optional[MessageBlock] = response[\"message\"]\n tool_results: Optional[List] = response[\"tool_results\"]\n```\n\n#### Tool States\n\nSupport for different LLM tool-calling conventions:\n\n- Claude/Llama style: Uses ToolUseBlock for tool execution\n- Mistral/Jamba style: Uses ToolCallBlock for function calling\n\n## Monitoring and Logging\n\nUse the `monitor` decorators for performance monitoring:\n\n```python\nfrom bedrock_llm.monitor import Monitor\n\n@Monitor.monitor_async\nasync def my_async_function():\n # Your async function code here\n\n@Monitor.monitor_sync\ndef my_sync_function():\n # Your sync function code here\n```\n\nUse the `log` decorators for logging function calls:\n\n```python\nfrom bedrock_llm.monitor import Logging\n\n@Logging.log_async\nasync def my_async_function():\n # Your async function code here\n\n@Logging.log_sync\ndef my_sync_function():\n # Your sync function code here\n```\n\nThese decorators are optimized for minimal performance impact on your application.\n\n## Architecture\n\nThe Bedrock LLM library is architected for scalability, reliability, and extensibility. 
Key architectural components include:\n\n### Core Components\n\n- **Client Layer**: Robust interfaces for Bedrock service interaction\n - Async/Sync clients with streaming support\n - Configurable retry logic\n - Memory management\n - Type-safe operations\n\n- **Model Layer**: Flexible model implementation framework\n - Support for multiple LLM providers\n - Custom parameter optimization\n - Response formatting\n\n- **Agent System**: Advanced autonomous capabilities\n - Tool management and execution\n - State preservation\n - Error handling\n - Type-safe responses\n\n### Infrastructure (bedrock_be)\n\n- AWS service integration\n- Deployment automation\n- Monitoring and scaling\n- Security management\n\nFor a comprehensive architectural overview, see [ARCHITECTURE.md](ARCHITECTURE.md).\n\n## Examples\n\nFor more detailed usage instructions and API documentation, please refer to our [documentation](https://github.com/yourusername/bedrock_llm/LIBRARY_DOCUMENTATION.md).\n\nYou can also see some examples of how to use and build LLM flow using the libary\n\n- [basic](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/1_basic.py)\n- [stream response](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/2_stream_response.py)\n- [all support llm](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/3_all_llm.py)\n- [simple chat bot](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/4_chatbot.py)\n- [tool calling](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/5_tool_calling.py)\n- [agent](https://github.com/Phicks-debug/bedrock_llm/blob/main/examples/7_agent.py)\n\nand more to come, we are working on it :)\n\n## More Documents and wanna understand the project more?\n\nFor more detailed documentation, examples, and project insights, please refer to the following resources:\n\n- **Documentation**: [https://github.com/Phicks-debug/bedrock_llm/LIBRARY_DOCUMENTATION.md](<https://github.com/Phicks-debug/bedrock_llm/LIBRARY_DOCUMENTATION.md>)\n- **Examples**: [https://github.com/Phicks-debug/bedrock_llm/examples](https://github.com/Phicks-debug/bedrock_llm/examples)\n- **Project Insights**: [https://github.com/Phicks-debug/bedrock_llm/docs](https://github.com/Phicks-debug/bedrock_llm/docs)\n\nFeel free to reach out if you have any questions or need further assistance!\n\n## Requirements\n\n- python>=3.9\n- pydantic>=2.0.0\n- boto3>=1.18.0\n- botocore>=1.21.0\n- jinja2>=3.1.2\n- psutil>=5.9.0\n- pytz>=2023.3\n- termcolor>=2.3.0\n- databases[postgresql]>=0.7.0\n- sqlalchemy>=2.0.0\n- asyncpg>=0.27.0 # PostgreSQL async driver\n- types-redis>=4.6.0\n- types-pytz\n- rx==3.2.0\n\n## Contributing\n\nWe welcome contributions! Please see our [contributing guidelines](CONTRIBUTING.md) for more details.\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n",
"bugtrack_url": null,
"license": null,
"summary": "A Python LLM frameworkfor interacting with AWS Bedrock services, built on top of boto3. This library serves as a comprehensive tool for fast prototyping, building POCs, and deploying production-ready LLM applications with robust infrastructure support.",
"version": "0.1.95b0",
"project_urls": {
"Homepage": "https://github.com/Phicks-debug/bedrock_llm"
},
"split_keywords": [
"aws",
"bedrock",
"llm",
"machine-learning",
"ai"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "c9ae9e741faba3c9c70d5ad6fc0102842df6bb4772991b61d58fc6c24055401c",
"md5": "741e3d23d7212ca21762301ca1872072",
"sha256": "ef66b4918227dd871bb51623a5b2c86d0848cb46636b0d65c0dd069ba0b49a9c"
},
"downloads": -1,
"filename": "bedrock_llm-0.1.95b0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "741e3d23d7212ca21762301ca1872072",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 227668,
"upload_time": "2024-11-23T05:47:47",
"upload_time_iso_8601": "2024-11-23T05:47:47.333400Z",
"url": "https://files.pythonhosted.org/packages/c9/ae/9e741faba3c9c70d5ad6fc0102842df6bb4772991b61d58fc6c24055401c/bedrock_llm-0.1.95b0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "90497515f6d8ba351d39aad236544df0049e8c6071f24f8f973f62ea9993422e",
"md5": "445ecdd52e7b255ea62a91d3779f42ee",
"sha256": "22b4eaf274b88ddcfdbd334e7cbb1d3d895b3b2e51730bc1ff9ae57e2af145c5"
},
"downloads": -1,
"filename": "bedrock_llm-0.1.95b0.tar.gz",
"has_sig": false,
"md5_digest": "445ecdd52e7b255ea62a91d3779f42ee",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 101720,
"upload_time": "2024-11-23T05:47:49",
"upload_time_iso_8601": "2024-11-23T05:47:49.907392Z",
"url": "https://files.pythonhosted.org/packages/90/49/7515f6d8ba351d39aad236544df0049e8c6071f24f8f973f62ea9993422e/bedrock_llm-0.1.95b0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-11-23 05:47:49",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Phicks-debug",
"github_project": "bedrock_llm",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [],
"lcname": "bedrock-llm"
}