| Field | Value |
| --- | --- |
| Name | abs-langchain-suite |
| Version | 0.1.8 |
| Summary | LangChain utilities with token tracking, RAG, and agent support |
| Author | AutoBridgeSystems |
| License | MIT |
| Requires Python | <4.0, >=3.13 |
| Upload time | 2025-07-11 10:53:05 |
| Home page | None |
| Maintainer | None |
| Docs URL | None |
| Requirements | None recorded |
# LangChain Suite Package
A comprehensive package providing LangChain utilities with multiple AI providers, token tracking, RAG (Retrieval-Augmented Generation), vector services, and agent support.
## Features
- **Multiple AI Providers**: Support for OpenAI, Azure OpenAI, and Claude (Anthropic)
- **Embeddings Support**: Easy text embedding generation with batch processing
- **Vector Database Integration**: Support for Cosmos DB, Pinecone, Weaviate, Chroma, and Qdrant
- **RAG Services**: Complete RAG pipeline with semantic search and retrieval chains
- **Embedding Hooks**: Automatic embedding generation and storage for CRUD operations
- **Token Usage Tracking**: Database logging for token consumption monitoring
- **CLI Interface**: Command-line tool for quick AI provider interactions
- **Configuration Management**: Flexible configuration options for all components
## Installation
```bash
pip install abs-langchain-suite
```
## Quick Start
### Basic Provider Usage
```python
from abs_langchain_suite import OpenAIProvider, AzureOpenAIProvider, ClaudeProvider
# OpenAI Provider
openai_provider = OpenAIProvider(api_key="your-openai-key")
response = openai_provider.chat("Hello, how are you?")
# Azure OpenAI Provider
azure_provider = AzureOpenAIProvider(
    api_key="your-azure-key",
    azure_endpoint="https://your-resource.openai.azure.com/",
    deployment_name="your-deployment"
)
response = azure_provider.chat("Explain quantum computing")
# Claude Provider
claude_provider = ClaudeProvider(api_key="your-anthropic-key")
response = claude_provider.chat("Write a short story")
```
### Using Provider Factory
```python
from abs_langchain_suite import create_provider
# Create providers using factory
openai_provider = create_provider("openai", api_key="your-key")
azure_provider = create_provider("azure", api_key="your-key", azure_endpoint="...", deployment_name="...")
claude_provider = create_provider("claude", api_key="your-key")
```
### Embeddings and Vector Operations
```python
from abs_langchain_suite import OpenAIProvider
from abs_langchain_suite.embeddings import EmbeddingService
from abs_langchain_suite.schemas import EmbeddingConfig
# Create embedding service
provider = OpenAIProvider(api_key="your-key")
config = EmbeddingConfig(
    model_name="text-embedding-3-small",
    dimensions=1536,
    batch_size=100
)
embedding_service = EmbeddingService(provider, config)
# Generate embeddings
embedding = embedding_service.embed_query("Sample text")
embeddings = embedding_service.embed_documents(["Doc 1", "Doc 2", "Doc 3"])
```
### RAG (Retrieval-Augmented Generation)
```python
from abs_langchain_suite import OpenAIProvider
from abs_langchain_suite.services import RAGService
# Setup RAG service
provider = OpenAIProvider(api_key="your-key")
rag_service = RAGService(provider)
# Your documents
documents = [
    "Python is a programming language.",
    "Machine learning is a subset of AI.",
    "Deep learning uses neural networks."
]

# Perform semantic search
results = rag_service.semantic_search(
    query="What is Python?",
    texts=documents,
    k=2
)

# Build and run RAG chain
response = rag_service.run_rag(
    query="Explain machine learning",
    texts=documents,
    chain_type="stuff"
)
```
### Vector Database Integration
```python
from abs_langchain_suite.embeddings import VectorServiceFactory
from abs_langchain_suite.schemas import CosmosVectorConfig
# Configure Cosmos DB
cosmos_config = CosmosVectorConfig(
    database=your_cosmos_database,
    container_name="vectors",
    vector_field="embedding"
)

# Create vector service
vector_service = VectorServiceFactory.create_service(cosmos_config)

# Store vectors (run inside an async context)
await vector_service.create_item(
    item_id="doc_1",
    vector=[0.1, 0.2, 0.3, ...],
    metadata={"title": "Sample Document", "content": "..."}
)

# Search similar vectors
results = await vector_service.search_similar(
    query_vector=[0.1, 0.2, 0.3, ...],
    limit=5
)
```
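Under the hood, "similar" for a vector search is typically cosine similarity between the query vector and each stored vector (an assumption here; the actual metric depends on how the Cosmos DB vector index is configured). A self-contained sketch of that ranking, independent of this library:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 2-D vectors standing in for real embeddings
query = [1.0, 0.0]
stored = {"doc_1": [1.0, 0.0], "doc_2": [0.0, 1.0], "doc_3": [1.0, 1.0]}

# Rank stored items by similarity to the query, best first
ranked = sorted(stored, key=lambda name: cosine_similarity(query, stored[name]), reverse=True)
print(ranked)  # → ['doc_1', 'doc_3', 'doc_2']
```

A real vector database applies the same idea but with an approximate-nearest-neighbor index rather than a full scan.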
### Embedding Hooks for CRUD Operations
```python
from abs_langchain_suite import OpenAIProvider
from abs_langchain_suite.services import EmbeddingHookMixin
from abs_langchain_suite.embeddings import EmbeddingService
from abs_langchain_suite.schemas import EmbeddingConfig, CosmosVectorConfig

class DocumentService(EmbeddingHookMixin):
    def __init__(self):
        provider = OpenAIProvider(api_key="your-key")
        embedding_config = EmbeddingConfig(model_name="text-embedding-3-small")
        # `db` is your Cosmos DB database client
        vector_config = CosmosVectorConfig(database=db, container_name="documents")

        embedding_service = EmbeddingService(provider, embedding_config)
        super().__init__(embedding_service, embedding_config, vector_config)

    async def create_document(self, document_id: str, content: str, metadata: dict):
        # Your document creation logic
        document = {"id": document_id, "content": content, **metadata}

        # Automatically generate and store embedding
        await self.embed_on_create(document_id, content, document)

        return document

    async def search_similar_documents(self, query: str, limit: int = 10):
        return await self.search_similar(query, limit)
### Token Usage Tracking
```python
from abs_langchain_suite.logging import DBTokenUsageLogger
from abs_langchain_suite.logging.db import SQLClient
from abs_langchain_suite import OpenAIProvider
# Setup token usage logger
db_client = SQLClient(connection_string="your-db-connection")
token_logger = DBTokenUsageLogger(db_client, table="token_usage")
# Create provider with token tracking
provider = OpenAIProvider(
    api_key="your-key",
    callbacks=[token_logger]
)
# All interactions will now log token usage
response = provider.chat("Hello world")
```
## CLI Usage
The package includes a command-line interface for quick interactions:
```bash
# Basic usage
provider-cli --provider openai --api_key YOUR_KEY --message "Hello world"
# With custom model
provider-cli --provider openai --api_key YOUR_KEY --model_name gpt-4 --message "Explain AI"
# Azure OpenAI
provider-cli --provider azure --api_key YOUR_KEY --azure_endpoint "https://your-resource.openai.azure.com/" --deployment_name "your-deployment" --message "Hello"
# Claude
provider-cli --provider claude --api_key YOUR_KEY --message "Write a poem"
```
## Configuration
### Provider Configuration
All providers support extensive configuration options:
```python
# OpenAI Provider
openai_provider = OpenAIProvider(
    api_key="your-key",
    model_name="gpt-4",
    temperature=0.7,
    max_tokens=1000,
    streaming=True,
    base_url="https://api.openai.com/v1"
)

# Azure OpenAI Provider
azure_provider = AzureOpenAIProvider(
    api_key="your-key",
    azure_endpoint="https://your-resource.openai.azure.com/",
    deployment_name="your-deployment",
    api_version="2024-02-15-preview",
    temperature=0.3
)

# Claude Provider
claude_provider = ClaudeProvider(
    api_key="your-key",
    model_name="claude-3-sonnet-20240229",
    max_tokens=1000,
    temperature=0.5
)
```
### Embedding Configuration
```python
from abs_langchain_suite.schemas import EmbeddingConfig
config = EmbeddingConfig(
    model_name="text-embedding-3-small",
    dimensions=1536,
    batch_size=100,
    max_retries=3,
    timeout=30.0,
    enable_caching=True,
    cache_ttl=3600
)
```
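The `batch_size` option caps how many texts go into a single embedding request. The chunking itself is plain Python; a minimal sketch (the `batched` helper is illustrative, not part of this package's API):

```python
def batched(texts, batch_size):
    """Yield successive slices of at most `batch_size` texts."""
    for start in range(0, len(texts), batch_size):
        yield texts[start:start + batch_size]

# With batch_size=100, 250 documents become three requests: 100, 100, and 50.
docs = [f"doc {i}" for i in range(250)]
batches = list(batched(docs, 100))
print([len(b) for b in batches])  # → [100, 100, 50]
```

Larger batches mean fewer round trips but bigger requests; the right value depends on the embedding model's per-request limits.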
### Vector Database Configuration
```python
# Cosmos DB
from abs_langchain_suite.schemas import CosmosVectorConfig
cosmos_config = CosmosVectorConfig(
    database=your_cosmos_database,
    container_name="vectors",
    vector_field="embedding"
)

# Pinecone (future implementation)
from abs_langchain_suite.schemas import PineconeVectorConfig

pinecone_config = PineconeVectorConfig(
    environment="us-west1-gcp",
    index_name="my-index",
    namespace="my-namespace"
)
```
## Advanced Features
### Custom Chains and Prompts
```python
from abs_langchain_suite import OpenAIProvider
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
provider = OpenAIProvider(api_key="your-key")
# Create custom prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant specialized in {domain}."),
    ("human", "Answer this question: {question}")
])

# Create chain with custom parameters
chain = provider.create_chain(
    prompt,
    output_parser=StrOutputParser(),
    temperature=0.1,
    max_tokens=200
)

# Use the chain
result = chain.invoke({
    "domain": "machine learning",
    "question": "What is supervised learning?"
})
```
### Async Operations
All providers support async operations:
```python
import asyncio

from abs_langchain_suite import OpenAIProvider

async def main():
    provider = OpenAIProvider(api_key="your-key")

    # Async chat
    response = await provider.async_chat("Hello world")

    # Async embeddings
    embedding = await provider.async_embed_text("Sample text")
    embeddings = await provider.async_embed_documents(["Doc 1", "Doc 2"])

asyncio.run(main())
```
### Agent Support
```python
from abs_langchain_suite import OpenAIProvider
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import BaseTool
provider = OpenAIProvider(api_key="your-key")
# Define tools
tools = [
    # Your custom tools here
]

# Create prompt for agent
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    ("human", "{input}")
])
# Run agent
result = provider.run_agent(tools, prompt, {"input": "What's the weather like?"})
```
## Environment Variables
The package automatically reads these environment variables:
- `OPENAI_API_KEY`: Your OpenAI API key
- `OPENAI_BASE_URL`: Custom base URL for OpenAI
- `OPENAI_ORGANIZATION`: Organization ID
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key
- `AZURE_OPENAI_ENDPOINT`: Azure OpenAI endpoint
- `ANTHROPIC_API_KEY`: Your Anthropic API key
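A common resolution order (an assumption about this package's internals, shown here generically with a hypothetical helper) is: use an explicitly passed key if present, otherwise fall back to the environment:

```python
import os

def resolve_api_key(explicit_key=None, env_var="OPENAI_API_KEY"):
    """Return the explicit key if given, else fall back to the environment variable."""
    key = explicit_key or os.environ.get(env_var)
    if key is None:
        raise ValueError(f"No API key: pass api_key or set {env_var}")
    return key

os.environ["OPENAI_API_KEY"] = "sk-from-env"
print(resolve_api_key())             # → sk-from-env
print(resolve_api_key("sk-direct"))  # → sk-direct
```

With this pattern, `OpenAIProvider()` can be constructed without arguments in environments where `OPENAI_API_KEY` is already exported.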
## API Reference
### Providers
- `OpenAIProvider`: OpenAI API integration
- `AzureOpenAIProvider`: Azure OpenAI integration
- `ClaudeProvider`: Anthropic Claude integration
- `BaseProvider`: Abstract base class for custom providers
- `create_provider()`: Factory function for creating providers
### Services
- `RAGService`: Complete RAG pipeline implementation
- `EmbeddingService`: Embedding generation and management
- `EmbeddingHookMixin`: Mixin for automatic embedding in CRUD operations
### Vector Services
- `VectorService`: Abstract base for vector database operations
- `CosmosVectorService`: Azure Cosmos DB implementation
- `VectorServiceFactory`: Factory for creating vector services
### Schemas
- `EmbeddingConfig`: Configuration for embedding generation
- `BaseVectorConfig`: Base vector database configuration
- `CosmosVectorConfig`: Cosmos DB specific configuration
- `PineconeVectorConfig`: Pinecone specific configuration
- `WeaviateVectorConfig`: Weaviate specific configuration
- `ChromaVectorConfig`: Chroma specific configuration
- `QdrantVectorConfig`: Qdrant specific configuration
### Logging
- `DBTokenUsageLogger`: Database token usage tracking
- `BaseDBClient`: Abstract base for database clients
- `SQLClient`: SQL database client
- `NoSQLClient`: NoSQL database client
## Examples
See the `examples/` directory for comprehensive usage examples covering all features.
## License
MIT License