# LangChain Anthropic Smart Cache
🚀 **Intelligent cache management for LangChain Anthropic models with advanced optimization strategies**
[PyPI](https://badge.fury.io/py/langchain-anthropic-smart-cache) · [Python 3.8+](https://www.python.org/downloads/) · [MIT License](https://opensource.org/licenses/MIT)
> **📚 Learn about Anthropic's prompt caching:** [Official Documentation](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching)
## ⚡ What is this?
A sophisticated callback handler that automatically optimizes Anthropic Claude's cache usage to **reduce costs and improve performance**. It implements intelligent priority-based caching that ensures your most important content (tools, system prompts, large content blocks) gets cached first.
## 🎯 Key Features
- **Smart Priority System**: Tools and system prompts get priority when not cached
- **Automatic Cache Management**: 5-minute cache duration with intelligent refresh
- **Cost Optimization**: Prioritizes larger content blocks for maximum savings
- **Detailed Analytics**: Comprehensive logging and cache efficiency metrics
- **Zero Configuration**: Works out of the box with sensible defaults
- **Anthropic Optimized**: Built specifically for Claude's cache_control feature
## 📦 Installation
```bash
pip install langchain-anthropic-smart-cache
```
## 🚀 Quick Start
```python
from langchain_anthropic import ChatAnthropic
from langchain_anthropic_smart_cache import SmartCacheCallbackHandler
# Initialize the cache handler
cache_handler = SmartCacheCallbackHandler(
    cache_duration=300,    # 5 minutes
    max_cache_blocks=4,    # Anthropic's limit
    min_token_count=1024   # Minimum tokens to cache
)

# Add to your LangChain model
llm = ChatAnthropic(
    model="claude-3-5-sonnet-20241022",
    callbacks=[cache_handler]
)

# Use normally - caching happens automatically!
response = llm.invoke([
    {"role": "system", "content": "You are a helpful assistant..."},
    {"role": "user", "content": "Hello!"}
])
```
## 🧠 How Smart Caching Works
### 🎯 Priority-Based Cache Management
The system uses a 5-level priority scheme to ensure the most valuable content gets cached first:
```mermaid
graph TD
A[Incoming Request] --> B{Analyze Content}
B --> C[Tools Available?]
B --> D[System Prompts?]
B --> E[User Content]
C --> F{Tools Cached?}
F -->|No| G[Priority 1: Cache Tools]
F -->|Yes, Fresh| H[Skip Tools]
F -->|Yes, Expiring| I[Priority 4: Refresh Tools]
D --> J{System Cached?}
J -->|No| K[Priority 2: Cache System]
J -->|Yes, Fresh| L[Skip System]
J -->|Yes, Expiring| M[Priority 5: Refresh System]
E --> N[Priority 3: Cache Content by Size]
G --> O[Allocate Cache Slots]
K --> O
N --> O
I --> O
M --> O
O --> P{Slots Available?}
P -->|Yes| Q[Apply Cache Control]
P -->|No| R[Skip Lower Priority Items]
```
### 🔄 Cache Lifecycle Flow
```mermaid
sequenceDiagram
participant User
participant Handler as SmartCacheHandler
participant Cache as Cache Storage
participant Claude as Anthropic API
User->>Handler: Send Request with Tools + System + Content
Handler->>Cache: Check existing cache status
Cache-->>Handler: Return cache metadata
Note over Handler: Priority Analysis
Handler->>Handler: Priority 1: Uncached Tools
Handler->>Handler: Priority 2: Uncached System
Handler->>Handler: Priority 3: Content (by size)
Handler->>Handler: Priority 4: Expiring Tools
Handler->>Handler: Priority 5: Expiring System
Note over Handler: Slot Allocation (Max 4)
Handler->>Handler: Assign cache_control headers
Handler->>Claude: Send optimized request
Claude-->>Handler: Response with cache info
Handler->>Cache: Update cache metadata
Handler-->>User: Response + Cache Analytics
```
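The freshness classification that drives priorities 4 and 5 ("expiring" entries get refreshed when slots remain) can be sketched in a few lines. The refresh window below is an assumed parameter for illustration, not necessarily the package's actual value:

```python
CACHE_DURATION = 300   # seconds, the handler's default cache validity
REFRESH_WINDOW = 60    # hypothetical "expiring soon" threshold

def cache_state(cached_at: float, now: float) -> str:
    """Classify a cache entry as fresh, expiring, or expired by age."""
    age = now - cached_at
    if age >= CACHE_DURATION:
        return "expired"
    if age >= CACHE_DURATION - REFRESH_WINDOW:
        return "expiring"
    return "fresh"

print(cache_state(0, 100))  # fresh
print(cache_state(0, 250))  # expiring
print(cache_state(0, 301))  # expired
```

Expired entries are treated like uncached content (priority 1/2), while expiring entries only compete for leftover slots (priority 4/5).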
### 🎲 Decision Algorithm
The cache decision algorithm follows this logic:
```mermaid
flowchart TD
Start([New Request]) --> Clear[Clear Previous Cache Controls]
Clear --> Parse[Parse Messages for Tools/System/Content]
Parse --> CheckTools{Tools Present?}
CheckTools -->|Yes| ToolsCached{Tools Cached & Fresh?}
CheckTools -->|No| CheckSystem{System Prompts?}
ToolsCached -->|No| AddTools[Add Tools - Priority 1]
ToolsCached -->|Yes| CheckSystem
CheckSystem -->|Yes| SystemCached{System Cached & Fresh?}
CheckSystem -->|No| ProcessContent[Process Content Blocks]
SystemCached -->|No| AddSystem[Add System - Priority 2]
SystemCached -->|Yes| ProcessContent
ProcessContent --> SortContent[Sort Content by Token Count]
SortContent --> AddContent[Add Content - Priority 3]
AddTools --> CheckSlots{Slots < 4?}
AddSystem --> CheckSlots
AddContent --> CheckSlots
CheckSlots -->|Yes| MoreContent{More Items?}
CheckSlots -->|No| RefreshCheck{Expired Items to Refresh?}
MoreContent -->|Yes| AddContent
MoreContent -->|No| RefreshCheck
RefreshCheck -->|Yes| AddRefresh[Add Refresh - Priority 4/5]
RefreshCheck -->|No| Complete[Complete Cache Assignment]
AddRefresh --> FinalCheck{Slots < 4?}
FinalCheck -->|Yes| RefreshCheck
FinalCheck -->|No| Complete
Complete --> SendRequest[Send to Anthropic API]
SendRequest --> UpdateCache[Update Cache Metadata]
UpdateCache --> End([Return Response])
style AddTools fill:#ff6b6b
style AddSystem fill:#4ecdc4
style AddContent fill:#45b7d1
style AddRefresh fill:#96ceb4
```
### 💡 Priority System Explained
| Priority | Type | Condition | Why? |
|----------|------|-----------|------|
| **1** 🔴 | Tools | Not cached or expired | Critical for function calling - failures break functionality |
| **2** 🟠 | System | Not cached or expired | Core instructions that define AI behavior |
| **3** 🟡 | Content | Always evaluated | User data, sorted by size for maximum cache efficiency |
| **4** 🟢 | Tools | Cached but expiring soon | Refresh tools proactively to avoid cache misses |
| **5** 🔵 | System | Cached but expiring soon | Refresh system prompts when slots available |
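The ordering rule implied by this table can be sketched as a ranking function: lower priority numbers win, and within a priority level larger blocks are cached first. The names and item structure here are illustrative, not the package's actual API:

```python
def allocate_slots(items, max_slots=4, min_tokens=1024):
    """Rank cacheable items by (priority, -tokens) and fill the slots.

    `items` is a list of dicts with "kind", "priority", and "tokens" keys;
    the real handler derives these from the outgoing request.
    """
    eligible = [i for i in items if i["tokens"] >= min_tokens]
    too_small = [i for i in items if i["tokens"] < min_tokens]
    ranked = sorted(eligible, key=lambda i: (i["priority"], -i["tokens"]))
    cached = ranked[:max_slots]
    skipped = ranked[max_slots:] + too_small
    return cached, skipped

cached, _ = allocate_slots([
    {"kind": "content", "priority": 3, "tokens": 4000},
    {"kind": "tools",   "priority": 1, "tokens": 1500},
    {"kind": "content", "priority": 3, "tokens": 900},   # below minimum
])
print([i["kind"] for i in cached])  # ['tools', 'content']
```

Note that tools win a slot despite having fewer tokens than the content block: priority is compared before size.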
### 📊 Cache Efficiency Example
```mermaid
pie title Cache Slot Allocation Example
"Tools (Priority 1)" : 25
"System (Priority 2)" : 25
"Large Content (Priority 3)" : 35
"Medium Content (Priority 3)" : 15
```
**Scenario**: 4 available slots, competing content
- 🔴 **Slot 1**: Tools (3,000 tokens) - Priority 1 (uncached)
- 🟠 **Slot 2**: System prompt (1,200 tokens) - Priority 2 (uncached)
- 🟡 **Slot 3**: Large content (5,000 tokens) - Priority 3 (new)
- 🟡 **Slot 4**: Medium content (2,000 tokens) - Priority 3 (new)
- ❌ **Skipped**: Small content (800 tokens) - Priority 3 (below minimum)
- ❌ **Skipped**: Cached system refresh (1,200 tokens) - Priority 5 (no slots left)
**Result**: 11,200 tokens cached, optimizing for both functionality and cost savings.
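The totals in this scenario can be checked with a short, self-contained sketch (assuming the 4-slot limit, the 1,024-token minimum, and the priority-then-size ordering described above):

```python
# (kind, priority, tokens) for each competing item in the scenario
items = [
    ("tools", 1, 3000),
    ("system", 2, 1200),
    ("large content", 3, 5000),
    ("medium content", 3, 2000),
    ("small content", 3, 800),      # below the 1,024-token minimum
    ("system refresh", 5, 1200),    # expiring cache entry
]
eligible = [i for i in items if i[2] >= 1024]
ranked = sorted(eligible, key=lambda i: (i[1], -i[2]))
cached = ranked[:4]                 # Anthropic's 4-block limit
print(sum(tokens for _, _, tokens in cached))  # 11200
```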
## 📊 Cache Analytics
The handler provides detailed logging:
```
💾 CACHED tools (slot 1/4) - NEW tools needed caching
⚡ CACHED content (slot 2/4, 3001 tokens) - MAINTAIN existing cache
🔄 CACHED content (slot 3/4, 2000 tokens) - REFRESH expiring cache
💾 CACHED content (slot 4/4, 1705 tokens) - NEW content block

🚫 SKIPPED ITEMS (2 items):
   ❌ content (priority 3, new, 1524 tokens) - smaller new content, larger cached content prioritized
   ❌ system (priority 5, cached, 1182 tokens) - system already cached, content got priority

📊 CACHE SUMMARY:
   🎯 Slots used: 4/4
   ⚡ Previously cached: 2 items (50.0%)
   💾 Newly cached: 2 items
   🚫 Skipped: 2 items
   📈 Cached tokens: 7,886 | Skipped tokens: 2,706
```
## ⚙️ Configuration
```python
cache_handler = SmartCacheCallbackHandler(
    cache_duration=300,     # Cache validity in seconds (default: 5 minutes)
    max_cache_blocks=4,     # Max cache blocks (Anthropic limit: 4)
    min_token_count=1024,   # Minimum tokens to consider for caching
    enable_logging=True,    # Enable detailed cache logging
    log_level="INFO",       # Logging level
    cache_dir=None,         # Custom cache directory (default: temp)
)
```
## 🎯 Advanced Usage
### With Tools
```python
from langchain_core.tools import tool
@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

# Tools automatically get highest priority when not cached
llm_with_tools = llm.bind_tools([get_weather])
```
### Cache Statistics
```python
# Access cache statistics
stats = cache_handler.get_stats()
print(f"Cache hit rate: {stats.cache_hit_rate:.1f}%")
print(f"Total tokens cached: {stats.total_tokens_cached:,}")
print(f"Estimated cost savings: ${stats.estimated_savings:.2f}")
```
## 🔧 Requirements
- **Python 3.8+**
- **langchain-core >= 0.3.62**
- **langchain-anthropic >= 0.3.14**
- **tiktoken >= 0.8.0**
> **Note**: This package is specifically designed for Anthropic Claude models that support the `cache_control` feature. Other providers may be added in future versions.
## 📈 Performance Benefits
- **Cost Reduction**: Up to 90% savings on repeated content
- **Latency Improvement**: Cached content loads ~10x faster
- **Smart Prioritization**: Ensures most valuable content stays cached
- **Automatic Management**: No manual cache invalidation needed
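The "up to 90%" figure follows from Anthropic's published prompt-caching multipliers (cache writes billed at roughly 1.25x the base input rate, cache reads at roughly 0.1x; check the current pricing docs for exact rates). A rough break-even sketch:

```python
def relative_cost(reads: int, write_mult: float = 1.25,
                  read_mult: float = 0.10) -> float:
    """Cost of a block written once and read on each later call,
    relative to resending it uncached every time (1.0 = no savings)."""
    cached = write_mult + (reads - 1) * read_mult
    return cached / reads

for n in (1, 2, 10, 100):
    print(f"{n:>3} calls: {(1 - relative_cost(n)) * 100:5.1f}% savings")
```

A single call actually costs slightly more because of the write premium; savings only approach the ~90% ceiling as the same block is reused across many requests within the cache window.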
## 🤝 Contributing
Contributions are welcome! Please read our [Contributing Guide](CONTRIBUTING.md) for details.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built for the [LangChain](https://github.com/langchain-ai/langchain) ecosystem
- Optimized for [Anthropic Claude](https://www.anthropic.com/claude) models
- Inspired by modern caching strategies and cost optimization principles