# Cost Katana Python SDK
A simple, unified interface for AI models with built-in cost optimization, failover, and analytics. Use any AI provider through one consistent API, with no need to juggle provider API keys or provider-specific implementations.
## 🚀 Quick Start
### Installation
```bash
pip install cost-katana
```
### Get Your API Key
1. Visit [Cost Katana Dashboard](https://costkatana.com/dashboard)
2. Create an account or sign in
3. Go to API Keys section
4. Generate a new API key (starts with `dak_`)
### Basic Usage
```python
import cost_katana as ck
# Configure once with your API key
ck.configure(api_key='dak_your_key_here')
# Use any AI model with the same simple interface
model = ck.GenerativeModel('nova-lite')
response = model.generate_content("Explain quantum computing in simple terms")
print(response.text)
print(f"Cost: ${response.usage_metadata.cost:.4f}")
```
### Chat Sessions
```python
import cost_katana as ck
ck.configure(api_key='dak_your_key_here')
# Start a conversation
model = ck.GenerativeModel('claude-3-sonnet')
chat = model.start_chat()
# Send messages back and forth
response1 = chat.send_message("Hello! What's your name?")
print("AI:", response1.text)
response2 = chat.send_message("Can you help me write a Python function?")
print("AI:", response2.text)
# Get total conversation cost
total_cost = sum(msg.get('metadata', {}).get('cost', 0) for msg in chat.history)
print(f"Total conversation cost: ${total_cost:.4f}")
```
## 🎯 Why Cost Katana?
### Simple Interface, Powerful Backend
- **One API for all providers**: Use Google Gemini, Anthropic Claude, OpenAI GPT, and AWS Bedrock models through one interface
- **No API key juggling**: Store your provider keys securely in Cost Katana, use one key in your code
- **Automatic failover**: If one provider is down, automatically switch to alternatives
- **Cost optimization**: Intelligent routing to minimize costs while maintaining quality
### Enterprise Features
- **Cost tracking**: Real-time cost monitoring and budgets
- **Usage analytics**: Detailed insights into model performance and usage patterns
- **Team management**: Share projects and manage API usage across teams
- **Approval workflows**: Set spending limits with approval requirements
## 📚 Configuration Options
### Using Configuration File (Recommended)
Create `config.json`:
```json
{
  "api_key": "dak_your_key_here",
  "default_model": "gemini-2.0-flash",
  "default_temperature": 0.7,
  "cost_limit_per_day": 50.0,
  "enable_optimization": true,
  "enable_failover": true,
  "model_mappings": {
    "gemini": "gemini-2.0-flash-exp",
    "claude": "anthropic.claude-3-sonnet-20240229-v1:0",
    "gpt4": "gpt-4-turbo-preview"
  },
  "providers": {
    "google": {
      "priority": 1,
      "models": ["gemini-2.0-flash", "gemini-pro"]
    },
    "anthropic": {
      "priority": 2,
      "models": ["claude-3-sonnet", "claude-3-haiku"]
    }
  }
}
```
```python
import cost_katana as ck
# Configure from file
ck.configure(config_file='config.json')
# Now use any model
model = ck.GenerativeModel('gemini') # Uses mapping from config
```
### Environment Variables
```bash
export COST_KATANA_API_KEY=dak_your_key_here
export COST_KATANA_DEFAULT_MODEL=claude-3-sonnet
```
```python
import cost_katana as ck
# Automatically loads from environment
ck.configure()
model = ck.GenerativeModel() # Uses default model from env
```
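Behind `ck.configure()`, environment-based settings follow the usual precedence order: an explicit argument wins over the environment variable, which wins over a default. A minimal, self-contained sketch of that lookup (the helper name is hypothetical, not part of the SDK):

```python
import os

def resolve_setting(explicit, env_var, default=None):
    """Hypothetical helper: return the explicit value if given,
    else the environment variable, else the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

# Explicit values win over the environment
os.environ["COST_KATANA_DEFAULT_MODEL"] = "claude-3-sonnet"
print(resolve_setting(None, "COST_KATANA_DEFAULT_MODEL"))         # claude-3-sonnet
print(resolve_setting("nova-lite", "COST_KATANA_DEFAULT_MODEL"))  # nova-lite
```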
## 🤖 Supported Models
### Amazon Nova Models (Primary Recommendation)
- `nova-micro` - Ultra-fast and cost-effective for simple tasks
- `nova-lite` - Balanced performance and cost for general use
- `nova-pro` - High-performance model for complex tasks
### Anthropic Claude Models
- `claude-3-haiku` - Fast and cost-effective responses
- `claude-3-sonnet` - Balanced performance for complex tasks
- `claude-3-opus` - Most capable Claude model for advanced reasoning
- `claude-3.5-haiku` - Latest fast model with enhanced capabilities
- `claude-3.5-sonnet` - Advanced reasoning and analysis
### Meta Llama Models
- `llama-3.1-8b` - Good balance of performance and efficiency
- `llama-3.1-70b` - Large model for complex reasoning
- `llama-3.1-405b` - Most capable Llama model
- `llama-3.2-1b` - Compact and efficient
- `llama-3.2-3b` - Efficient for general tasks
### Mistral Models
- `mistral-7b` - Efficient open-source model
- `mixtral-8x7b` - High-quality mixture of experts
- `mistral-large` - Advanced reasoning capabilities
### Cohere Models
- `command` - General purpose text generation
- `command-light` - Lighter, faster version
- `command-r` - Retrieval-augmented generation
- `command-r-plus` - Enhanced RAG with better reasoning
### Friendly Aliases
- `fast` → Nova Micro (optimized for speed)
- `balanced` → Nova Lite (balanced cost/performance)
- `powerful` → Nova Pro (maximum capabilities)
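Aliases (like the `model_mappings` in `config.json`) are plain name-to-model lookups. The sketch below illustrates the idea using the alias table above; the resolution function itself is illustrative, not the SDK's implementation:

```python
# Alias table as documented above; resolution is a simple dict lookup
FRIENDLY_ALIASES = {
    "fast": "nova-micro",
    "balanced": "nova-lite",
    "powerful": "nova-pro",
}

def resolve_model(name: str) -> str:
    """Map a friendly alias to its concrete model id, or pass the
    name through unchanged if it is not an alias."""
    return FRIENDLY_ALIASES.get(name, name)

print(resolve_model("fast"))             # nova-micro
print(resolve_model("claude-3-sonnet"))  # claude-3-sonnet
```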
## ⚙️ Advanced Usage
### Generation Configuration
```python
from cost_katana import GenerativeModel, GenerationConfig
config = GenerationConfig(
    temperature=0.3,
    max_output_tokens=1000,
    top_p=0.9
)
model = GenerativeModel('claude-3-sonnet', generation_config=config)
response = model.generate_content("Write a haiku about programming")
```
### Multi-Agent Processing
```python
# Enable multi-agent processing for complex queries
model = GenerativeModel('gemini-2.0-flash')
response = model.generate_content(
    "Analyze the economic impact of AI on job markets",
    use_multi_agent=True,
    chat_mode='balanced'
)
# See which agents were involved
print("Agent path:", response.usage_metadata.agent_path)
print("Optimizations applied:", response.usage_metadata.optimizations_applied)
```
### Cost Optimization Modes
```python
# Different optimization strategies
fast_response = model.generate_content(
    "Quick summary of today's news",
    chat_mode='fastest'  # Prioritize speed
)
cheap_response = model.generate_content(
    "Detailed analysis of market trends",
    chat_mode='cheapest'  # Prioritize cost
)
balanced_response = model.generate_content(
    "Help me debug this Python code",
    chat_mode='balanced'  # Balance speed and cost
)
```
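Conceptually, a chat mode is a routing preference. As a rough illustration only (the real backend weighs live pricing, latency, and quality data), mode-based routing could look like:

```python
# Hypothetical candidate table: (model, relative cost, relative speed).
# In this toy table nova-micro is both the cheapest and the fastest.
CANDIDATES = [
    ("nova-micro", 1, 3),
    ("nova-lite", 2, 2),
    ("nova-pro", 3, 1),
]

def route(chat_mode: str) -> str:
    """Pick a model by mode: 'cheapest' minimizes cost, 'fastest'
    maximizes speed, 'balanced' picks the smallest cost/speed gap."""
    if chat_mode == "cheapest":
        return min(CANDIDATES, key=lambda c: c[1])[0]
    if chat_mode == "fastest":
        return max(CANDIDATES, key=lambda c: c[2])[0]
    return min(CANDIDATES, key=lambda c: abs(c[1] - c[2]))[0]

print(route("balanced"))  # nova-lite
```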
## 🖥️ Command Line Interface
Cost Katana includes a CLI for easy interaction:
```bash
# Initialize configuration
cost-katana init
# Test your setup
cost-katana test
# List available models
cost-katana models
# Start interactive chat
cost-katana chat --model gemini-2.0-flash
# Use specific config file
cost-katana chat --config my-config.json
```
## 📊 Usage Analytics
Track your AI usage and costs:
```python
import cost_katana as ck
ck.configure(config_file='config.json')
model = ck.GenerativeModel('claude-3-sonnet')
response = model.generate_content("Explain machine learning")
# Detailed usage information
metadata = response.usage_metadata
print(f"Model used: {metadata.model}")
print(f"Cost: ${metadata.cost:.4f}")
print(f"Latency: {metadata.latency:.2f}s")
print(f"Tokens: {metadata.total_tokens}")
print(f"Cache hit: {metadata.cache_hit}")
print(f"Risk level: {metadata.risk_level}")
```
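The per-response cost field makes it straightforward to keep a running total against the `cost_limit_per_day` value from `config.json`. A small tracker sketch (illustrative, not part of the SDK):

```python
class DailyBudget:
    """Accumulate per-response costs and flag when the daily limit
    (cost_limit_per_day in config.json) is exceeded. Illustrative only."""
    def __init__(self, limit: float):
        self.limit = limit
        self.spent = 0.0

    def record(self, cost: float) -> bool:
        """Add a cost; return True while still within the limit."""
        self.spent += cost
        return self.spent <= self.limit

budget = DailyBudget(limit=50.0)
budget.record(0.0123)  # e.g. response.usage_metadata.cost
print(f"Spent so far: ${budget.spent:.4f}")
```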
## 🔧 Error Handling
```python
from cost_katana import GenerativeModel
from cost_katana.exceptions import (
    CostLimitExceededError,
    ModelNotAvailableError,
    RateLimitError
)

try:
    model = GenerativeModel('expensive-model')
    response = model.generate_content("Complex analysis task")
except CostLimitExceededError:
    print("Cost limit reached! Check your budget settings.")
except ModelNotAvailableError:
    print("Model is currently unavailable. Trying fallback...")
    model = GenerativeModel('backup-model')
    response = model.generate_content("Complex analysis task")
except RateLimitError:
    print("Rate limit hit. Please wait before retrying.")
```
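For `RateLimitError`, a retry with exponential backoff is the usual remedy. Here is a generic helper sketch (the exception class comes from `cost_katana.exceptions`; the helper itself is illustrative, not part of the SDK):

```python
import time

def with_backoff(call, retries=3, base_delay=1.0, retriable=(Exception,)):
    """Run call(); on a retriable error, sleep base_delay * 2**attempt
    and try again, re-raising after the final attempt."""
    for attempt in range(retries):
        try:
            return call()
        except retriable:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Usage (sketch):
# response = with_backoff(
#     lambda: model.generate_content("Complex analysis task"),
#     retriable=(RateLimitError,),
# )
```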
## 🌟 Comparison with Direct Provider SDKs
### Before (Google Gemini)
```python
import google.generativeai as genai
# Need to manage API key
genai.configure(api_key="your-google-api-key")
# Provider-specific code
model = genai.GenerativeModel('gemini-2.0-flash')
response = model.generate_content("Hello")
# No cost tracking, no failover, provider lock-in
```
### After (Cost Katana)
```python
import cost_katana as ck
# One API key for all providers
ck.configure(api_key='dak_your_key_here')
# Same interface, any provider
model = ck.GenerativeModel('nova-lite')
response = model.generate_content("Hello")
# Built-in cost tracking, failover, optimization
print(f"Cost: ${response.usage_metadata.cost:.4f}")
```
## 🏢 Enterprise Features
- **Team Management**: Share configurations across team members
- **Cost Centers**: Track usage by project or department
- **Approval Workflows**: Require approval for high-cost operations
- **Analytics Dashboard**: Web interface for usage insights
- **Custom Models**: Support for fine-tuned and custom models
- **SLA Monitoring**: Track model availability and performance
## 🔒 Security & Privacy
- **Secure Key Storage**: API keys encrypted at rest
- **No Data Retention**: Your prompts and responses are not stored
- **Audit Logs**: Complete audit trail of API usage
- **GDPR Compliant**: Full compliance with data protection regulations
## 📖 API Reference
### GenerativeModel
```python
class GenerativeModel:
    def __init__(self, model_name: str, generation_config: GenerationConfig = None)
    def generate_content(self, prompt: str, **kwargs) -> GenerateContentResponse
    def start_chat(self, history: List = None) -> ChatSession
    def count_tokens(self, prompt: str) -> Dict[str, int]
```
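Since `count_tokens` returns a token-count dict, you can estimate a prompt's cost before sending it. Note the `'total_tokens'` key and the per-1K-token price below are assumptions for illustration, not documented values:

```python
def estimate_cost(token_counts: dict, price_per_1k_tokens: float) -> float:
    """Rough pre-flight cost estimate from a count_tokens()-style dict;
    assumes a 'total_tokens' key (an assumption -- check your SDK version)."""
    return token_counts.get("total_tokens", 0) / 1000 * price_per_1k_tokens

# counts = model.count_tokens("Explain quantum computing")  # sketch
counts = {"total_tokens": 1500}  # stand-in value for illustration
print(f"~${estimate_cost(counts, 0.002):.4f}")
```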
### ChatSession
```python
class ChatSession:
    def send_message(self, message: str, **kwargs) -> GenerateContentResponse
    def get_history(self) -> List[Dict]
    def clear_history(self) -> None
    def delete_conversation(self) -> None
```
### GenerateContentResponse
```python
class GenerateContentResponse:
    text: str                      # Generated text
    usage_metadata: UsageMetadata  # Cost, tokens, latency info
    thinking: Dict                 # AI reasoning (if available)
```
## 🤝 Support
- **Documentation**: [docs.costkatana.com](https://docs.costkatana.com)
- **Discord Community**: [discord.gg/costkatana](https://discord.gg/costkatana)
- **Email Support**: support@costkatana.com
- **GitHub Issues**: [github.com/cost-katana/python-sdk](https://github.com/cost-katana/python-sdk)
## 📄 License
MIT License - see [LICENSE](LICENSE) for details.
---
**Ready to optimize your AI costs?** Get started at [costkatana.com](https://costkatana.com) 🚀