<div align="center">
<img src="https://evolvis.ai/wp-content/uploads/2025/08/evie-solutions-03.png" alt="Evolvis AI - Evie Solutions Logo" width="400">
</div>
# Evolvishub Text Classification LLM
**Enterprise-grade text classification library with 11+ LLM providers, streaming, monitoring, and advanced workflows**
## Overview
Evolvishub Text Classification LLM is a comprehensive, enterprise-ready Python library designed for production-scale text classification tasks. Built by Evolvis AI, this proprietary solution provides seamless integration with 11+ leading LLM providers, advanced monitoring capabilities, and professional-grade architecture suitable for mission-critical applications.
## Key Features
### Core Capabilities
- **11+ LLM Providers**: OpenAI, Anthropic, Google, Cohere, Mistral, Replicate, HuggingFace, Azure OpenAI, AWS Bedrock, Ollama, and Custom providers
- **Streaming Support**: Real-time text generation with WebSocket support
- **Async/Await**: Full asynchronous support for high-performance applications
- **Batch Processing**: Efficient processing of large datasets with configurable concurrency
- **Smart Caching**: Semantic caching with Redis and in-memory options
- **Comprehensive Monitoring**: Built-in health checks, metrics collection, and observability
- **Enterprise Security**: Authentication, rate limiting, and audit logging
- **Workflow Templates**: Pre-built workflows for common classification scenarios
### Advanced Features
- **Provider Fallback**: Automatic failover between providers for reliability
- **Cost Optimization**: Intelligent routing based on cost and performance metrics
- **Fine-tuning Support**: Custom model training and deployment capabilities
- **Multimodal Support**: Text, image, and document processing
- **LangGraph Integration**: Complex workflow orchestration
- **Real-time Streaming**: WebSocket-based real-time classification
## Installation
### Basic Installation
```bash
pip install evolvishub-text-classification-llm
```
### Provider-Specific Installation
```bash
# Install with specific providers
pip install evolvishub-text-classification-llm[openai,anthropic]

# Install with cloud providers
pip install evolvishub-text-classification-llm[azure_openai,aws_bedrock]

# Install with local inference
pip install evolvishub-text-classification-llm[huggingface,ollama]

# Full installation (all providers)
pip install evolvishub-text-classification-llm[all]
```
### Development Installation
```bash
pip install evolvishub-text-classification-llm[dev]
```
## Quick Start
### Basic Classification
```python
import asyncio
from evolvishub_text_classification_llm import create_engine
from evolvishub_text_classification_llm.core.schemas import ProviderConfig, ProviderType, WorkflowConfig
# Configure your workflow
config = WorkflowConfig(
    name="sentiment_analysis",
    description="Analyze sentiment of customer reviews",
    providers=[
        ProviderConfig(
            provider_type=ProviderType.OPENAI,
            api_key="your-openai-api-key",
            model="gpt-4",
            max_tokens=150,
            temperature=0.1
        )
    ]
)

async def main():
    engine = create_engine(config)

    result = await engine.classify(
        text="This product is absolutely amazing! I love it.",
        categories=["positive", "negative", "neutral"]
    )

    print(f"Classification: {result.category}")
    print(f"Confidence: {result.confidence}")

asyncio.run(main())
```
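Classification calls can fail transiently (network errors, provider rate limits). A minimal guard, sketched with a broad `except` because the library's specific exception types aren't shown here:

```python
async def classify_safely(engine, text, categories):
    # Broad catch for illustration only; prefer the library's documented
    # exception types if available.
    try:
        return await engine.classify(text=text, categories=categories)
    except Exception as exc:
        print(f"Classification failed: {exc}")
        return None
```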
## Provider Configuration
### OpenAI GPT Models
```python
from evolvishub_text_classification_llm.core.schemas import ProviderConfig, ProviderType
openai_config = ProviderConfig(
    provider_type=ProviderType.OPENAI,
    api_key="your-openai-api-key",
    model="gpt-4",
    max_tokens=150,
    temperature=0.1,
    timeout_seconds=30
)
```
### Anthropic Claude
```python
anthropic_config = ProviderConfig(
    provider_type=ProviderType.ANTHROPIC,
    api_key="your-anthropic-api-key",
    model="claude-3-sonnet-20240229",
    max_tokens=150,
    temperature=0.1
)
```
### Google Gemini
```python
google_config = ProviderConfig(
    provider_type=ProviderType.GOOGLE,
    api_key="your-google-api-key",
    model="gemini-pro",
    max_tokens=150,
    temperature=0.1
)
```
### Cohere
```python
cohere_config = ProviderConfig(
    provider_type=ProviderType.COHERE,
    api_key="your-cohere-api-key",
    model="command",
    max_tokens=150,
    temperature=0.1
)
```
### Azure OpenAI
```python
azure_config = ProviderConfig(
    provider_type=ProviderType.AZURE_OPENAI,
    api_key="your-azure-api-key",
    azure_endpoint="https://your-resource.openai.azure.com/",
    api_version="2024-02-15-preview",
    deployment_name="gpt-4",
    max_tokens=150
)
```
### AWS Bedrock
```python
bedrock_config = ProviderConfig(
    provider_type=ProviderType.AWS_BEDROCK,
    aws_access_key_id="your-access-key",
    aws_secret_access_key="your-secret-key",
    aws_region="us-east-1",
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    max_tokens=150
)
```
### HuggingFace Transformers
```python
huggingface_config = ProviderConfig(
    provider_type=ProviderType.HUGGINGFACE,
    model="microsoft/DialoGPT-medium",
    device="cuda",  # or "cpu"
    max_tokens=150,
    temperature=0.1,
    load_in_8bit=True  # For memory optimization
)
```
### Ollama (Local Inference)
```python
ollama_config = ProviderConfig(
    provider_type=ProviderType.OLLAMA,
    base_url="http://localhost:11434",
    model="llama2",
    max_tokens=150,
    temperature=0.1
)
```
### Mistral AI
```python
mistral_config = ProviderConfig(
    provider_type=ProviderType.MISTRAL,
    api_key="your-mistral-api-key",
    model="mistral-large-latest",
    max_tokens=150,
    temperature=0.1
)
```
### Replicate
```python
replicate_config = ProviderConfig(
    provider_type=ProviderType.REPLICATE,
    api_key="your-replicate-api-key",
    model="meta/llama-2-70b-chat",
    max_tokens=150,
    temperature=0.1
)
```
### Custom Provider
```python
custom_config = ProviderConfig(
    provider_type=ProviderType.CUSTOM,
    api_key="your-api-key",
    base_url="https://your-api-endpoint.com",
    model="your-model",
    request_format="openai",  # or "anthropic", "custom"
    response_format="openai",
    max_tokens=150
)
```
## Batch Processing
```python
from evolvishub_text_classification_llm import create_batch_processor
async def batch_example():
    processor = create_batch_processor(config)

    texts = [
        "This is great!",
        "I hate this product.",
        "It's okay, nothing special."
    ]

    results = await processor.process_batch(
        texts=texts,
        categories=["positive", "negative", "neutral"],
        batch_size=10,
        max_workers=4
    )

    for text, result in zip(texts, results):
        print(f"'{text}' -> {result.category} ({result.confidence:.2f})")
```
## Monitoring and Health Checks
```python
from evolvishub_text_classification_llm import HealthChecker, MetricsCollector
async def monitoring_example():
    # Health monitoring (openai_provider is a provider instance created elsewhere)
    health_checker = HealthChecker()
    health_checker.register_provider("openai", openai_provider)

    health_status = await health_checker.perform_health_check()
    print(f"System health: {health_status.overall_status}")

    # Metrics collection
    metrics = MetricsCollector()
    metrics.record_counter("requests_total", 1)
    metrics.record_histogram("response_time_ms", 150.5)

    # Export metrics
    prometheus_metrics = metrics.export_metrics("prometheus")
    print(prometheus_metrics)
```
## Streaming
```python
async def streaming_example():
    engine = create_engine(config)

    async for chunk in engine.classify_stream(
        text="Analyze this long document...",
        categories=["technical", "business", "personal"]
    ):
        print(f"Partial result: {chunk}")
```
## Configuration
### Environment Variables
```bash
# Provider API Keys
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
export COHERE_API_KEY="your-cohere-key"

# Optional: Redis for caching
export REDIS_URL="redis://localhost:6379"

# Optional: Monitoring
export PROMETHEUS_PORT="8000"
```
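Rather than hard-coding keys, provider configs can read them from these variables. A minimal sketch using the standard library's `os.environ` together with the `ProviderConfig` fields shown above:

```python
import os

from evolvishub_text_classification_llm.core.schemas import ProviderConfig, ProviderType

# os.environ[...] raises KeyError if OPENAI_API_KEY is unset, failing fast at startup
openai_config = ProviderConfig(
    provider_type=ProviderType.OPENAI,
    api_key=os.environ["OPENAI_API_KEY"],
    model="gpt-4",
    max_tokens=150
)
```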
### Advanced Configuration
```python
from evolvishub_text_classification_llm.core.config import LibraryConfig
config = LibraryConfig(
    # Caching
    enable_caching=True,
    cache_backend="redis",
    cache_ttl_seconds=3600,

    # Monitoring
    enable_monitoring=True,
    metrics_port=8000,
    health_check_interval=60,

    # Performance
    max_concurrent_requests=100,
    request_timeout_seconds=30,

    # Security
    enable_audit_logging=True,
    rate_limit_requests_per_minute=1000
)
```
## Advanced Features
### Provider Fallback
```python
# Configure multiple providers with fallback
config = WorkflowConfig(
    name="robust_classification",
    providers=[
        ProviderConfig(provider_type=ProviderType.OPENAI, priority=1),
        ProviderConfig(provider_type=ProviderType.ANTHROPIC, priority=2),
        ProviderConfig(provider_type=ProviderType.COHERE, priority=3)
    ],
    fallback_enabled=True
)
```
### Cost Optimization
```python
# Optimize for cost vs performance
config = WorkflowConfig(
    name="cost_optimized",
    optimization_strategy="cost",  # or "performance", "balanced"
    max_cost_per_request=0.01
)
```
### Custom Workflows
```python
from evolvishub_text_classification_llm import WorkflowBuilder
builder = WorkflowBuilder()
workflow = (builder
    .add_preprocessing("clean_text")
    .add_classification("sentiment")
    .add_postprocessing("confidence_threshold", min_confidence=0.8)
    .build())

# Run from within an async function (e.g. via asyncio.run)
result = await workflow.execute("Your text here")
```
## Troubleshooting
### Common Issues
**1. Provider Authentication Errors**
```python
# Verify API keys are set correctly
from evolvishub_text_classification_llm import ProviderFactory
if not ProviderFactory.is_provider_available("openai"):
    print("OpenAI provider not available - check API key")
```
**2. Rate Limiting**
```python
# Configure rate limiting and retries
config = ProviderConfig(
    provider_type=ProviderType.OPENAI,
    rate_limit_requests_per_minute=60,
    max_retries=3,
    retry_delay_seconds=1
)
```
**3. Memory Issues with Large Batches**
```python
# Process in smaller chunks
processor = create_batch_processor(config)
results = await processor.process_batch(
    texts=large_text_list,
    batch_size=10,  # Reduce batch size
    max_workers=2   # Reduce concurrency
)
```
### Performance Optimization
**1. Enable Caching**
```python
# Redis caching for better performance
config.enable_caching = True
config.cache_backend = "redis"
config.cache_ttl_seconds = 3600
```
**2. Use Appropriate Models**
```python
# For simple tasks, use faster models
fast_config = ProviderConfig(
    provider_type=ProviderType.OPENAI,
    model="gpt-3.5-turbo",  # Faster than gpt-4
    max_tokens=50  # Reduce for simple classifications
)
```
**3. Batch Processing**
```python
# Process multiple texts together
results = await processor.process_batch(
    texts=texts,
    batch_size=20,  # A reasonable starting batch size; tune to your workload
    max_workers=4   # Parallel processing
)
```
## API Reference
### Core Classes
- `ClassificationEngine`: Main engine for text classification
- `BatchProcessor`: Batch processing capabilities
- `WorkflowBuilder`: Build custom classification workflows
- `ProviderFactory`: Manage and create LLM providers
- `HealthChecker`: Monitor system health
- `MetricsCollector`: Collect and export metrics
### Provider Types
- `ProviderType.OPENAI`: OpenAI GPT models
- `ProviderType.ANTHROPIC`: Anthropic Claude models
- `ProviderType.GOOGLE`: Google Gemini/PaLM models
- `ProviderType.COHERE`: Cohere Command models
- `ProviderType.MISTRAL`: Mistral AI models
- `ProviderType.REPLICATE`: Replicate hosted models
- `ProviderType.HUGGINGFACE`: HuggingFace Transformers
- `ProviderType.AZURE_OPENAI`: Azure OpenAI Service
- `ProviderType.AWS_BEDROCK`: AWS Bedrock models
- `ProviderType.OLLAMA`: Local Ollama models
- `ProviderType.CUSTOM`: Custom HTTP-based providers
### Convenience Functions
- `create_engine(config)`: Create a classification engine
- `create_batch_processor(config)`: Create a batch processor
- `get_supported_providers()`: List available providers
- `get_features()`: List enabled features
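The discovery helpers are useful for a quick sanity check after installation. A minimal sketch, assuming both functions take no arguments and return printable collections:

```python
from evolvishub_text_classification_llm import get_supported_providers, get_features

# Show what this installation can actually use (installed extras change the result)
print(get_supported_providers())
print(get_features())
```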
## Enterprise Support
For enterprise customers, we offer:
- **Priority Support**: 24/7 technical support
- **Custom Integrations**: Tailored solutions for your infrastructure
- **On-Premise Deployment**: Deploy in your own environment
- **Advanced Security**: SOC2, HIPAA, and GDPR compliance
- **Custom Models**: Fine-tuning and custom model development
- **Professional Services**: Implementation and consulting
Contact us at enterprise@evolvis.ai for more information.
## License
This software is proprietary and owned by Evolvis AI. See the [LICENSE](LICENSE) file for details.
**IMPORTANT**: This is NOT open source software. Usage is subject to the terms and conditions specified in the license agreement.
## Company Information
**Evolvis AI**
Website: https://evolvis.ai
Email: info@evolvis.ai
**Author**
Alban Maxhuni, PhD
Email: a.maxhuni@evolvis.ai
## Support
For technical support, licensing inquiries, or enterprise solutions:
- **Documentation**: https://docs.evolvis.ai/text-classification-llm
- **Enterprise Sales**: m.miralles@evolvis.ai
- **Technical Support**: support@evolvis.ai
- **General Inquiries**: info@evolvis.ai
---
Copyright (c) 2025 Evolvis AI. All rights reserved.