# Context Compressor
[Python](https://python.org)
[License: MIT](https://opensource.org/licenses/MIT)
[Code style: black](https://github.com/psf/black)
**AI-powered text compression that reduces token usage and costs for RAG systems and API calls while preserving semantic meaning.**
## 🚀 Features
- **Intelligent Compression**: Multiple compression strategies including extractive, abstractive, semantic, and hybrid approaches
- **Quality Evaluation**: Comprehensive quality metrics including ROUGE scores, semantic similarity, and entity preservation
- **Query-Aware**: Context-aware compression that prioritizes relevant content based on user queries
- **Batch Processing**: Efficient parallel processing of multiple texts
- **Caching System**: In-memory caching with TTL for improved performance
- **Framework Integrations**: Easy integration with LangChain, LlamaIndex, and OpenAI
- **REST API**: FastAPI-based microservice for easy deployment
- **Extensible**: Plugin system for custom compression strategies
## 📦 Installation
### Basic Installation
```bash
pip install context-compressor
```
### With ML Dependencies (for advanced strategies)
```bash
pip install "context-compressor[ml]"
```
### With API Dependencies (for REST API)
```bash
pip install "context-compressor[api]"
```
### Full Installation (all features)
```bash
pip install "context-compressor[full]"
```
### Development Installation
```bash
git clone https://github.com/context-compressor/context-compressor.git
cd context-compressor
pip install -e ".[dev]"
```
## 🏁 Quick Start
### Basic Usage
```python
from context_compressor import ContextCompressor
# Initialize the compressor
compressor = ContextCompressor()
# Compress text
text = """
Artificial Intelligence (AI) is a broad field of computer science focused on
creating systems that can perform tasks that typically require human intelligence.
These tasks include learning, reasoning, problem-solving, perception, and language
understanding. AI has applications in various domains including healthcare, finance,
transportation, and entertainment. Machine learning, a subset of AI, enables
computers to learn and improve from experience without being explicitly programmed.
"""
result = compressor.compress(text, target_ratio=0.5)
print("Original text:")
print(text)
print(f"\nCompressed text ({result.actual_ratio:.1%} of original):")
print(result.compressed_text)
print(f"\nTokens saved: {result.tokens_saved}")
print(f"Quality score: {result.quality_metrics.overall_score:.2f}")
```
### Query-Aware Compression
```python
# Compress with focus on specific topic
query = "machine learning applications"
result = compressor.compress(
    text=text,
    target_ratio=0.3,
    query=query
)
print(f"Query-focused compression: {result.compressed_text}")
```
### Batch Processing
```python
texts = [
    "First document about AI and machine learning...",
    "Second document about natural language processing...",
    "Third document about computer vision..."
]

batch_result = compressor.compress_batch(
    texts=texts,
    target_ratio=0.4,
    parallel=True
)
print(f"Processed {len(batch_result.results)} texts")
print(f"Average compression ratio: {batch_result.average_compression_ratio:.1%}")
print(f"Total tokens saved: {batch_result.total_tokens_saved}")
```
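For intuition, the parallel fan-out above can be sketched with a thread pool. This is an illustrative, self-contained sketch, not the library's internals; `toy_compress` and `compress_batch_sketch` are hypothetical names, and the toy "compression" just keeps a fraction of the words.

```python
from concurrent.futures import ThreadPoolExecutor


def toy_compress(text: str, target_ratio: float = 0.4) -> str:
    """Illustrative stand-in for a real strategy: keep the first
    target_ratio fraction of words."""
    words = text.split()
    keep = max(1, int(len(words) * target_ratio))
    return " ".join(words[:keep])


def compress_batch_sketch(texts, target_ratio=0.4, max_workers=4):
    # Fan the texts out across a thread pool; map() preserves input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda t: toy_compress(t, target_ratio), texts))


results = compress_batch_sketch(
    ["one two three four five", "alpha beta gamma delta"],
    target_ratio=0.5,
)
```

Threads suffice here because a real compression call is dominated by I/O or releases the GIL inside native code; for pure-Python CPU-bound strategies a process pool would be the better fit.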
## 🔧 Configuration
### Strategy Selection
```python
from context_compressor import ContextCompressor
from context_compressor.strategies import ExtractiveStrategy
# Use specific strategy
extractive_strategy = ExtractiveStrategy(
    scoring_method="tfidf",
    min_sentence_length=20,
    position_bias=0.2
)
compressor = ContextCompressor(strategies=[extractive_strategy])
# Or let the system auto-select
compressor = ContextCompressor(default_strategy="auto")
```
### Quality Evaluation Settings
```python
compressor = ContextCompressor(
    enable_quality_evaluation=True,
    enable_caching=True,
    cache_ttl=3600  # 1 hour
)
result = compressor.compress(text, target_ratio=0.5)
# Access detailed quality metrics
print(f"ROUGE-1: {result.quality_metrics.rouge_1:.3f}")
print(f"ROUGE-2: {result.quality_metrics.rouge_2:.3f}")
print(f"ROUGE-L: {result.quality_metrics.rouge_l:.3f}")
print(f"Semantic similarity: {result.quality_metrics.semantic_similarity:.3f}")
print(f"Entity preservation: {result.quality_metrics.entity_preservation_rate:.3f}")
```
## 🎯 Compression Strategies
### 1. Extractive Strategy (Default)
Selects important sentences based on TF-IDF, position, and query relevance:
```python
from context_compressor.strategies import ExtractiveStrategy
strategy = ExtractiveStrategy(
    scoring_method="combined",  # "tfidf", "frequency", "position", "combined"
    min_sentence_length=10,
    position_bias=0.2,
    query_weight=0.3
)
```
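To make the idea concrete, here is a minimal pure-Python sketch of frequency-based sentence extraction with a position bias. It illustrates the scoring concept only; it is not the library's implementation, and `extract_sentences` is a hypothetical name.

```python
import re
from collections import Counter


def extract_sentences(text: str, target_ratio: float = 0.5,
                      position_bias: float = 0.2) -> str:
    """Score sentences by average word frequency plus a bonus for
    appearing early, then keep the top fraction in original order."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    words = re.findall(r"\w+", text.lower())
    freq = Counter(words)
    scores = []
    for i, sent in enumerate(sentences):
        sent_words = re.findall(r"\w+", sent.lower())
        tf = sum(freq[w] for w in sent_words) / max(len(sent_words), 1)
        position = 1.0 - i / max(len(sentences), 1)  # earlier = higher
        scores.append((1 - position_bias) * tf + position_bias * position)
    keep = max(1, round(len(sentences) * target_ratio))
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:keep]
    return " ".join(sentences[i] for i in sorted(top))
```

A real extractive strategy would swap the raw frequency term for TF-IDF and add a query-relevance term weighted by `query_weight`, but the select-and-reorder skeleton stays the same.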
### 2. Abstractive Strategy (Requires ML dependencies)
Uses transformer models for summarization:
```python
from context_compressor.strategies import AbstractiveStrategy
strategy = AbstractiveStrategy(
    model_name="facebook/bart-large-cnn",
    max_length=150,
    min_length=50
)
```
### 3. Semantic Strategy (Requires ML dependencies)
Groups similar content using embeddings:
```python
from context_compressor.strategies import SemanticStrategy
strategy = SemanticStrategy(
    embedding_model="all-MiniLM-L6-v2",
    clustering_method="kmeans",
    n_clusters="auto"
)
```
### 4. Hybrid Strategy
Combines multiple strategies for optimal results:
```python
from context_compressor.strategies import HybridStrategy
strategy = HybridStrategy(
    primary_strategy="extractive",
    secondary_strategy="semantic",
    combination_method="weighted"
)
```
## 🔌 Integrations
### LangChain Integration
```python
from context_compressor.integrations.langchain import ContextCompressorTransformer
# Use as a document transformer
transformer = ContextCompressorTransformer(
    compressor=compressor,
    target_ratio=0.6
)
# Apply to document chain
compressed_docs = transformer.transform_documents(documents)
```
### OpenAI Integration
```python
from context_compressor.integrations.openai import compress_for_openai
# Compress text before sending to OpenAI API
compressed_prompt = compress_for_openai(
    text=long_context,
    target_ratio=0.4,
    model="gpt-4"  # Automatically uses appropriate tokenizer
)
```
## 🌐 REST API
Start the API server:
```bash
uvicorn context_compressor.api.main:app --reload
```
### API Endpoints
#### Compress Text
```bash
curl -X POST "http://localhost:8000/compress" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Your long text here...",
    "target_ratio": 0.5,
    "strategy": "extractive",
    "query": "optional query"
  }'
```
#### Batch Compression
```bash
curl -X POST "http://localhost:8000/compress/batch" \
  -H "Content-Type: application/json" \
  -d '{
    "texts": ["Text 1...", "Text 2...", "Text 3..."],
    "target_ratio": 0.4,
    "parallel": true
  }'
```
#### List Available Strategies
```bash
curl "http://localhost:8000/strategies"
```
### API Documentation
Visit `http://localhost:8000/docs` for interactive API documentation.
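The same `/compress` endpoint can be called from Python using only the standard library. This is a hedged sketch: the endpoint path and JSON fields come from the docs above, while the helper names (`build_compress_request`, `compress_via_api`) and the response being JSON are assumptions.

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # local dev server started with uvicorn


def build_compress_request(text, target_ratio=0.5, strategy="extractive", query=None):
    """Assemble the JSON body for POST /compress (fields per the docs above)."""
    payload = {"text": text, "target_ratio": target_ratio, "strategy": strategy}
    if query is not None:
        payload["query"] = query
    return json.dumps(payload).encode("utf-8")


def compress_via_api(text, **kwargs):
    # Send the request and decode the JSON response.
    req = urllib.request.Request(
        f"{API_URL}/compress",
        data=build_compress_request(text, **kwargs),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    print(compress_via_api("Your long text here...", target_ratio=0.5))
```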
## 📊 Quality Metrics
The system provides comprehensive quality evaluation:
- **Semantic Similarity**: Measures content preservation using word embeddings
- **ROUGE Scores**: Standard summarization metrics (ROUGE-1, ROUGE-2, ROUGE-L)
- **Entity Preservation**: Tracks retention of named entities, numbers, dates
- **Readability**: Flesch Reading Ease score for text readability
- **Overall Score**: Weighted combination of all metrics
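An overall score of this kind is typically a weighted average of the per-metric scores. The sketch below illustrates the idea; the weights and the `overall_score` helper are illustrative assumptions, not the library's defaults.

```python
def overall_score(metrics, weights=None):
    """Weighted combination of per-metric scores in [0, 1].
    These weights are illustrative, not the library's defaults."""
    weights = weights or {
        "semantic_similarity": 0.4,
        "rouge_1": 0.2,
        "entity_preservation": 0.25,
        "readability": 0.15,
    }
    total = sum(weights.values())
    # Normalize by the weight sum so missing metrics don't inflate the score.
    return sum(weights[k] * metrics.get(k, 0.0) for k in weights) / total


score = overall_score({
    "semantic_similarity": 0.9,
    "rouge_1": 0.8,
    "entity_preservation": 1.0,
    "readability": 0.7,
})
```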
## 🎛️ Advanced Configuration
### Custom Strategy Development
```python
from context_compressor.strategies.base import CompressionStrategy
from context_compressor.core.models import StrategyMetadata
class CustomStrategy(CompressionStrategy):
    def _create_metadata(self) -> StrategyMetadata:
        return StrategyMetadata(
            name="custom",
            description="Custom compression strategy",
            version="1.0.0",
            author="Your Name"
        )

    def _compress_text(self, text: str, target_ratio: float, **kwargs) -> str:
        # Implement your compression logic; as a trivial placeholder,
        # keep the first target_ratio fraction of characters.
        return text[: int(len(text) * target_ratio)]

# Register and use
compressor.register_strategy(CustomStrategy())
```
### Cache Configuration
```python
from context_compressor.utils.cache import CacheManager
# Custom cache manager
cache_manager = CacheManager(
    ttl=7200,  # 2 hours
    max_size=2000,
    cleanup_interval=600  # 10 minutes
)
compressor = ContextCompressor(cache_manager=cache_manager)
```
## 📈 Performance Optimization
### Batch Processing Tips
```python
# For large batches, adjust worker count
batch_result = compressor.compress_batch(
    texts=large_text_list,
    target_ratio=0.5,
    parallel=True,
    max_workers=8  # Adjust based on your system
)
)
```
### Memory Management
```python
# For memory-constrained environments
compressor = ContextCompressor(
    enable_caching=False,  # Disable caching
    enable_quality_evaluation=False  # Skip quality evaluation
)
)
```
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=context_compressor
# Run only unit tests
pytest -m "not integration"
# Run specific test file
pytest tests/test_compressor.py
```
## 📚 Examples
Check out the `examples/` directory for comprehensive usage examples:
- `examples/basic_usage.py` - Basic compression examples
- `examples/batch_processing.py` - Batch processing examples
- `examples/quality_evaluation.py` - Quality metrics examples
- `examples/custom_strategy.py` - Custom strategy development
- `examples/integration_examples.py` - Framework integration examples
- `examples/api_client.py` - REST API client examples
## 🤝 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
### Development Setup
```bash
git clone https://github.com/context-compressor/context-compressor.git
cd context-compressor
pip install -e ".[dev]"
pre-commit install
```
### Running Tests
```bash
pytest
black .
isort .
flake8 .
mypy src/
```
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🆘 Support
- **Documentation**: [https://context-compressor.readthedocs.io](https://context-compressor.readthedocs.io)
- **Issues**: [GitHub Issues](https://github.com/context-compressor/context-compressor/issues)
- **Discussions**: [GitHub Discussions](https://github.com/context-compressor/context-compressor/discussions)
## 🗺️ Roadmap
- [ ] Additional compression strategies (neural, attention-based)
- [ ] Multi-language support
- [ ] Integration with more LLM providers
- [ ] GUI interface
- [ ] Cloud deployment templates
- [ ] Performance benchmarking suite
## 📖 Citation
If you use Context Compressor in your research, please cite:
```bibtex
@software{context_compressor,
  title={Context Compressor: AI-Powered Text Compression for RAG Systems},
  author={Context Compressor Team},
  url={https://github.com/context-compressor/context-compressor},
  year={2024}
}
```
---
**Made with ❤️ for the AI community**