# ranx-k: Korean-optimized ranx IR Evaluation Toolkit 🇰🇷
[PyPI version](https://badge.fury.io/py/ranx-k) | [PyPI project](https://pypi.org/project/ranx-k/) | [License: MIT](https://opensource.org/licenses/MIT)
**[English](README.md) | [한국어](README.ko.md)**
**ranx-k** is a Korean-optimized Information Retrieval (IR) evaluation toolkit that extends the ranx library with Kiwi tokenizer and Korean embeddings. It provides accurate evaluation for RAG (Retrieval-Augmented Generation) systems.
## 🚀 Key Features
- **Korean-optimized**: Accurate tokenization using Kiwi morphological analyzer
- **ranx-based**: Supports proven IR evaluation metrics (Hit@K, NDCG@K, MRR, MAP@K, etc.)
- **LangChain compatible**: Supports LangChain retriever interface standards
- **Multiple evaluation methods**: ROUGE, embedding similarity, semantic similarity-based evaluation
- **Graded relevance support**: Use similarity scores as relevance grades for NDCG calculation
- **Configurable ROUGE types**: Choose between ROUGE-1, ROUGE-2, and ROUGE-L
- **Strict threshold enforcement**: Documents below similarity threshold are correctly treated as retrieval failures
- **Retrieval order preservation**: Accurate evaluation of reranking systems (v0.0.16+)
- **Practical design**: Supports step-by-step evaluation from prototype to production
- **High performance**: 30-80% improvement in Korean evaluation accuracy over existing methods
- **Bilingual output**: English-Korean output support for international accessibility
## 📦 Installation
```bash
pip install ranx-k
```
Or install with development dependencies:
```bash
pip install "ranx-k[dev]"
```
## 🔗 Retriever Compatibility
ranx-k supports the **LangChain retriever interface**:
```python
from typing import List

from langchain.schema import Document

# A retriever must implement an invoke() method
class YourRetriever:
    def invoke(self, query: str) -> List[Document]:
        # Return a list of Document objects (each needs a page_content attribute)
        pass

# LangChain Document usage example
doc = Document(page_content="Text content")
```
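
For quick local testing, any object exposing a compatible `invoke()` works. Below is a minimal sketch of a hypothetical in-memory keyword retriever; the `ToyKeywordRetriever` class and its corpus are illustrative only, not part of ranx-k:

```python
from typing import List

from langchain.schema import Document

class ToyKeywordRetriever:
    """Minimal in-memory retriever for testing the ranx-k interface."""

    def __init__(self, corpus: List[str], k: int = 5):
        self.corpus = corpus
        self.k = k

    def invoke(self, query: str) -> List[Document]:
        # Rank documents by the number of whitespace tokens shared with the query.
        query_tokens = set(query.split())
        scored = sorted(
            self.corpus,
            key=lambda text: len(query_tokens & set(text.split())),
            reverse=True,
        )
        return [Document(page_content=text) for text in scored[: self.k]]

retriever = ToyKeywordRetriever(["문서 내용 예시", "다른 문서"], k=2)
print(retriever.invoke("문서 예시"))
```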
> **Note**: LangChain is distributed under the MIT License. See [documentation](docs/en/quickstart.md#langchain-license) for details.
## 🔧 Quick Start
### Basic Usage
```python
from ranx_k.evaluation import simple_kiwi_rouge_evaluation
# Simple Kiwi ROUGE evaluation
results = simple_kiwi_rouge_evaluation(
    retriever=your_retriever,
    questions=your_questions,
    reference_contexts=your_reference_contexts,
    k=5
)
print(f"ROUGE-1: {results['kiwi_rouge1@5']:.3f}")
print(f"ROUGE-2: {results['kiwi_rouge2@5']:.3f}")
print(f"ROUGE-L: {results['kiwi_rougeL@5']:.3f}")
```
### Enhanced Evaluation (Rouge Score + Kiwi)
```python
from ranx_k.evaluation import rouge_kiwi_enhanced_evaluation
# Proven rouge_score library + Kiwi tokenizer
results = rouge_kiwi_enhanced_evaluation(
    retriever=your_retriever,
    questions=your_questions,
    reference_contexts=your_reference_contexts,
    k=5,
    tokenize_method='morphs',  # 'morphs' or 'nouns'
    use_stopwords=True
)
```
### Semantic Similarity-based ranx Evaluation
```python
from ranx_k.evaluation import evaluate_with_ranx_similarity
# Reference-based evaluation (recommended for accurate recall)
results = evaluate_with_ranx_similarity(
    retriever=your_retriever,
    questions=your_questions,
    reference_contexts=your_reference_contexts,
    k=5,
    method='embedding',
    similarity_threshold=0.6,
    use_graded_relevance=False,         # Binary relevance (default)
    evaluation_mode='reference_based'   # Evaluates against all reference docs
)
print(f"Hit@5: {results['hit_rate@5']:.3f}")
print(f"NDCG@5: {results['ndcg@5']:.3f}")
print(f"MRR: {results['mrr']:.3f}")
print(f"MAP@5: {results['map@5']:.3f}")
```
#### Using Different Embedding Models
```python
# OpenAI embedding model (requires API key)
results = evaluate_with_ranx_similarity(
    retriever=your_retriever,
    questions=your_questions,
    reference_contexts=your_reference_contexts,
    k=5,
    method='openai',
    similarity_threshold=0.7,
    embedding_model="text-embedding-3-small"
)

# Latest BGE-M3 model (excellent for Korean)
results = evaluate_with_ranx_similarity(
    retriever=your_retriever,
    questions=your_questions,
    reference_contexts=your_reference_contexts,
    k=5,
    method='embedding',
    similarity_threshold=0.6,
    embedding_model="BAAI/bge-m3"
)

# Korean-specialized Kiwi ROUGE method with configurable ROUGE types
results = evaluate_with_ranx_similarity(
    retriever=your_retriever,
    questions=your_questions,
    reference_contexts=your_reference_contexts,
    k=5,
    method='kiwi_rouge',
    similarity_threshold=0.3,   # Lower threshold recommended for Kiwi ROUGE
    rouge_type='rougeL',        # Choose 'rouge1', 'rouge2', or 'rougeL'
    tokenize_method='morphs',   # Choose 'morphs' or 'nouns'
    use_stopwords=True          # Configure stopword filtering
)
```
### Comprehensive Evaluation
```python
from ranx_k.evaluation import comprehensive_evaluation_comparison
# Compare all evaluation methods
comparison = comprehensive_evaluation_comparison(
    retriever=your_retriever,
    questions=your_questions,
    reference_contexts=your_reference_contexts,
    k=5
)
```
## 📊 Evaluation Methods
### 1. Kiwi ROUGE Evaluation
- **Advantages**: Fast, easy to interpret
- **Use case**: Prototyping, quick feedback
### 2. Enhanced ROUGE (Rouge Score + Kiwi)
- **Advantages**: Proven library, stability
- **Use case**: Production environment, reliability-critical evaluation
### 3. Semantic Similarity-based ranx
- **Advantages**: Traditional IR metrics, semantic similarity
- **Use case**: Research, benchmarking, detailed analysis
## 🎯 Performance Improvement Examples
```python
# Existing method (English tokenizer)
basic_rouge1 = 0.234
# ranx-k (Kiwi tokenizer)
ranxk_rouge1 = 0.421 # +79.9% improvement!
```
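
The quoted figure is simply the relative gain between the two scores:

```python
basic_rouge1 = 0.234   # English tokenizer baseline
ranxk_rouge1 = 0.421   # Kiwi tokenizer

# (0.421 - 0.234) / 0.234 ≈ 0.799
improvement = (ranxk_rouge1 - basic_rouge1) / basic_rouge1
print(f"+{improvement:.1%} improvement")  # +79.9% improvement
```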
## 📊 Recommended Embedding Models
| Model | Use Case | Threshold | Features |
|-------|----------|-----------|----------|
| `paraphrase-multilingual-MiniLM-L12-v2` | Default | 0.6 | Fast, lightweight |
| `text-embedding-3-small` (OpenAI) | Accuracy | 0.7 | High accuracy, cost-effective |
| `BAAI/bge-m3` | Korean | 0.6 | Latest, excellent multilingual |
| `text-embedding-3-large` (OpenAI) | Premium | 0.8 | Highest performance |
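
Since each model pairs with a different recommended threshold, a small loop can sweep the table above using the API shown earlier. The `model_configs` list below is illustrative, combining each model with its method and threshold from the table:

```python
# (model, method, recommended threshold) pairs from the table above
model_configs = [
    ("paraphrase-multilingual-MiniLM-L12-v2", "embedding", 0.6),
    ("BAAI/bge-m3", "embedding", 0.6),
    ("text-embedding-3-small", "openai", 0.7),  # requires an OpenAI API key
]

for model, method, threshold in model_configs:
    results = evaluate_with_ranx_similarity(
        retriever=your_retriever,
        questions=your_questions,
        reference_contexts=your_reference_contexts,
        k=5,
        method=method,
        embedding_model=model,
        similarity_threshold=threshold,
    )
    print(f"{model}: NDCG@5={results['ndcg@5']:.3f}")
```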
## 📈 Score Interpretation Guide
| Score Range | Assessment | Recommended Action |
|-------------|------------|-------------------|
| 0.7 and above | 🟢 Excellent | Maintain current settings |
| 0.5~0.7 | 🟡 Good | Consider fine-tuning |
| 0.3~0.5 | 🟠 Average | Improvement needed |
| Below 0.3 | 🔴 Poor | Major revision required |
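
To apply this guide programmatically, a hypothetical helper might look like the following; the cut-offs mirror the table above:

```python
def interpret_score(score: float) -> str:
    """Map an evaluation score to the assessment bands in the table above."""
    if score >= 0.7:
        return "🟢 Excellent: maintain current settings"
    if score >= 0.5:
        return "🟡 Good: consider fine-tuning"
    if score >= 0.3:
        return "🟠 Average: improvement needed"
    return "🔴 Poor: major revision required"

print(interpret_score(0.62))  # 🟡 Good: consider fine-tuning
```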
## 🔍 Advanced Usage
### Graded Relevance Mode
```python
# Graded relevance mode - uses similarity scores as relevance grades
results = evaluate_with_ranx_similarity(
    retriever=your_retriever,
    questions=questions,
    reference_contexts=references,
    method='embedding',
    similarity_threshold=0.6,
    use_graded_relevance=True  # Uses similarity scores as relevance grades
)
print(f"NDCG@5: {results['ndcg@5']:.3f}")
```
> **Note on Graded Relevance**: The `use_graded_relevance` parameter primarily affects NDCG (Normalized Discounted Cumulative Gain) calculation. Other metrics like Hit@K, MRR, and MAP treat relevance as binary in the ranx library. Use graded relevance when you need to distinguish between different levels of document relevance quality.
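
Under the hood, graded relevance corresponds to non-binary qrel scores in ranx. A minimal sketch of the idea using the ranx API directly, with made-up query and document IDs for illustration:

```python
from ranx import Qrels, Run, evaluate

# Graded qrels: higher scores mark more relevant documents.
qrels = Qrels({"q_1": {"doc_1": 2, "doc_3": 1}})

# Run scores reflect the retriever's ranking (descending).
run = Run({"q_1": {"doc_1": 0.9, "doc_2": 0.7, "doc_3": 0.5}})

# NDCG uses the graded scores; hit_rate, mrr, and map treat them as binary.
print(evaluate(qrels, run, ["ndcg@5", "hit_rate@5", "mrr", "map@5"]))
```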
### Custom Embedding Models
```python
# Use custom embedding model
results = evaluate_with_ranx_similarity(
    retriever=your_retriever,
    questions=questions,
    reference_contexts=references,
    method='embedding',
    embedding_model="your-custom-model-name",
    similarity_threshold=0.6,
    use_graded_relevance=True
)
```
### Configurable ROUGE Types
```python
# Compare different ROUGE metrics
for rouge_type in ['rouge1', 'rouge2', 'rougeL']:
    results = evaluate_with_ranx_similarity(
        retriever=your_retriever,
        questions=questions,
        reference_contexts=references,
        method='kiwi_rouge',
        rouge_type=rouge_type,
        tokenize_method='morphs',
        similarity_threshold=0.3
    )
    print(f"{rouge_type.upper()}: Hit@5 = {results['hit_rate@5']:.3f}")
```
### Threshold Sensitivity Analysis
```python
# Analyze how different thresholds affect evaluation
thresholds = [0.3, 0.5, 0.7]
for threshold in thresholds:
    results = evaluate_with_ranx_similarity(
        retriever=your_retriever,
        questions=questions,
        reference_contexts=references,
        similarity_threshold=threshold
    )
    print(f"Threshold {threshold}: Hit@5={results['hit_rate@5']:.3f}, NDCG@5={results['ndcg@5']:.3f}")
```
## 📚 Examples
- [Basic Tokenizer Example](examples/basic_tokenizer.py)
- [BGE-M3 Evaluation Example](examples/bge_m3_evaluation.py)
- [Embedding Models Comparison](examples/embedding_models_comparison.py)
- [Comprehensive Comparison](examples/comprehensive_comparison.py)
## 🤝 Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built on top of [ranx](https://github.com/AmenRa/ranx) by Elias Bassani
- Korean morphological analysis powered by [Kiwi](https://github.com/bab2min/kiwipiepy)
- Embedding support via [sentence-transformers](https://github.com/UKPLab/sentence-transformers)
## 📞 Support
- 🐛 Issue Tracker: Please submit issues on GitHub
- 📧 Email: ontofinance@gmail.com
---
**ranx-k** - Empowering Korean RAG evaluation with precision and ease!