krag

Name: krag
Version: 0.0.28
Summary: A Python package for RAG performance evaluation
Upload time: 2024-10-15 02:29:21
Author: Pandas-Studio
Requires Python: <3.14,>=3.10
License: MIT
Keywords: rag, recall, mrr, map, ndcg
# Krag

Krag is a Python package designed for evaluating RAG (Retrieval-Augmented Generation) systems. It provides tools for computing a range of evaluation metrics, including Hit Rate, Recall, Precision, MRR (Mean Reciprocal Rank), MAP (Mean Average Precision), and NDCG (Normalized Discounted Cumulative Gain).

## Installation

You can install Krag using pip:

```bash
pip install krag
```

## Usage Example

The following is a simple example using the `KragDocument` and `OfflineRetrievalEvaluators` classes provided by the Krag package.

```python
from krag.document import KragDocument as Document
from krag.evaluators import OfflineRetrievalEvaluators, AveragingMethod, MatchingCriteria

# Ground-truth documents for each query
actual_docs = [
    [Document(page_content="This is the content of an actual document. It contains important information."),
     Document(page_content="This is the second actual document. It has additional details.")],
    [Document(page_content="An actual document on a different topic. It introduces new concepts."),
     Document(page_content="The fourth actual document, which extends the earlier concepts.")]
]

# Documents retrieved (predicted) for each query
predicted_docs = [
    [Document(page_content="This is the content of a predicted document. It covers important information."),
     Document(page_content="The second predicted document provides additional details."),
     Document(page_content="The third predicted document may be irrelevant.")],
    [Document(page_content="A predicted document on a different topic. It presents new ideas."),
     Document(page_content="This predicted document explains the earlier concepts in more detail."),
     Document(page_content="The last predicted document provides a summary.")]
]

# Initialize the evaluator
evaluator = OfflineRetrievalEvaluators(
    actual_docs, 
    predicted_docs, 
    match_method="text",
    averaging_method=AveragingMethod.BOTH,
    matching_criteria=MatchingCriteria.PARTIAL
)

# Compute metrics (example with k=2)
hit_rate = evaluator.calculate_hit_rate(k=2)
mrr = evaluator.calculate_mrr(k=2)
recall = evaluator.calculate_recall(k=2)
precision = evaluator.calculate_precision(k=2)
f1_score = evaluator.calculate_f1_score(k=2)
map_score = evaluator.calculate_map(k=2)
ndcg = evaluator.calculate_ndcg(k=2)

# Print results
print(f"Hit Rate @2: {hit_rate}")
print(f"MRR @2: {mrr}")
print(f"Recall @2: {recall}")
print(f"Precision @2: {precision}")
print(f"F1 Score @2: {f1_score}")
print(f"MAP @2: {map_score}")
print(f"NDCG @2: {ndcg}")

# Visualize results
evaluator.visualize_results(k=2)
```

### Key Features

1. **Document Matching**:
   - The evaluator provides several methods for matching actual documents against predicted documents, including exact text matching and ROUGE-based matching (`rouge1`, `rouge2`, `rougeL`).

2. **Evaluation Metrics**:
   - **Hit Rate**: Measures the proportion of queries for which an actual document is correctly identified among the predicted documents.
   - **Recall**: Measures how many of the relevant documents are included in the top-k predictions.
   - **Precision**: Measures the precision of the top-k predictions.
   - **F1 Score**: The harmonic mean of Precision and Recall.
   - **MRR (Mean Reciprocal Rank)**: The average of the reciprocal rank of the first relevant document for each query.
   - **MAP (Mean Average Precision)**: The mean of the precision values at each rank within the top k where a relevant document appears.
   - **NDCG (Normalized Discounted Cumulative Gain)**: Evaluates ranking quality by taking document order into account, based on relevance scores. (A worked sketch of MRR, MAP, and NDCG follows this list.)

3. **ROUGE Score Matching**:
   - The `RougeOfflineRetrievalEvaluators` class extends the base evaluator to match documents and assess retrieval quality using ROUGE scores (`rouge1`, `rouge2`, `rougeL`); a sketch of this threshold-based matching follows the ROUGE example below.

4. **Result Visualization**:
   - The `visualize_results` method plots the evaluation results as a chart.
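
To make the rank-sensitive metrics concrete, here is a minimal, self-contained sketch that computes MRR, MAP@k, and NDCG@k from binary relevance judgments. It only illustrates the formulas described above; the function names and the toy `relevance` data are our own, not part of Krag's API.

```python
import math

# Each inner list holds one query's relevance judgments:
# rels[i] == 1 if the document at rank i+1 matches an actual document.

def mrr(rankings: list[list[int]]) -> float:
    """Mean of 1/rank of the first relevant document per query (0 if none)."""
    total = 0.0
    for rels in rankings:
        for i, rel in enumerate(rels):
            if rel:
                total += 1.0 / (i + 1)
                break
    return total / len(rankings)

def map_at_k(rankings: list[list[int]], k: int) -> float:
    """Mean over queries of the average precision at each relevant rank in the top k."""
    ap_sum = 0.0
    for rels in rankings:
        hits, precisions = 0, []
        for i, rel in enumerate(rels[:k]):
            if rel:
                hits += 1
                precisions.append(hits / (i + 1))
        ap_sum += sum(precisions) / hits if hits else 0.0
    return ap_sum / len(rankings)

def ndcg_at_k(rankings: list[list[int]], k: int) -> float:
    """NDCG@k with binary gains: DCG@k divided by the ideal DCG@k."""
    total = 0.0
    for rels in rankings:
        dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
        ideal = sorted(rels, reverse=True)[:k]
        idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
        total += dcg / idcg if idcg else 0.0
    return total / len(rankings)

# Two queries, three retrieved documents each.
relevance = [[1, 0, 1], [0, 1, 0]]
print(mrr(relevance))           # (1/1 + 1/2) / 2 = 0.75
print(map_at_k(relevance, 2))   # (1.0 + 0.5) / 2 = 0.75
print(ndcg_at_k(relevance, 2))  # ~0.622
```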

#### ROUGE Matching Example

```python
from krag.document import KragDocument as Document
from krag.evaluators import RougeOfflineRetrievalEvaluators, AveragingMethod, MatchingCriteria

# Initialize the evaluator with ROUGE matching
evaluator = RougeOfflineRetrievalEvaluators(
    actual_docs, 
    predicted_docs, 
    match_method="rouge1",
    averaging_method=AveragingMethod.BOTH,
    matching_criteria=MatchingCriteria.PARTIAL,
    threshold=0.5
)

# Compute metrics (example with k=2)
hit_rate = evaluator.calculate_hit_rate(k=2)
mrr = evaluator.calculate_mrr(k=2)
recall = evaluator.calculate_recall(k=2)
precision = evaluator.calculate_precision(k=2)
f1_score = evaluator.calculate_f1_score(k=2)
map_score = evaluator.calculate_map(k=2)
ndcg = evaluator.calculate_ndcg(k=2)

# Print results
print(f"ROUGE Hit Rate @2: {hit_rate}")
print(f"ROUGE MRR @2: {mrr}")
print(f"ROUGE Recall @2: {recall}")
print(f"ROUGE Precision @2: {precision}")
print(f"ROUGE F1 Score @2: {f1_score}")
print(f"ROUGE MAP @2: {map_score}")
print(f"ROUGE NDCG @2: {ndcg}")

# Visualize results
evaluator.visualize_results(k=2)
```
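
Conceptually, ROUGE-based matching counts a predicted document as a match when its ROUGE score against an actual document reaches the `threshold`. The sketch below illustrates that criterion using the `rouge-score` package (`pip install rouge-score`); it is an illustration of the idea, not Krag's internal code, and the `is_match` helper is our own.

```python
from rouge_score import rouge_scorer

# Score ROUGE-1 between an actual document and a predicted document.
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def is_match(actual_text: str, predicted_text: str, threshold: float = 0.5) -> bool:
    """Treat the prediction as a match if its ROUGE-1 F1 meets the threshold."""
    # score() returns a dict of Score(precision, recall, fmeasure) per metric.
    f1 = scorer.score(actual_text, predicted_text)["rouge1"].fmeasure
    return f1 >= threshold

print(is_match("the cat sat on the mat", "the cat sat on a mat"))    # True
print(is_match("the cat sat on the mat", "stock prices fell today")) # False
```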

## License

This project is licensed under the MIT License. See the [MIT License](https://opensource.org/licenses/MIT) for details.

## Contact

If you have any questions, please reach out by [email](mailto:ontofinances@gmail.com).
