raga-llm-eval


Name: raga-llm-eval
Version: 2.1.0
Summary: Package for LLM Evaluation
Upload time: 2024-04-18 11:23:34
Home page: None
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.8
License: None
Keywords: ragaai, raga, llm, testing, llm-eval
Requirements: No requirements were recorded.
            <p align="center">
    <img src="https://github.com/aristotle-ai/raga-llm-eval/blob/v2/docs/assets/logo-lg_white.png" alt="RagaAI - Logo" width="100%">
</p>

<h1 align="center">
    Raga LLM Hub
</h1>

<h3 align="center">
    <a href="https://raga.ai">Raga AI</a> |
    <a href="https://docs.raga.ai/raga-llm-hub">Documentation</a> |
    <a href="https://docs.raga.ai/raga-llm-hub/quickstart">Getting Started</a> 

</h3>


<div align="center">


[![PyPI - Version](https://img.shields.io/pypi/v/raga-llm-eval?label=PyPI%20Package)](https://badge.fury.io/py/raga-llm-eval) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1PQGqDGdcSUxhSvpSQYX8ZdHf5r90WSYf?usp=sharing) [![Python Compatibility](https://img.shields.io/pypi/pyversions/raga-llm-eval)](https://pypi.org/project/raga-llm-eval/)

</div>


Welcome to Raga LLM Eval, a comprehensive evaluation toolkit for Large Language Models (LLMs). This toolkit provides a suite of tests to evaluate various aspects of language model performance, including relevance, understanding, coherence, toxicity, and more.

## Installation

### Using pip

```bash
python -m venv venv
source venv/bin/activate
pip install raga-llm-eval
```

* `python -m venv venv` - Create a new Python virtual environment.
* `source venv/bin/activate` - Activate the environment.
* `pip install raga-llm-eval` - Install the package.

### Using conda
* `conda create --name myenv` - Create a new conda environment.
* `conda activate myenv` - Activate the environment.
* `python -m pip install raga-llm-eval` - Install the package.
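
Or, as a single copy-and-paste block (the same commands as above):

```bash
conda create --name myenv
conda activate myenv
python -m pip install raga-llm-eval
```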



## Quick Tour
### Setting up
```py
from raga_llm_eval import RagaLLMEval, get_data

# Initialize with API key
evaluator = RagaLLMEval(api_keys={"OPENAI_API_KEY": "xxx"})
```
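
If you prefer not to hard-code the key, you can read it from an environment variable and pass it through the same `api_keys` argument. A minimal sketch, assuming `OPENAI_API_KEY` is already exported in your shell:

```py
import os

from raga_llm_eval import RagaLLMEval

# Read the key from the environment instead of embedding it in source code
evaluator = RagaLLMEval(api_keys={"OPENAI_API_KEY": os.environ["OPENAI_API_KEY"]})
```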

### Listing available tests
```py
# List available tests
evaluator.list_available_tests()
```

### Adding and Running Tests
#### Using Custom Data
```py
# Add tests with custom data
evaluator.add_test(
    test_names=["relevancy_test", "summarisation_test"],
    data={
        "prompt": ["How are you?", "How do you do?"],
        "context": ["You are a student, answering your teacher."],
        "response": ["I am fine. Thank you", "Doooo do do do doooo..."],
    },
    arguments={"model": "gpt-3.5-turbo-1106", "threshold": 0.6},
).run()

evaluator.print_results()

```

#### Using Provided Test Data
```py
# Add tests with provided test data
evaluator.add_test(
    test_names=["relevancy_test"],
    data=get_data("relevancy_test", num_samples=1),
    arguments={"model": "gpt-3.5-turbo-1106", "threshold": 0.6},
).run()

evaluator.print_results()
```
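
To see what the bundled sample looks like before running anything, you can print the return value of `get_data`; it is the same structure that `add_test` accepts in its `data` argument:

```py
# Inspect the bundled sample for the relevancy test
sample = get_data("relevancy_test", num_samples=1)
print(sample)
```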

## Advanced Usage: Piping and Saving Results
The `raga_llm_eval` package supports a fluent interface, allowing you to chain methods together using a piping style. This approach can make your code more readable and concise. Additionally, you can save the evaluation results to a JSON file for further analysis or record-keeping. Below are examples demonstrating these capabilities.

### Piping Method Calls
Piping lets you chain multiple operations in a single statement, keeping the code concise and easy to maintain. Because `add_test()` returns an object you can call `.run()` on directly, adding and running tests fits in one chained statement; the results are then printed with `print_results()`:

```python
# Method piping
evaluator.add_test(
    test_names=["relevancy_test", "summarisation_test"],
    data={
        "prompt": ["What is the capital of France?", "Explain quantum entanglement."],
        "context": ["You are a geography teacher.", "You are a physics professor explaining to a student."],
        "response": ["The capital of France is Paris.", "Quantum entanglement is a phenomenon where particles become interconnected..."],
    },
    arguments={"model": "gpt-3.5-turbo-1106", "threshold": 0.75},
).run()

evaluator.print_results()
```

### Saving Results to a File
```python
# Adding tests, running them, printing the results, and saving them to a JSON file
evaluator.add_test(
    test_names=["relevancy_test", "summarisation_test"],
    data={
        "prompt": ["What is the capital of France?", "Explain quantum entanglement."],
        "context": ["You are a geography teacher.", "You are a physics professor explaining to a student."],
        "response": ["The capital of France is Paris.", "Quantum entanglement is a phenomenon where particles become interconnected..."],
    },
    arguments={"model": "gpt-3.5-turbo-1106", "threshold": 0.75},
).run()

evaluator.print_results()

# Persist the results to a JSON file. The method name below is assumed for
# illustration; check the API of your installed version for the exact call.
evaluator.save_results("evaluation_results.json")
```
This executes the tests, prints the results to the console, and saves them to a file named `evaluation_results.json` in your current working directory.

Explore these capabilities to get the most out of your language model evaluations with `raga-llm-eval`.

Happy Evaluating!

## Tests Supported

### Relevance & Understanding
In this suite of tests, we focus on the model's ability to provide relevant, accurate, and contextually appropriate responses, including its precision, recall, and overall understanding of the given context. A short usage sketch follows the list of tests.

1. **Relevancy Test**: Measures the relevance of the LLM response to the input prompt.

2. **Contextual Precision Test**: Evaluates if relevant nodes in context are ranked higher, resulting in a dictionary with precision score, reason, and details. Higher scores indicate more precise context alignment.

3. **Contextual Recall Test**: Measures alignment of retrieval context with expected response, outputting a dictionary with recall score, reason, and details. Higher scores denote better recall.

4. **Contextual Relevancy Test**: Assesses the overall relevance of context to the input prompt, providing a dictionary with relevancy score, reason, and details. Higher scores mean more relevant context.

5. **Hallucination Test**: Determines the hallucination score of the model's response compared to the context, offering a dictionary with scores and details. Higher scores indicate more hallucinated responses.

6. **Faithfulness Test**: Evaluates if the LLM response aligns with the retrieval context, producing a dictionary with a faithfulness score and details. Higher scores suggest more faithful responses.

7. **Consistency Test**: Provides a score for the consistency of responses, with a dictionary containing scores and evaluation details. Higher scores indicate better consistency.

8. **Conciseness Test**: Checks the conciseness of the LLM response, yielding a dictionary with a conciseness score and related information. Higher scores denote more concise responses.

9. **Coherence Test**: Assesses the coherence of the LLM response, resulting in a dictionary with coherence scores and details. Higher scores suggest more coherent responses.

10. **Correctness Test**: Evaluates the correctness of the LLM response, offering a dictionary with correctness scores and information. Higher scores indicate more correct responses.

11. **Summarization Test**: Determines the quality of summaries generated by the LLM, providing a dictionary with summarization scores and details. Higher scores mean better summary quality.

12. **Grade Score Test**: Provides a grade score indicating the education level required to understand the text, with a dictionary containing scores and details. Higher scores indicate a higher education level needed.

13. **Complexity Test**: Offers a score for the complexity of the text, producing a dictionary with complexity scores and submetrics. Higher scores signify more complex texts.

14. **Readability Test**: Provides a readability score, yielding a dictionary with scores and details. Higher scores indicate more readable texts.

15. **Maliciousness Test**: Evaluates the maliciousness of prompts and responses, resulting in a dictionary with scores and evaluation details. Higher scores indicate more malicious content.

16. **Toxicity Test**: Provides a score for the toxicity of model responses, offering a dictionary with toxicity scores. Higher scores suggest more toxic responses.

17. **Bias Test**: Measures the bias score of model responses, yielding a dictionary with scores. Higher scores indicate more biased responses.

18. **Response Toxicity Test**: Assesses the toxicity of model responses, providing a dictionary with toxicity scores. Higher scores suggest more toxic responses.

19. **Refusal Test**: Evaluates the model's refusal similarity, offering a dictionary with refusal scores. Higher scores indicate a greater likelihood of refusal.

20. **Prompt Injection Test**: Checks for injection issues in prompts, resulting in a dictionary with injection scores. Lower scores indicate better prompts.

21. **Coverage Test**: Assesses whether all concepts are covered by model responses, providing a dictionary with coverage ratios. This test evaluates concept utilization.

22. **POS Test**: Evaluates the accuracy of part-of-speech tagging in model responses, offering a dictionary with accuracy ratios. It checks for correct PoS tag usage.

23. **Length Test**: Measures the number of words in generated responses, yielding a dictionary with length details. This test assesses response length appropriateness.

24. **Winner Test**: Compares responses of two models or between a model and human annotation, providing a dictionary indicating which is better. It evaluates response quality.

25. **Overall Test**: Compares the overall score of two models on a provided task, offering a dictionary with overall scores. This test evaluates model performance comprehensively.

26. **Sentiment Analysis Test**: Provides a score for the sentiment of model responses, yielding a dictionary with sentiment scores. Higher scores indicate more positive responses.

27. **Generic Evaluation Test**: Returns a score based on specific criteria, response, and context, offering a dictionary with evaluation scores. Higher scores indicate better response quality.

28. **Cosine Similarity Test**: Provides a score for the similarity between the prompt and response, resulting in a dictionary with similarity scores. Higher scores indicate greater similarity.
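
Any of these tests plugs into the same `add_test(...).run()` flow shown in the Quick Tour. A minimal sketch, assuming the test identifiers follow the snake_case pattern used above (`"toxicity_test"` is an assumed name; confirm the exact strings with `evaluator.list_available_tests()`):

```py
# Run a toxicity check on a single prompt/response pair.
# "toxicity_test" is an assumed identifier; verify it with evaluator.list_available_tests().
evaluator.add_test(
    test_names=["toxicity_test"],
    data={
        "prompt": ["Tell me about your day."],
        "response": ["I had a great day, thanks for asking!"],
    },
    arguments={"model": "gpt-3.5-turbo-1106", "threshold": 0.5},
).run()

evaluator.print_results()
```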



## Learn More
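
See the [documentation](https://docs.raga.ai/raga-llm-hub) and the [getting started guide](https://docs.raga.ai/raga-llm-hub/quickstart) for more details.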

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "raga-llm-eval",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "ragaai, raga, llm, testing, llm-eval",
    "author": null,
    "author_email": "Raga AI <kiran.scaria@raga.ai>",
    "download_url": "https://files.pythonhosted.org/packages/cd/d6/79357067af90e3e4d08e2cc52d26381d1dbc5095b5d9dbd8f014a9bdd340/raga_llm_eval-2.1.0.tar.gz",
    "platform": null,
    "description": "<p align=\"center\">\n    <img src=\"https://github.com/aristotle-ai/raga-llm-eval/blob/v2/docs/assets/logo-lg_white.png\" alt=\"RagaAI - Logo\" width=\"100%\">\n</p>\n\n<h1 align=\"center\">\n    Raga LLM Hub\n</h1>\n\n<h3 align=\"center\">\n    <a href=\"https://raga.ai\">Raga AI</a> |\n    <a href=\"https://docs.raga.ai/raga-llm-hub\">Documentation</a> |\n    <a href=\"https://docs.raga.ai/raga-llm-hub/quickstart\">Getting Started</a> \n\n</h3>\n\n\n<div align=\"center\">\n\n\n[![PyPI - Version](https://img.shields.io/pypi/v/raga-llm-eval?label=PyPI%20Package)](https://badge.fury.io/py/raga-llm-eval) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1PQGqDGdcSUxhSvpSQYX8ZdHf5r90WSYf?usp=sharing)\n</a> [![Python Compatibility](https://img.shields.io/pypi/pyversions/raga-llm-eval)](https://pypi.org/project/raga-llm-eval/) []()\n\n</div>\n\n\nWelcome to Raga LLM Eval, a comprehensive evaluation toolkit for Language and Learning Models (LLMs). This toolkit provides a suite of tests to evaluate various aspects of language model performance, including relevance, understanding, coherence, toxicity, and more.\n\n## Installation\n\n### Using pip\n\n```bash\npython -m venv venv\nsource venv/bin/activate\npip install raga-llm-eval\n\n\n* `python -m venv venv` - Create a new python environment.\n* `source venv/bin/activate` - Activate the environment.\n* `pip install raga-llm-eval` - Install the package\n\n### with conda\n* `conda create --name myenv` - Create a new python environment.\n* `conda activate myenv` - Activate the environment.\n* `python -m pip install raga-llm-eval` - Install the package\n\n\n\n## Quick Tour\n### Setting up\n```py\nfrom raga_llm_eval import RagaLLMEval, get_data\n\n# Initialize with API key\nevaluator = RagaLLMEval(api_keys={\"OPENAI_API_KEY\": \"xxx\"})\n```\n\n###  List available\n```py\n# List available tests\nevaluator.list_available_tests()\n```\n\n### Adding and Running Tests\n#### Using Custom Data\n```py\n# Add tests with custom data\nevaluator.add_test(\n    test_names=[\"relevancy_test\", \"summarisation_test\"],\n    data={\n        \"prompt\": [\"How are you?\", \"How do you do?\"],\n        \"context\": [\"You are a student, answering your teacher.\"],\n        \"response\": [\"I am fine. Thank you\", \"Doooo do do do doooo...\"],\n    },\n    arguments={\"model\": \"gpt-3.5-turbo-1106\", \"threshold\": 0.6},\n).run()\n\nevaluator.print_results()\n\n```\n\n#### Using Provided Test Data\n```py\n# Add tests with provided test data\nevaluator.add_test(\n    test_names=[\"relevancy_test\"],\n    data=get_data(\"relevancy_test\", num_samples=1),\n    arguments={\"model\": \"gpt-3.5-turbo-1106\", \"threshold\": 0.6},\n).run()\n\nevaluator.print_results()\n```\n\n## Advanced Usage: Piping and Saving Results\nThe `raga_llm_eval` package supports a fluent interface, allowing you to chain methods together using a piping style. This approach can make your code more readable and concise. Additionally, you can save the evaluation results to a JSON file for further analysis or record-keeping. Below are examples demonstrating these capabilities.\n\n### Piping Method Calls\nPiping allows you to chain multiple operations in a single statement. This can simplify your code, making it easier to read and maintain. 
Here's an example of how to use piping to add a test, run it, and print the results:\n\n```python\n# Method piping\nevaluator.add_test(\n    test_names=[\"relevancy_test\", \"summarisation_test\"],\n    data={\n        \"prompt\": [\"What is the capital of France?\", \"Explain quantum entanglement.\"],\n        \"context\": [\"You are a geography teacher.\", \"You are a physics professor explaining to a student.\"],\n        \"response\": [\"The capital of France is Paris.\", \"Quantum entanglement is a phenomenon where particles become interconnected...\"],\n    },\n    arguments={\"model\": \"gpt-3.5-turbo-1106\", \"threshold\": 0.75},\n).run()\n\nevaluator.print_results()\n```\n\n### Saving Results to a File\n```python\n# Adding a test, running it, printing, and saving the results to a JSON file\nevaluator.add_test(\n    test_names=[\"relevancy_test\", \"summarisation_test\"],\n    data={\n        \"prompt\": [\"What is the capital of France?\", \"Explain quantum entanglement.\"],\n        \"context\": [\"You are a geography teacher.\", \"You are a physics professor explaining to a student.\"],\n        \"response\": [\"The capital of France is Paris.\", \"Quantum entanglement is a phenomenon where particles become interconnected...\"],\n    },\n    arguments={\"model\": \"gpt-3.5-turbo-1106\", \"threshold\": 0.75},\n).run()\n\nevaluator.print_results()\n\n```\nThis will execute the tests, print the results to the console, and also save the results in a file named `evaluation_results.json` in your current working directory.\n\nExplore these capabilities to get the most out of your language model evaluations with `raga-llm-eval`.\n\nHappy Evaluating!\n\n## Tests Supported\n\n## Relevance & Understanding\nIn this suite of tests, we focus on the model's ability to provide relevant, accurate, and contextually appropriate responses. This includes evaluating the model's precision, recall, and overall understanding of the given context to generate relevant answers.\n\n1. **Relevancy Test**: Measures the relevance of LLM response to the input prompt\n\n2. **Contextual Precision Test**: Evaluates if relevant nodes in context are ranked higher, resulting in a dictionary with precision score, reason, and details. Higher scores indicate more precise context alignment.\n\n3. **Contextual Recall Test**: Measures alignment of retrieval context with expected response, outputting a dictionary with recall score, reason, and details. Higher scores denote better recall.\n\n4. **Contextual Relevancy Test**: Assesses the overall relevance of context to the input prompt, providing a dictionary with relevancy score, reason, and details. Higher scores mean more relevant context.\n\n5. **Hallucination Test**: Determines the hallucination score of the model's response compared to the context, offering a dictionary with scores and details. Higher scores indicate more hallucinated responses.\n\n6. **Faithfulness Test**: Evaluates if the LLM response aligns with the retrieval context, producing a dictionary with a faithfulness score and details. Higher scores suggest more faithful responses.\n\n7. **Consistency Test**: Provides a score for the consistency of responses, with a dictionary containing scores and evaluation details. Higher scores indicate better consistency.\n\n8. **Conciseness Test**: Checks the conciseness of the LLM response, yielding a dictionary with a conciseness score and related information. Higher scores denote more concise responses.\n\n9. 
**Coherence Test**: Assesses the coherence of the LLM response, resulting in a dictionary with coherence scores and details. Higher scores suggest more coherent responses.\n\n10. **Correctness Test**: Evaluates the correctness of the LLM response, offering a dictionary with correctness scores and information. Higher scores indicate more correct responses.\n\n11. **Summarization Test**: Determines the quality of summaries generated by the LLM, providing a dictionary with summarization scores and details. Higher scores mean better summary quality.\n\n12. **Grade Score Test**: Provides a grade score indicating the education level required to understand the text, with a dictionary containing scores and details. Higher scores indicate a higher education level needed.\n\n13. **Complexity Test**: Offers a score for the complexity of the text, producing a dictionary with complexity scores and submetrics. Higher scores signify more complex texts.\n\n14. **Readability Test**: Provides a readability score, yielding a dictionary with scores and details. Higher scores indicate more readable texts.\n\n15. **Maliciousness Test**: Evaluates the maliciousness of prompts and responses, resulting in a dictionary with scores and evaluation details. Higher scores indicate more malicious content.\n\n16. **Toxicity Test**: Provides a score for the toxicity of model responses, offering a dictionary with toxicity scores. Higher scores suggest more toxic responses.\n\n17. **Bias Test**: Measures the bias score of model responses, yielding a dictionary with scores. Higher scores indicate more biased responses.\n\n18. **Response Toxicity Test**: Assesses the toxicity of model responses, providing a dictionary with toxicity scores. Higher scores suggest more toxic responses.\n\n19. **Refusal Test**: Evaluates the model's refusal similarity, offering a dictionary with refusal scores. Higher scores indicate a greater likelihood of refusal.\n\n20. **Prompt Injection Test**: Checks for injection issues in prompts, resulting in a dictionary with injection scores. Lower scores indicate better prompts.\n\n21. **Coverage Test**: Assesses whether all concepts are covered by model responses, providing a dictionary with coverage ratios. This test evaluates concept utilization.\n\n22. **POS Test**: Evaluates the accuracy of part-of-speech tagging in model responses, offering a dictionary with accuracy ratios. It checks for correct PoS tag usage.\n\n23. **Length Test**: Measures the number of words in generated responses, yielding a dictionary with length details. This test assesses response length appropriateness.\n\n24. **Winner Test**: Compares responses of two models or between a model and human annotation, providing a dictionary indicating which is better. It evaluates response quality.\n\n25. **Overall Test**: Compares the overall score of two models on a provided task, offering a dictionary with overall scores. This test evaluates model performance comprehensively.\n\n26. **Sentiment Analysis Test**: Provides a score for the sentiment of model responses, yielding a dictionary with sentiment scores. Higher scores indicate more positive responses.\n\n27. **Generic Evaluation Test**: Returns a score based on specific criteria, response, and context, offering a dictionary with evaluation scores. Higher scores indicate better response quality.\n\n28. **Cosine Similarity Test**: Provides a score for the similarity between the prompt and response, resulting in a dictionary with similarity scores. 
Higher scores indicate greater similarity.\n\n\n\n## Learn More\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "Package for LLM Evaluation",
    "version": "2.1.0",
    "project_urls": {
        "Documentation": "https://github.com/aristotle-ai/raga-llm-eval/blob/main/doc",
        "Homepage": "https://raga.ai",
        "Issues": "https://github.com/aristotle-ai/raga-llm-eval/issues",
        "Repository": "https://github.com/aristotle-ai/raga-llm-eval"
    },
    "split_keywords": [
        "ragaai",
        " raga",
        " llm",
        " testing",
        " llm-eval"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "bb972bd949a4bbd75284f07a8bc05a8bce0a9522889094e63e3e1d8ea0d63e40",
                "md5": "383a236967224722094d262d73fdd73c",
                "sha256": "cf6b03abb131f454abe46b59ebbf6c439cbd84d2f45881600db3d1bc7d762475"
            },
            "downloads": -1,
            "filename": "raga_llm_eval-2.1.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "383a236967224722094d262d73fdd73c",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 706679,
            "upload_time": "2024-04-18T11:23:31",
            "upload_time_iso_8601": "2024-04-18T11:23:31.903263Z",
            "url": "https://files.pythonhosted.org/packages/bb/97/2bd949a4bbd75284f07a8bc05a8bce0a9522889094e63e3e1d8ea0d63e40/raga_llm_eval-2.1.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "cdd679357067af90e3e4d08e2cc52d26381d1dbc5095b5d9dbd8f014a9bdd340",
                "md5": "90c5238ef9332359c0cb58591c8d384f",
                "sha256": "c47238fd636d5519a2b009739f120697d3d296b66f24f0fefbe3c4d70b4519f4"
            },
            "downloads": -1,
            "filename": "raga_llm_eval-2.1.0.tar.gz",
            "has_sig": false,
            "md5_digest": "90c5238ef9332359c0cb58591c8d384f",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 584286,
            "upload_time": "2024-04-18T11:23:34",
            "upload_time_iso_8601": "2024-04-18T11:23:34.883477Z",
            "url": "https://files.pythonhosted.org/packages/cd/d6/79357067af90e3e4d08e2cc52d26381d1dbc5095b5d9dbd8f014a9bdd340/raga_llm_eval-2.1.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-18 11:23:34",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "aristotle-ai",
    "github_project": "raga-llm-eval",
    "github_not_found": true,
    "lcname": "raga-llm-eval"
}
        