# ragEvals

- **Name:** ragEvals
- **Version:** 0.1.0
- **Summary:** A library for evaluating Retrieval-Augmented Generation (RAG) systems
- **Home page:** https://github.com/funnyPhani
- **Author:** Phani Siginamsetty (siginamsettyphani@gmail.com)
- **Requires Python:** >=3.10
- **Upload time:** 2024-05-23 04:29:39
# RAG Evaluator

## Overview

RAG Evaluator is a Python library for evaluating Retrieval-Augmented Generation (RAG) systems. It provides various metrics to evaluate the quality of generated text against reference text.

## Installation

You can install the library using pip:

```bash
pip install rag-evaluator
```

## Usage

Here's how to use the RAG Evaluator library:

```python
from rag_evaluator import RAGEvaluator

# Initialize the evaluator
evaluator = RAGEvaluator()

# Input data
question = "What are the causes of climate change?"
response = "Climate change is caused by human activities."
reference = "Human activities such as burning fossil fuels cause climate change."

# Evaluate the response
metrics = evaluator.evaluate_all(question, response, reference)

# Print the results
print(metrics)
```
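Assuming `evaluate_all` returns a mapping of metric names to scores (an assumption; check the library source for the exact keys and types), the results can be printed one per line:

```python
# Hypothetical follow-up: assumes `metrics` is a dict of metric name -> score.
for name, score in metrics.items():
    print(f"{name}: {score}")
```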

## Streamlit Web App

To run the Streamlit web app:

1. `cd` into the Streamlit app folder.
2. Create a virtual environment.
3. Activate it.
4. Install the dependencies.
5. Run the app:

```bash
streamlit run app.py
```
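On Linux or macOS, those steps might look like the following; the `streamlit_app/` folder name and the `requirements.txt` file are assumptions about the repository layout:

```bash
cd streamlit_app                 # assumed folder name
python -m venv .venv             # create a virtual environment
source .venv/bin/activate        # activate it
pip install -r requirements.txt  # assumed dependency file
streamlit run app.py
```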

## Metrics

The following metrics are provided by the library:

- **BLEU**: Measures the overlap between the generated output and reference text based on n-grams.
- **ROUGE-1**: Measures the overlap of unigrams between the generated output and reference text.
- **BERT Score**: Evaluates the semantic similarity between the generated output and reference text using BERT embeddings.
- **Perplexity**: Measures how well a language model predicts the text.
- **Diversity**: Measures the uniqueness of bigrams in the generated output (see the sketch after this list).
- **Racial Bias**: Detects the presence of biased language in the generated output.
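For intuition, here is a minimal, library-independent sketch of the diversity idea as a distinct-2 score: the fraction of unique bigrams among all bigrams. The whitespace tokenizer and the function name `distinct_2` are illustrative choices, not this library's implementation:

```python
def distinct_2(text: str) -> float:
    """Fraction of unique bigrams among all bigrams (a common diversity proxy)."""
    tokens = text.split()  # illustrative whitespace tokenizer
    bigrams = list(zip(tokens, tokens[1:]))
    if not bigrams:  # empty or single-token text has no bigrams
        return 0.0
    return len(set(bigrams)) / len(bigrams)

print(distinct_2("the cat sat on the mat the cat"))  # 6 unique of 7 bigrams, ~0.857
```

Higher values indicate less repetitive output; a score of 1.0 means no bigram repeats.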

## Testing

To run the tests, use the following command:

```bash
python -m unittest discover -s rag_evaluator -p "test_*.py"
```
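As an illustration of the naming convention that `discover` expects, a minimal test file (the file name `rag_evaluator/test_smoke.py` is hypothetical) might look like this; it only checks that a non-empty result comes back:

```python
# rag_evaluator/test_smoke.py -- hypothetical example file
import unittest

from rag_evaluator import RAGEvaluator

class TestSmoke(unittest.TestCase):
    def test_evaluate_all_returns_metrics(self):
        evaluator = RAGEvaluator()
        metrics = evaluator.evaluate_all(
            "What are the causes of climate change?",
            "Climate change is caused by human activities.",
            "Human activities such as burning fossil fuels cause climate change.",
        )
        self.assertTrue(metrics)  # expect a non-empty metrics object

if __name__ == "__main__":
    unittest.main()
```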
## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

## Contributing

Contributions are welcome! If you have any improvements, suggestions, or bug fixes, feel free to create a pull request (PR) or open an issue on GitHub. Please ensure your contributions adhere to the project's coding standards and include appropriate tests.

### How to Contribute

1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes.
4. Run tests to ensure everything is working.
5. Commit your changes and push to your fork.
6. Create a pull request (PR) with a detailed description of your changes.

## Contact

If you have any questions or need further assistance, feel free to reach out via [email](mailto:aianytime07@gmail.com).

            
