# grounded-ai

- Name: grounded-ai
- Version: 1.0.5
- Summary: A Python package for evaluating LLM application outputs.
- Upload time: 2024-06-22 02:07:27
- Requires Python: >=3.8
- Keywords: nlp, qa, toxicity, rag, evaluation, language-model, transformer
## GroundedAI

### Overview

The `grounded-ai` package is a tool developed by GroundedAI to evaluate the performance of large language models (LLMs) and their applications. It leverages our own fine-tuned small language models and metric-specific adapters to compute various metrics, providing insight into the quality and reliability of LLM outputs.
Our models are available at https://huggingface.co/grounded-ai.

### Features

- **Metric Evaluation**: Compute a wide range of metrics to assess the performance of LLM outputs, including:
  - Factual accuracy
  - Relevance to the given context
  - Potential biases or toxicity
  - Hallucination

- **Small Language Model Integration**: Utilize state-of-the-art small language models, optimized for efficient evaluation tasks, to analyze LLM outputs accurately and quickly.

- **Adapter Support**: Leverage GroundedAI's proprietary adapters, such as the `phi3-toxicity-judge` adapter, to fine-tune the small language models for specific domains, tasks, or evaluation criteria, ensuring tailored and precise assessments.

- **Flexible Input/Output Handling**: Accept LLM outputs in various formats (text, JSON, etc.) and provide evaluation results in a structured and easily consumable manner.

### Getting Started

Install the `grounded-ai` package from [PyPI](https://pypi.org/project/grounded-ai/):

```bash
pip install grounded-ai==1.0.5
```
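
To confirm the installation, importing the evaluator used in the example below should succeed without errors (a minimal sanity check; the import path is taken from the usage example that follows):

```python
# Quick sanity check: this import should succeed after installation.
from grounded_ai.evaluators.toxicity_evaluator import ToxicityEvaluator

print(ToxicityEvaluator.__name__)  # -> "ToxicityEvaluator"
```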

### Example Usage: Toxicity Evaluation

The `ToxicityEvaluator` class evaluates the toxicity of one or more input texts. Here's an example of how to use it:

```python
from grounded_ai.evaluators.toxicity_evaluator import ToxicityEvaluator

# quantization=True enables quantized inference (faster, lower memory); it is optional
toxicity_evaluator = ToxicityEvaluator(quantization=True)
# warmup() loads the base model and the GroundedAI adapter
toxicity_evaluator.warmup()
data = [
    "That guy is so stupid and ugly",
    "Bunnies are the cutest animals in the world"
]
response = toxicity_evaluator.evaluate(data)
# Example output:
# {'toxic': 1, 'non-toxic': 1, 'percentage_toxic': 50.0}
```

In this example, we initialize the `ToxicityEvaluator` with `quantization=True`. This optional parameter enables quantized inference, which is faster and uses less memory.
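
If quantization is not desired (for example, on hardware where it is unsupported), the flag can presumably be set to `False` or omitted; this is a sketch based only on the parameter shown above, so verify the default behavior against the package source:

```python
from grounded_ai.evaluators.toxicity_evaluator import ToxicityEvaluator

# Assumption: quantization=False (or omitting the argument) loads the
# non-quantized model; confirm the package default before relying on this.
toxicity_evaluator = ToxicityEvaluator(quantization=False)
toxicity_evaluator.warmup()
response = toxicity_evaluator.evaluate(["Bunnies are the cutest animals in the world"])
```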

We then load the base model and the GroundedAI adapter using the `warmup()` method.

Next, we define a list of texts (`data`) that we want to evaluate for toxicity.

Finally, we call the `evaluate` method with the `data` list, and it returns a dictionary containing the number of toxic and non-toxic texts, as well as the percentage of toxic texts.

In the output, we can see that out of the two texts, one is classified as toxic, and the other as non-toxic, resulting in a 50% toxicity percentage.
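
As a quick illustration of how the returned dictionary might be consumed, the keys shown above can drive a simple threshold check (the 20% threshold below is an arbitrary, hypothetical value, not part of the package):

```python
# Flag a batch for review when more than 20% of the texts are judged toxic.
# The keys match the example output above; the threshold is purely illustrative.
TOXICITY_THRESHOLD = 20.0

def needs_review(result: dict) -> bool:
    return result["percentage_toxic"] > TOXICITY_THRESHOLD

print(needs_review(response))  # True for the example output (50.0 > 20.0)
```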

### Documentation

Detailed documentation, including API references, examples, and guides, is coming soon at [https://groundedai.tech/api](https://groundedai.tech/api).

### Contributing

We welcome contributions from the community! If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request on the [GroundedAI GitHub repository](https://github.com/grounded-ai/grounded_ai).

### License

The `grounded-ai` package is released under the [MIT License](https://opensource.org/licenses/MIT).

            
