lares


- **Name**: lares
- **Version**: 0.0.31
- **Home page**: http://packages.python.org/lares
- **Summary**: LARES: vaLidation, evAluation and REliability Solutions
- **Upload time**: 2023-08-04 03:54:59
- **Author**: Karime Maamari
- **Requires Python**: >=3.6
- **License**: MIT
- **Keywords**: evaluation, validation
# LARES: vaLidation, evAluation, and faiRnEss aSsessments
![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)
[![PyPI version](https://badge.fury.io/py/lares.svg)](https://badge.fury.io/py/lares)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/lares)

A Python package designed to assist with the evaluation and validation of models in various tasks such as translation, summarization, and rephrasing.

This package leverages a suite of existing tools and resources to provide appropriate evaluation and validation for the prompted task. The Natural Language Toolkit (NLTK), BERT, and ROUGE are employed for evaluation, while Microsoft's Fairlearn, Facebook's BART, and RoBERTa are used to assess and address the toxicity and fairness of a given model.

In addition, LARES uses datasets from HuggingFace, with the choice of datasets informed by benchmarks such as the General Language Understanding Evaluation (GLUE) benchmark.

## Features

- **Quantitative and Qualitative Evaluation**: Provides both qualitative and quantitative approaches to evaluating models. Quantitative metrics include METEOR scores for translations, normalized ROUGE scores for summarizations, and BERT scores for rephrasing tasks (see the sketch after this list). Qualitative metrics are computed from both binary user judgements and sentiment analysis of user feedback.

- **Fairness and Toxicity Validation**: Provides a quantitative measure of the toxicity and fairness of a given model for specific tasks by leveraging Fairlearn and RoBERTa.

- **Iterative Reconstruction**: Iteratively rephrases model responses with BART until they fall below a specified toxicity threshold and above a specified quality threshold.
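
As a concrete illustration of the quantitative metrics above, the sketch below computes METEOR, ROUGE-L, and BERTScore with the same packages listed under Dependencies. It is illustrative only; the exact scoring and normalization LARES applies internally may differ.

```python
# Minimal sketch of the three quantitative metrics, using the packages from
# the Dependencies section. LARES's own scoring/normalization may differ.
import nltk
from nltk.tokenize import word_tokenize
from nltk.translate.meteor_score import meteor_score
from rouge import Rouge
from bert_score import score as bert_score

nltk.download("punkt", quiet=True)    # tokenizer data for word_tokenize
nltk.download("wordnet", quiet=True)  # lexical data required by METEOR

reference = "The cat sat quietly on the mat."
candidate = "A cat was sitting quietly on the mat."

# Translation: METEOR (recent NLTK versions expect pre-tokenized input)
meteor = meteor_score([word_tokenize(reference)], word_tokenize(candidate))

# Summarization: ROUGE (taking the ROUGE-L F1 component here)
rouge_l_f1 = Rouge().get_scores(candidate, reference)[0]["rouge-l"]["f"]

# Rephrasing: BERTScore (downloads a scoring model on first use)
_, _, f1 = bert_score([candidate], [reference], lang="en")

print(f"METEOR: {meteor:.3f}  ROUGE-L F1: {rouge_l_f1:.3f}  BERTScore F1: {f1.item():.3f}")
```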

## Workflow
![](images/workflow.svg)
#### Prompt from Dataset

Start with a dataset and create a set of prompts and references to evaluate the model. The dataset can be a benchmark dataset obtained from sources such as HuggingFace, or it can be real-time data that has been scraped.

#### Task Determination/Labeling

Each task is classified according to its underlying purpose, such as translation, summarization, rephrasing, sentiment analysis, or classification. This classification provides two key benefits:

1. **Model Selection**: Understanding the task helps us choose the best model for it, improving the overall performance of our framework.
2. **Response Evaluation**: Different tasks require different evaluation metrics. By classifying our tasks, we can use the most appropriate metrics to evaluate the responses.

The user labels the datasets according to the groups to be compared. For instance, English-to-French prompts might be labeled 'fr', while English-to-Spanish prompts could be labeled 'es'. This helps identify potential biases in the model.

#### Output Generation from Model

The prompt is passed to a model, which generates a response.

#### Evaluation According to Task Label and Validation

The evaluation score is calculated by comparing the model's response to a reference using a task-specific metric. The validation score is calculated by using a pre-trained model to determine the sentiment of the response and assign a toxicity/profanity metric. If the user chooses not to use the optional Rephrase/Detox loop, the scores and response are added to an output dictionary.
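
The README does not name the exact checkpoint behind the validation score; the sketch below shows one way to obtain a RoBERTa-based toxicity score through the `transformers` pipeline. The model name is an assumption, not necessarily the checkpoint LARES uses.

```python
# Sketch only: a RoBERTa-based toxicity score via the transformers pipeline.
# The checkpoint name is an assumption, not necessarily what LARES ships with.
from transformers import pipeline

toxicity_clf = pipeline(
    "text-classification",
    model="s-nlp/roberta_toxicity_classifier",  # assumed, publicly available checkpoint
    top_k=None,                                 # return scores for every label
)

response = "Thanks for the question, here is a polite answer."

# The probability assigned to the toxic label would serve as the validation
# score; label names depend on the checkpoint's configuration.
for item in toxicity_clf([response])[0]:
    print(item["label"], round(item["score"], 3))
```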

#### (OPTIONAL) Check Against Threshold, Check Num. Iterations, Rephrase/Detox, and Optional User Evaluation

The user can set thresholds for the validation and evaluation scores; a sketch of this control flow follows the list below.

1. If both scores exceed their respective thresholds, they, along with the response, are added to the output dictionary.
2. If either score fails to meet its threshold, we enter an iterative loop of rephrasing and detoxifying. The user can set a maximum number of iterations for this process.

    A. The response will be rephrased and/or detoxified until it meets the threshold or until the maximum number of iterations is reached.
    
    B. If both scores exceed their thresholds, they, along with the response, are added to the output dictionary.
    
    C. If we reach the maximum number of iterations without exceeding both thresholds, the user is asked to review the results. This provides an opportunity to catch nuances in responses that automated scoring alone may miss. This step is optional. If the user participates, their evaluation is added to the output dictionary. If not, the scores from the final iteration are added.
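
Below is a minimal sketch of this control flow. `evaluate`, `validate`, and `rephrase` are hypothetical stand-ins for the task metric, the toxicity model, and the BART-based rephraser; the actual loop inside LARES's `generate` may be organized differently.

```python
# Sketch of the optional threshold/rephrase loop described above. evaluate(),
# validate(), and rephrase() are hypothetical stand-ins; LARES's internals may differ.

def evaluate(response, reference):
    """Stand-in for a task-specific metric (word overlap instead of METEOR/ROUGE)."""
    shared = set(response.lower().split()) & set(reference.lower().split())
    return len(shared) / max(len(reference.split()), 1)

def validate(response):
    """Stand-in for the toxicity score (pretend every response is non-toxic)."""
    return 0.0

def rephrase(response):
    """Stand-in for the BART rephrase/detox step (returns the input unchanged)."""
    return response

def threshold_loop(response, reference, eval_threshold=0.5, tox_threshold=0.2,
                   max_iterations=3, ask_user=False):
    eval_score, tox_score = evaluate(response, reference), validate(response)
    for _ in range(max_iterations):
        # 1./B. Both thresholds met: accept the response and its scores.
        if eval_score >= eval_threshold and tox_score <= tox_threshold:
            return {"response": response, "eval": eval_score, "tox": tox_score}
        # 2./A. Otherwise rephrase/detox and re-score.
        response = rephrase(response)
        eval_score, tox_score = evaluate(response, reference), validate(response)
    # C. Max iterations reached: optionally fall back to a manual judgement.
    if ask_user:
        verdict = input(f"Accept this response? (y/n)\n{response}\n> ")
        return {"response": response, "user_ok": verdict.lower() == "y"}
    return {"response": response, "eval": eval_score, "tox": tox_score}

print(threshold_loop("The cat sat on the mat.", "A cat sat on a mat."))
```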

#### Fairness

At this point, we have a set of labeled responses and their corresponding validation and evaluation scores. These labels and scores allow us to identify potential biases in the model. We provide the user with the responses, the average validation and evaluation scores for each labeled set, and an overall measure of the model's fairness.
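
The precise fairness measure LARES reports is not spelled out here. As a rough sketch, averaging the per-response scores within each label and taking the largest gap between groups gives a simple summary of the kind of disparity the package targets; Fairlearn offers group-wise breakdowns along the same lines.

```python
# Sketch only: aggregating per-response scores by label and summarizing the
# spread as a simple fairness gap. LARES's reported fairness measure may differ.
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 1])                       # e.g. 0 = 'fr', 1 = 'es'
eval_scores = np.array([0.71, 0.64, 0.69, 0.55, 0.58, 0.52])  # made-up example values
tox_scores = np.array([0.02, 0.01, 0.03, 0.04, 0.02, 0.05])

for name, scores in [("evaluation", eval_scores), ("toxicity", tox_scores)]:
    group_means = {int(g): scores[labels == g].mean() for g in np.unique(labels)}
    gap = max(group_means.values()) - min(group_means.values())
    print(f"{name}: per-label means {group_means}, gap {gap:.3f}")
```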


## Installation

Requires Python 3.6 or later. You can install using pip via:

```bash
pip install lares
```

## Usage

Here is a basic usage example for a translation task:

```python
# Imports
import openai
from datasets import load_dataset
from lares import generate
import numpy as np

# Set your API key
openai.api_key = ''

# Loader
def load_translation_data(dataset_name, language_pair, num_samples=10):
    # Grab data
    dataset = load_dataset(dataset_name, language_pair)
    data = dataset["validation"]['translation'][:num_samples]

    # Create the prompts
    prompts = [f'Translate to {language_pair.split("-")[1]}: {item["en"]}' for item in data]
    # Get the references (correct translations)
    references = [item[language_pair.split("-")[1]] for item in data]
    # Return prompts and references
    return prompts, references

# Load the translation data
prompts_fr, refs_fr = load_translation_data("opus100", "en-fr")
prompts_es, refs_es = load_translation_data("opus100", "en-es")

# Combine the prompts and references
prompts = prompts_fr + prompts_es
references = refs_fr + refs_es
# Create labels for the data (0 for French, 1 for Spanish)
labels = np.concatenate([np.zeros(len(prompts_fr)), np.ones(len(prompts_es))]).tolist()

# Use the generate function from the LARES module to get the model's metrics for this task
data, bias, acc, tox = generate(prompts, references, labels, max_iterations=1, task_type='Translation', feedback=False)

# Print the results
print(f"Bias: {bias}")
print(f"Accuracy: {acc[0]} (Set 1), {acc[1]} (Set 2)")
print(f"Toxicity: {tox[0]} (Set 1), {tox[1]} (Set 2)")
```

## Dependencies

- openai==0.27.8
- nltk==3.7
- torch==2.0.1
- transformers==4.31.0
- rouge==1.0.1
- bert_score==0.3.12
- datasets==1.11.0

To install them explicitly, run:

```bash
pip install openai==0.27.8 nltk==3.7 torch==2.0.1 transformers==4.31.0 rouge==1.0.1 bert_score==0.3.12 datasets==1.11.0
```

Installing LARES via pip should, however, pull in these underlying dependencies automatically.

            
