crfm-helm


Name: crfm-helm
Version: 0.5.0
Home page: https://github.com/stanford-crfm/helm
Summary: Benchmark for language models
Upload time: 2024-04-23 21:36:23
Maintainer: None
Docs URL: None
Author: Stanford CRFM
Requires Python: <3.11,>=3.8
License: Apache License 2.0
Keywords: language models benchmarking
<!--intro-start-->

# Holistic Evaluation of Language Models

[comment]: <> (When using the img tag, which allows us to specify size, src has to be a URL.)
<img src="https://github.com/stanford-crfm/helm/raw/main/src/helm/benchmark/static/images/helm-logo.png" alt=""  width="800"/>

Welcome! The **`crfm-helm`** Python package contains code used in the **Holistic Evaluation of Language Models** project ([paper](https://arxiv.org/abs/2211.09110), [website](https://crfm.stanford.edu/helm/latest/)) by [Stanford CRFM](https://crfm.stanford.edu/). This package includes the following features:

- Collection of datasets in a standard format (e.g., NaturalQuestions)
- Collection of models accessible via a unified API (e.g., GPT-3, MT-NLG, OPT, BLOOM)
- Collection of metrics beyond accuracy (efficiency, bias, toxicity, etc.)
- Collection of perturbations for evaluating robustness and fairness (e.g., typos, dialect)
- Modular framework for constructing prompts from datasets
- Proxy server for managing accounts and providing a unified interface to access models
<!--intro-end-->

To get started, refer to [the documentation on Read the Docs](https://crfm-helm.readthedocs.io/) for how to install and run the package.
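
If you just want the released package, you can install it from PyPI with `pip`; see the documentation above for other installation options:

```
pip install crfm-helm   # this release supports Python >=3.8 and <3.11
```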

## Directory Structure

The directory structure for this repo is as follows:

```
├── docs # MD used to generate readthedocs
│
├── scripts # Python utility scripts for HELM
│   ├── cache
│   ├── data_overlap # Calculate train test overlap
│   │   ├── common
│   │   ├── scenarios
│   │   └── test
│   ├── efficiency
│   ├── fact_completion
│   ├── offline_eval
│   └── scale
└── src
    ├── helm # Benchmarking Scripts for HELM
    │   ├── benchmark # Main Python code for running HELM
    │   │   ├── static # Current JS (Jquery) code for rendering front-end
    │   │   └── ...
    │   ├── common # Additional Python code for running HELM
    │   └── proxy # Python code for external web requests
    └── helm-frontend # New React Front-end
```

# Holistic Evaluation of Text-To-Image Models

<img src="https://github.com/stanford-crfm/helm/raw/heim/src/helm/benchmark/static/heim/images/heim-logo.png" alt=""  width="800"/>

Significant effort has recently been made in developing text-to-image generation models, which take textual prompts as 
input and generate images. As these models are widely used in real-world applications, there is an urgent need to 
comprehensively understand their capabilities and risks. However, existing evaluations primarily focus on image-text 
alignment and image quality. To address this limitation, we introduce a new benchmark, 
**Holistic Evaluation of Text-To-Image Models (HEIM)**.

We identify 12 aspects that are important in real-world model deployment:

- image-text alignment
- image quality
- aesthetics
- originality
- reasoning
- knowledge
- bias
- toxicity
- fairness
- robustness
- multilinguality
- efficiency

By curating scenarios encompassing these aspects, we evaluate state-of-the-art text-to-image models using this benchmark. 
Unlike previous evaluations that focused on alignment and quality, HEIM significantly improves coverage by evaluating all 
models across all aspects. Our results reveal that no single model excels in all aspects, with different models 
demonstrating strengths in different aspects.

This repository contains the code used to produce the [results on the website](https://crfm.stanford.edu/heim/latest/) 
and [paper](https://arxiv.org/abs/2311.04287).

# Tutorial

This tutorial will explain how to use the HELM command line tools to run benchmarks, aggregate statistics, and visualize results.

We will run the `mmlu` scenario on the `openai/gpt2` model. The `mmlu` scenario implements the **Massive Multitask Language Understanding (MMLU)** benchmark from [this paper](https://arxiv.org/pdf/2009.03300.pdf) and consists of a Question Answering (QA) task over a dataset with questions from 57 subjects such as elementary mathematics, US history, computer science, law, and more. Note that GPT-2 performs poorly on MMLU, so this is just a proof of concept. We will do two runs: the first using questions about anatomy, and the second using questions about philosophy.

## Using `helm-run`

`helm-run` is a command line tool for running benchmarks.

To run this benchmark using the HELM command-line tools, we need to specify **run spec descriptions** that describe the desired runs. For this example, the run spec descriptions are `mmlu:subject=anatomy,model=openai/gpt2` (for anatomy) and `mmlu:subject=philosophy,model=openai/gpt2` (for philosophy).

Next, we need to create a **run spec configuration file** containing these run spec descriptions. A run spec configuration file is a text file containing `RunEntries` serialized to JSON; the `description` field of each entry should be a **run spec description**. Create a text file named `run_entries.conf` with the following contents:

```
entries: [
  {description: "mmlu:subject=anatomy,model=openai/gpt2", priority: 1},
  {description: "mmlu:subject=philosophy,model=openai/gpt2", priority: 1},
]
```

We will now use `helm-run` to execute the runs that have been specified in this run spec configuration file. Run this command:

```
helm-run --conf-paths run_entries.conf --suite v1 --max-eval-instances 10
```

The meanings of the additional arguments are as follows:

- `--suite` specifies a subdirectory under the output directory in which all the output will be placed.
- `--max-eval-instances` limits evaluation to only the first *N* inputs (i.e. instances) from the benchmark.

`helm-run` creates an environment directory and an output directory by default.

-  The environment directory is `prod_env/` by default and can be set using `--local-path`. Credentials for making API calls should be added to a `credentials.conf` file in this directory (a sketch is shown after this list).
-  The output directory is `benchmark_output/` by default and can be set using `--output-path`.
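
For reference, a minimal sketch of what `prod_env/credentials.conf` might look like for hosted model APIs is shown below; the key names (e.g. `openaiApiKey`) are illustrative, so check the HELM documentation for the exact names each provider expects. Locally-run open models generally do not need credentials.

```
# prod_env/credentials.conf (illustrative sketch; exact key names vary by provider)
openaiApiKey: your-openai-api-key
anthropicApiKey: your-anthropic-api-key
```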

After running this command, navigate to the `benchmark_output/runs/v1/` directory. This should contain two sub-directories named `mmlu:subject=anatomy,model=openai_gpt2` and `mmlu:subject=philosophy,model=openai_gpt2`. Note that the names of these sub-directories are based on the run spec descriptions we used earlier, but with `/` replaced by `_`.
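
For example, listing the suite directory should show these two run directories (`helm-run` may create other files alongside them):

```
ls benchmark_output/runs/v1/
# mmlu:subject=anatomy,model=openai_gpt2
# mmlu:subject=philosophy,model=openai_gpt2
```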

Each output sub-directory will contain several JSON files that were generated during the corresponding run (a quick way to inspect them from the shell is sketched after this list):

- `run_spec.json` contains the `RunSpec`, which specifies the scenario, adapter and metrics for the run.
- `scenario.json` contains a serialized `Scenario`, which contains the scenario for the run and specifies the instances (i.e. inputs) used.
- `scenario_state.json` contains a serialized `ScenarioState`, which contains every request to and response from the model.
- `per_instance_stats.json` contains a serialized list of `PerInstanceStats`, which contains the statistics produced for the metrics for each instance (i.e. input).
- `stats.json` contains a serialized list of `Stat`, which contains the statistics produced for the metrics, aggregated across all instances (i.e. inputs).
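
Because these are plain JSON files, they can be inspected directly from the shell. A minimal sketch, assuming `jq` is installed:

```
cd "benchmark_output/runs/v1/mmlu:subject=anatomy,model=openai_gpt2"

# per_instance_stats.json is a serialized list, so `length` reports the number of records
jq length per_instance_stats.json

# pretty-print the beginning of the run spec
python -m json.tool run_spec.json | head
```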

`helm-run` provides additional arguments that can be used to filter which entries are run, such as `--models-to-run`, `--groups-to-run` and `--priority`. It can be convenient to create a large `run_entries.conf` file containing every run spec description of interest, and then use these flags to filter down the `RunSpec`s that are actually run. As an example, the main `run_specs.conf` file used for the HELM benchmarking paper can be found [here](https://github.com/stanford-crfm/helm/blob/main/src/helm/benchmark/presentation/run_specs.conf).
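
For example, given a large configuration file, these flags can be combined with the earlier invocation to restrict which entries actually run (a sketch; see the documentation for the exact semantics of `--priority`):

```
helm-run --conf-paths run_entries.conf --suite v1 --max-eval-instances 10 \
    --models-to-run openai/gpt2 --priority 1
```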

**Using model or model_deployment:** Some models have several deployments (for example, `eleutherai/gpt-j-6b` is deployed under `huggingface/gpt-j-6b`, `gooseai/gpt-j-6b` and `together/gpt-j-6b`). Since results can differ depending on the deployment, we provide a way to specify the deployment instead of the model: instead of `model=eleutherai/gpt-j-6b`, use `model_deployment=huggingface/gpt-j-6b`. If you do not, a deployment will be chosen arbitrarily. Specifying the deployment also works for models that have a single deployment, and is good practice to avoid any ambiguity.
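
For example, a run entry that pins GPT-J to its Hugging Face deployment would look like this:

```
entries: [
  {description: "mmlu:subject=anatomy,model_deployment=huggingface/gpt-j-6b", priority: 1},
]
```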

## Using `helm-summarize`

The `helm-summarize` command reads the output files of `helm-run` and computes aggregate statistics across runs. Run the following:

```
helm-summarize --suite v1
```

This reads the pre-existing files in `benchmark_output/runs/v1/` that were written by `helm-run` previously, and writes the following new files back to `benchmark_output/runs/v1/`:

- `summary.json` contains a serialized `ExecutiveSummary` with a date and suite name.
- `run_specs.json` contains the run spec descriptions for all the runs.
- `runs.json` contains a serialized list of `Run`, which contains the run path, run spec, adapter spec, and statistics for each run.
- `groups.json` contains a serialized list of `Table`, each containing information about groups in a group category.
- `groups_metadata.json` contains a list of all the groups along with a human-readable description and a taxonomy.

Additionally, for each group and each group-relevant metric, it will output a pair of files: `benchmark_output/runs/v1/groups/latex/<group_name>_<metric_name>.tex` and `benchmark_output/runs/v1/groups/json/<group_name>_<metric_name>.json`. These files contain the statistics for that metric from each run within the group.
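
For example, the per-group tables produced by `helm-summarize` can be listed directly:

```
ls benchmark_output/runs/v1/groups/json/
ls benchmark_output/runs/v1/groups/latex/
```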

<!--
# TODO(#1441): Enable plots

## Using `helm-create-plots`

The `helm-create-plots` command reads the `groups` directory created by `helm-summarize` and creates plots equivalent to those used in the HELM paper. Run the following:

```
helm-create-plots --suite v1
```

This reads the pre-existing files in `benchmark_output/runs/v1/groups` that were written by `helm-summarize` previously,
and creates plots (`.png` or `.pdf`) at `benchmark_output/runs/v1/plots`.

-->

## Using `helm-server`

Finally, the `helm-server` command launches a web server to visualize the output files of `helm-run` and `helm-summarize`. Run:

```
helm-server
```

Open a browser and go to http://localhost:8000/ to view the visualization. You should see a view similar to the [live website for the paper](https://crfm.stanford.edu/helm/v1.0/), but with the data from your benchmark runs. The website has four main sections:

- **Models** contains a list of available models.
- **Scenarios** contains a list of available scenarios.
- **Results** contains results from the runs, organized into groups and categories of groups.
- **Raw Runs** contains a searchable list of runs.

## Other Tips

- The suite name can be used as a versioning mechanism to separate runs using different versions of scenarios or models.
- Tools such as [`jq`](https://stedolan.github.io/jq/) are useful for examining the JSON output files on the command line (see the example below).
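
For example, `jq` can pretty-print the executive summary written by `helm-summarize`:

```
jq . benchmark_output/runs/v1/summary.json
```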

            
