# Holistic Evaluation of Language Models (HELM)
<a href="https://github.com/stanford-crfm/helm">
<img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/stanford-crfm/helm">
</a>
<a href="https://github.com/stanford-crfm/helm/graphs/contributors">
<img alt="GitHub contributors" src="https://img.shields.io/github/contributors/stanford-crfm/helm">
</a>
<a href="https://github.com/stanford-crfm/helm/actions/workflows/test.yml?query=branch%3Amain">
<img alt="GitHub Actions Workflow Status" src="https://img.shields.io/github/actions/workflow/status/stanford-crfm/helm/test.yml">
</a>
<a href="https://crfm-helm.readthedocs.io/en/latest/">
<img alt="Documentation Status" src="https://readthedocs.org/projects/helm/badge/?version=latest">
</a>
<a href="https://github.com/stanford-crfm/helm/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/stanford-crfm/helm?color=blue" />
</a>
<a href="https://pypi.org/project/crfm-helm/">
<img alt="PyPI" src="https://img.shields.io/pypi/v/crfm-helm?color=blue" />
</a>
[comment]: <> (When using the img tag, which allows us to specify size, src has to be a URL.)
<img src="https://github.com/stanford-crfm/helm/raw/v0.5.4/helm-frontend/src/assets/helm-logo.png" alt="HELM logo" width="480"/>
**Holistic Evaluation of Language Models (HELM)** is an open-source Python framework created by the [Center for Research on Foundation Models (CRFM) at Stanford](https://crfm.stanford.edu/) for holistic, reproducible, and transparent evaluation of foundation models, including large language models (LLMs) and multimodal models. This framework includes the following features:
- Datasets and benchmarks in a standardized format (e.g. MMLU-Pro, GPQA, IFEval, WildBench)
- Models from various providers accessible through a unified interface (e.g. OpenAI models, Anthropic Claude, Google Gemini)
- Metrics for measuring various aspects beyond accuracy (e.g. efficiency, bias, toxicity)
- Web UI for inspecting individual prompts and responses
- Web leaderboard for comparing results across models and benchmarks
## Documentation
Please refer to [the documentation on Read the Docs](https://crfm-helm.readthedocs.io/) for instructions on how to install and run HELM.
## Quick Start
<!--quick-start-begin-->
Install the package from PyPI:
```sh
pip install crfm-helm
```
Run the following in your shell:
```sh
# Run benchmark
helm-run --run-entries mmlu:subject=philosophy,model=openai/gpt2 --suite my-suite --max-eval-instances 10
# Summarize benchmark results
helm-summarize --suite my-suite
# Start a web server to display benchmark results
helm-server --suite my-suite
```
Then go to http://localhost:8000/ in your browser.
<!--quick-start-end-->
## Leaderboards
We maintain official leaderboards with results from evaluating recent models on notable benchmarks using this framework. Our current flagship leaderboards are:
- [HELM Capabilities](https://crfm.stanford.edu/helm/capabilities/latest/)
- [HELM Safety](https://crfm.stanford.edu/helm/safety/latest/)
- [Holistic Evaluation of Vision-Language Models (VHELM)](https://crfm.stanford.edu/helm/vhelm/latest/)
We also maintain leaderboards for a diverse range of domains (e.g. medicine, finance) and aspects (e.g. multilinguality, world knowledge, regulatory compliance). Refer to the [HELM website](https://crfm.stanford.edu/helm/) for a full list of leaderboards.
## Papers
The HELM framework was used to evaluate models in the following papers:
- **Holistic Evaluation of Language Models** - [paper](https://openreview.net/forum?id=iO4LZibEqW), [leaderboard](https://crfm.stanford.edu/helm/classic/latest/)
- **Holistic Evaluation of Vision-Language Models (VHELM)** - [paper](https://arxiv.org/abs/2410.07112), [leaderboard](https://crfm.stanford.edu/helm/vhelm/latest/), [documentation](https://crfm-helm.readthedocs.io/en/latest/vhelm/)
- **Holistic Evaluation of Text-To-Image Models (HEIM)** - [paper](https://arxiv.org/abs/2311.04287), [leaderboard](https://crfm.stanford.edu/helm/heim/latest/), [documentation](https://crfm-helm.readthedocs.io/en/latest/heim/)
- **Image2Struct: Benchmarking Structure Extraction for Vision-Language Models** - [paper](https://arxiv.org/abs/2410.22456)
- **Enterprise Benchmarks for Large Language Model Evaluation** - [paper](https://arxiv.org/abs/2410.12857), [documentation](https://crfm-helm.readthedocs.io/en/latest/enterprise_benchmark/)
- **The Mighty ToRR: A Benchmark for Table Reasoning and Robustness** - [paper](https://arxiv.org/abs/2502.19412), [leaderboard](https://crfm.stanford.edu/helm/torr/latest/)
- **Reliable and Efficient Amortized Model-based Evaluation** - [paper](https://arxiv.org/abs/2503.13335), [documentation](https://crfm-helm.readthedocs.io/en/latest/reeval/)
- **MedHELM** - paper in progress, [leaderboard](https://crfm.stanford.edu/helm/medhelm/latest/), [documentation](https://crfm-helm.readthedocs.io/en/latest/medhelm/)
The HELM framework can be used to reproduce the published model evaluation results from these papers. To get started, refer to the documentation links above for the corresponding paper, or the [main Reproducing Leaderboards documentation](https://crfm-helm.readthedocs.io/en/latest/reproducing_leaderboards/).
## Citation
If you use this software in your research, please cite the [Holistic Evaluation of Language Models paper](https://openreview.net/forum?id=iO4LZibEqW) as below.
```bibtex
@article{
liang2023holistic,
title={Holistic Evaluation of Language Models},
author={Percy Liang and Rishi Bommasani and Tony Lee and Dimitris Tsipras and Dilara Soylu and Michihiro Yasunaga and Yian Zhang and Deepak Narayanan and Yuhuai Wu and Ananya Kumar and Benjamin Newman and Binhang Yuan and Bobby Yan and Ce Zhang and Christian Alexander Cosgrove and Christopher D Manning and Christopher Re and Diana Acosta-Navas and Drew Arad Hudson and Eric Zelikman and Esin Durmus and Faisal Ladhak and Frieda Rong and Hongyu Ren and Huaxiu Yao and Jue WANG and Keshav Santhanam and Laurel Orr and Lucia Zheng and Mert Yuksekgonul and Mirac Suzgun and Nathan Kim and Neel Guha and Niladri S. Chatterji and Omar Khattab and Peter Henderson and Qian Huang and Ryan Andrew Chi and Sang Michael Xie and Shibani Santurkar and Surya Ganguli and Tatsunori Hashimoto and Thomas Icard and Tianyi Zhang and Vishrav Chaudhary and William Wang and Xuechen Li and Yifan Mai and Yuhui Zhang and Yuta Koreeda},
journal={Transactions on Machine Learning Research},
issn={2835-8856},
year={2023},
url={https://openreview.net/forum?id=iO4LZibEqW},
note={Featured Certification, Expert Certification}
}
```
# Tutorial
This tutorial will explain how to use the HELM command line tools to run benchmarks, aggregate statistics, and visualize results.
We will run two runs using the `mmlu` scenario on the `openai/gpt2` model. The `mmlu` scenario implements the **Massive Multitask Language Understanding (MMLU)** benchmark from [this paper](https://arxiv.org/pdf/2009.03300.pdf) and consists of a Question Answering (QA) task using a dataset with questions from 57 subjects such as elementary mathematics, US history, computer science, and law. Note that GPT-2 performs poorly on MMLU, so this is just a proof of concept. The first run will use questions about anatomy, and the second questions about philosophy.
## Using `helm-run`
`helm-run` is a command line tool for running benchmarks.
To run this benchmark using the HELM command-line tools, we need to specify **run entries** that describe the desired runs. For this example, the run entries are `mmlu:subject=anatomy,model=openai/gpt2` (for anatomy) and `mmlu:subject=philosophy,model=openai/gpt2` (for philosophy).
We will now use `helm-run` to execute the runs. Run this command:
```sh
helm-run --run-entries mmlu:subject=anatomy,model=openai/gpt2 mmlu:subject=philosophy,model=openai/gpt2 --suite my-suite --max-eval-instances 10
```
The arguments are as follows:
- `--run-entries` specifies the run entries for the desired runs.
- `--suite` specifies a subdirectory under the output directory in which all the output will be placed.
- `--max-eval-instances` limits evaluation to only *N* instances (i.e. items) from the benchmark, using a randomly shuffled order of instances.
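The same runs can also be specified in a configuration file and passed to `helm-run` via `--conf-paths`. The sketch below assumes the run-entries conf schema (`description` and `priority` fields); confirm the exact format against the HELM documentation for your version:

```conf
# run_entries.conf (illustrative): one entry per desired run
entries: [
  {description: "mmlu:subject=anatomy,model=openai/gpt2", priority: 1},
  {description: "mmlu:subject=philosophy,model=openai/gpt2", priority: 1},
]
```

With this file in place, `helm-run --conf-paths run_entries.conf --suite my-suite --max-eval-instances 10` is equivalent to listing the entries on the command line.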
`helm-run` creates an environment directory and an output directory by default.
- The environment directory is `prod_env/` by default and can be set using `--local-path`. Credentials for making API calls should be added to a `credentials.conf` file in this directory.
- The output directory is `benchmark_output/` by default and can be set using `--output-path`.
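Credentials use a simple key-value format. A minimal example for OpenAI models might look like the sketch below; the key name `openaiApiKey` follows the HELM installation docs, but treat the exact key names for your providers as something to confirm there:

```conf
# prod_env/credentials.conf (illustrative; replace the placeholder with a real key)
openaiApiKey: sk-...
```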
After running this command, navigate to the `benchmark_output/runs/my-suite/` directory. It should contain two sub-directories named `mmlu:subject=anatomy,model=openai_gpt2` and `mmlu:subject=philosophy,model=openai_gpt2`. Note that the names of these sub-directories are based on the run entries we used earlier, but with `/` replaced with `_`.
Each output sub-directory will contain several JSON files that were generated during the corresponding run:
- `run_spec.json` contains the `RunSpec`, which specifies the scenario, adapter and metrics for the run.
- `scenario.json` contains a serialized `Scenario`, which contains the scenario for the run and specifies the instances (i.e. inputs) used.
- `scenario_state.json` contains a serialized `ScenarioState`, which contains every request to and response from the model.
- `per_instance_stats.json` contains a serialized list of `PerInstanceStats`, which contains the statistics produced for the metrics for each instance (i.e. input).
- `stats.json` contains a serialized list of `Stat`, which contains the statistics produced for the metrics, aggregated across all instances (i.e. inputs).
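As a quick sanity check, the aggregated statistics can be inspected with a few lines of Python. This is a sketch: the field names used here (`name.name` and `mean`) are assumptions about the serialized format, so verify them against an actual `stats.json` from your run:

```python
import json

def metric_means(stats):
    """Map each metric name to its aggregated mean from a stats.json payload."""
    return {s["name"]["name"]: s["mean"] for s in stats}

# Illustrative payload in the assumed shape of stats.json:
sample = json.loads('[{"name": {"name": "exact_match"}, "count": 10, "mean": 0.4}]')
print(metric_means(sample))  # {'exact_match': 0.4}
```

In practice you would replace `sample` with `json.load(open("benchmark_output/runs/my-suite/<run>/stats.json"))`.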
## Using `helm-summarize`
The `helm-summarize` command reads the output files of `helm-run` and computes aggregate statistics across runs. Run the following:
```sh
helm-summarize --suite my-suite
```
This reads the pre-existing files in `benchmark_output/runs/my-suite/` that were written by `helm-run` previously, and writes the following new files back to `benchmark_output/runs/my-suite/`:
- `summary.json` contains a serialized `ExecutiveSummary` with a date and suite name.
- `run_specs.json` contains the run entries for all the runs.
- `runs.json` contains a serialized list of `Run`, which contains the run path, run spec, adapter spec, and statistics for each run.
- `groups.json` contains a serialized list of `Table`, each containing information about groups in a group category.
- `groups_metadata.json` contains a list of all the groups along with a human-readable description and a taxonomy.
Additionally, for each group and group-relevant metric, it will output a pair of files: `benchmark_output/runs/my-suite/groups/latex/<group_name>_<metric_name>.tex` and `benchmark_output/runs/my-suite/groups/json/<group_name>_<metric_name>.json`. These files contain the statistics for that metric from each run within the group.
## Using `helm-server`
Finally, the `helm-server` command launches a web server to visualize the output files of `helm-run` and `helm-summarize`. Run:
```sh
helm-server --suite my-suite
```
Open a browser and go to http://localhost:8000/ to view the visualization. You should see a view similar to the [live website for the paper](https://crfm.stanford.edu/helm/classic/latest/), but based on the data from your own benchmark runs. The website has the following sections accessible from the top menu bar:
- **Leaderboards** contains the leaderboards with aggregate metrics.
- **Models** contains a list of models and their descriptions.
- **Scenarios** contains a list of scenarios and their descriptions.
- **Predictions** contains a searchable list of runs.
]
},
{
"name": "h11",
"specs": [
[
"==",
"0.16.0"
]
]
},
{
"name": "h5py",
"specs": [
[
"==",
"3.14.0"
]
]
},
{
"name": "hf-xet",
"specs": [
[
"==",
"1.1.5"
]
]
},
{
"name": "html2text",
"specs": [
[
"==",
"2024.2.26"
]
]
},
{
"name": "httpcore",
"specs": [
[
"==",
"1.0.9"
]
]
},
{
"name": "httplib2",
"specs": [
[
"==",
"0.22.0"
]
]
},
{
"name": "httpx",
"specs": [
[
"==",
"0.27.2"
]
]
},
{
"name": "httpx-sse",
"specs": [
[
"==",
"0.4.0"
]
]
},
{
"name": "huggingface-hub",
"specs": [
[
"==",
"0.33.4"
]
]
},
{
"name": "humanfriendly",
"specs": [
[
"==",
"10.0"
]
]
},
{
"name": "humanize",
"specs": [
[
"==",
"4.12.3"
]
]
},
{
"name": "icetk",
"specs": [
[
"==",
"0.0.4"
]
]
},
{
"name": "identify",
"specs": [
[
"==",
"2.6.12"
]
]
},
{
"name": "idna",
"specs": [
[
"==",
"3.10"
]
]
},
{
"name": "imagehash",
"specs": [
[
"==",
"4.3.2"
]
]
},
{
"name": "imageio",
"specs": [
[
"==",
"2.37.0"
]
]
},
{
"name": "immutabledict",
"specs": [
[
"==",
"4.2.1"
]
]
},
{
"name": "importlib-metadata",
"specs": [
[
"==",
"8.7.0"
]
]
},
{
"name": "importlib-resources",
"specs": [
[
"==",
"5.13.0"
]
]
},
{
"name": "iniconfig",
"specs": [
[
"==",
"2.1.0"
]
]
},
{
"name": "jax",
"specs": [
[
"==",
"0.4.30"
]
]
},
{
"name": "jax",
"specs": [
[
"==",
"0.6.2"
]
]
},
{
"name": "jaxlib",
"specs": [
[
"==",
"0.4.30"
]
]
},
{
"name": "jaxlib",
"specs": [
[
"==",
"0.6.2"
]
]
},
{
"name": "jieba",
"specs": [
[
"==",
"0.42.1"
]
]
},
{
"name": "jinja2",
"specs": [
[
"==",
"3.1.6"
]
]
},
{
"name": "jiter",
"specs": [
[
"==",
"0.10.0"
]
]
},
{
"name": "jmespath",
"specs": [
[
"==",
"1.0.1"
]
]
},
{
"name": "joblib",
"specs": [
[
"==",
"1.5.1"
]
]
},
{
"name": "jsonpath-python",
"specs": [
[
"==",
"1.0.6"
]
]
},
{
"name": "kagglehub",
"specs": [
[
"==",
"0.3.12"
]
]
},
{
"name": "keras",
"specs": [
[
"==",
"2.11.0"
]
]
},
{
"name": "keras",
"specs": [
[
"==",
"3.10.0"
]
]
},
{
"name": "keras-hub",
"specs": [
[
"==",
"0.18.1"
]
]
},
{
"name": "keras-hub",
"specs": [
[
"==",
"0.21.1"
]
]
},
{
"name": "keras-nlp",
"specs": [
[
"==",
"0.18.1"
]
]
},
{
"name": "keras-nlp",
"specs": [
[
"==",
"0.21.1"
]
]
},
{
"name": "keras-tuner",
"specs": [
[
"==",
"1.4.7"
]
]
},
{
"name": "kiwisolver",
"specs": [
[
"==",
"1.4.7"
]
]
},
{
"name": "kiwisolver",
"specs": [
[
"==",
"1.4.8"
]
]
},
{
"name": "kt-legacy",
"specs": [
[
"==",
"1.0.5"
]
]
},
{
"name": "langcodes",
"specs": [
[
"==",
"3.5.0"
]
]
},
{
"name": "langdetect",
"specs": [
[
"==",
"1.0.9"
]
]
},
{
"name": "language-data",
"specs": [
[
"==",
"1.3.0"
]
]
},
{
"name": "latex",
"specs": [
[
"==",
"0.7.0"
]
]
},
{
"name": "lazy-loader",
"specs": [
[
"==",
"0.4"
]
]
},
{
"name": "levenshtein",
"specs": [
[
"==",
"0.27.1"
]
]
},
{
"name": "libclang",
"specs": [
[
"==",
"18.1.1"
]
]
},
{
"name": "lightning-utilities",
"specs": [
[
"==",
"0.14.3"
]
]
},
{
"name": "llvmlite",
"specs": [
[
"==",
"0.43.0"
]
]
},
{
"name": "llvmlite",
"specs": [
[
"==",
"0.44.0"
]
]
},
{
"name": "logzio-python-handler",
"specs": [
[
"==",
"3.1.1"
]
]
},
{
"name": "lpips",
"specs": [
[
"==",
"0.1.4"
]
]
},
{
"name": "lxml",
"specs": [
[
"==",
"6.0.0"
]
]
},
{
"name": "mako",
"specs": [
[
"==",
"1.3.10"
]
]
},
{
"name": "marisa-trie",
"specs": [
[
"==",
"1.2.1"
]
]
},
{
"name": "markdown",
"specs": [
[
"==",
"3.8.2"
]
]
},
{
"name": "markdown-it-py",
"specs": [
[
"==",
"3.0.0"
]
]
},
{
"name": "markupsafe",
"specs": [
[
"==",
"3.0.2"
]
]
},
{
"name": "matplotlib",
"specs": [
[
"==",
"3.9.4"
]
]
},
{
"name": "matplotlib",
"specs": [
[
"==",
"3.10.3"
]
]
},
{
"name": "mccabe",
"specs": [
[
"==",
"0.7.0"
]
]
},
{
"name": "mdurl",
"specs": [
[
"==",
"0.1.2"
]
]
},
{
"name": "mistralai",
"specs": [
[
"==",
"1.5.2"
]
]
},
{
"name": "ml-dtypes",
"specs": [
[
"==",
"0.5.1"
]
]
},
{
"name": "mpmath",
"specs": [
[
"==",
"1.3.0"
]
]
},
{
"name": "msgpack",
"specs": [
[
"==",
"1.1.1"
]
]
},
{
"name": "msgspec",
"specs": [
[
"==",
"0.19.0"
]
]
},
{
"name": "multidict",
"specs": [
[
"==",
"6.6.3"
]
]
},
{
"name": "multilingual-clip",
"specs": [
[
"==",
"1.0.10"
]
]
},
{
"name": "multiprocess",
"specs": [
[
"==",
"0.70.16"
]
]
},
{
"name": "murmurhash",
"specs": [
[
"==",
"1.0.13"
]
]
},
{
"name": "mypy",
"specs": [
[
"==",
"1.16.0"
]
]
},
{
"name": "mypy-extensions",
"specs": [
[
"==",
"1.1.0"
]
]
},
{
"name": "namex",
"specs": [
[
"==",
"0.1.0"
]
]
},
{
"name": "necessary",
"specs": [
[
"==",
"0.4.3"
]
]
},
{
"name": "nest-asyncio",
"specs": [
[
"==",
"1.6.0"
]
]
},
{
"name": "networkx",
"specs": [
[
"==",
"3.2.1"
]
]
},
{
"name": "networkx",
"specs": [
[
"==",
"3.4.2"
]
]
},
{
"name": "networkx",
"specs": [
[
"==",
"3.5"
]
]
},
{
"name": "nltk",
"specs": [
[
"==",
"3.9.1"
]
]
},
{
"name": "nodeenv",
"specs": [
[
"==",
"1.9.1"
]
]
},
{
"name": "nudenet",
"specs": [
[
"==",
"2.0.9"
]
]
},
{
"name": "numba",
"specs": [
[
"==",
"0.60.0"
]
]
},
{
"name": "numba",
"specs": [
[
"==",
"0.61.2"
]
]
},
{
"name": "numpy",
"specs": [
[
"==",
"1.26.4"
]
]
},
{
"name": "numpy",
"specs": [
[
"==",
"2.2.6"
]
]
},
{
"name": "nvidia-cublas-cu12",
"specs": [
[
"==",
"12.4.5.8"
]
]
},
{
"name": "nvidia-cuda-cupti-cu12",
"specs": [
[
"==",
"12.4.127"
]
]
},
{
"name": "nvidia-cuda-nvrtc-cu12",
"specs": [
[
"==",
"12.4.127"
]
]
},
{
"name": "nvidia-cuda-runtime-cu12",
"specs": [
[
"==",
"12.4.127"
]
]
},
{
"name": "nvidia-cudnn-cu12",
"specs": [
[
"==",
"9.1.0.70"
]
]
},
{
"name": "nvidia-cufft-cu12",
"specs": [
[
"==",
"11.2.1.3"
]
]
},
{
"name": "nvidia-curand-cu12",
"specs": [
[
"==",
"10.3.5.147"
]
]
},
{
"name": "nvidia-cusolver-cu12",
"specs": [
[
"==",
"11.6.1.9"
]
]
},
{
"name": "nvidia-cusparse-cu12",
"specs": [
[
"==",
"12.3.1.170"
]
]
},
{
"name": "nvidia-nccl-cu12",
"specs": [
[
"==",
"2.21.5"
]
]
},
{
"name": "nvidia-nvjitlink-cu12",
"specs": [
[
"==",
"12.4.127"
]
]
},
{
"name": "nvidia-nvtx-cu12",
"specs": [
[
"==",
"12.4.127"
]
]
},
{
"name": "oauthlib",
"specs": [
[
"==",
"3.3.1"
]
]
},
{
"name": "omegaconf",
"specs": [
[
"==",
"2.3.0"
]
]
},
{
"name": "onnxruntime",
"specs": [
[
"==",
"1.19.2"
]
]
},
{
"name": "onnxruntime",
"specs": [
[
"==",
"1.22.1"
]
]
},
{
"name": "open-clip-torch",
"specs": [
[
"==",
"2.32.0"
]
]
},
{
"name": "openai",
"specs": [
[
"==",
"1.97.0"
]
]
},
{
"name": "opencc",
"specs": [
[
"==",
"1.1.9"
]
]
},
{
"name": "opencv-python",
"specs": [
[
"==",
"4.8.1.78"
]
]
},
{
"name": "opencv-python-headless",
"specs": [
[
"==",
"4.11.0.86"
]
]
},
{
"name": "openpyxl",
"specs": [
[
"==",
"3.1.5"
]
]
},
{
"name": "opt-einsum",
"specs": [
[
"==",
"3.4.0"
]
]
},
{
"name": "optax",
"specs": [
[
"==",
"0.2.4"
]
]
},
{
"name": "optax",
"specs": [
[
"==",
"0.2.5"
]
]
},
{
"name": "optree",
"specs": [
[
"==",
"0.16.0"
]
]
},
{
"name": "orbax-checkpoint",
"specs": [
[
"==",
"0.6.4"
]
]
},
{
"name": "orbax-checkpoint",
"specs": [
[
"==",
"0.11.5"
]
]
},
{
"name": "outcome",
"specs": [
[
"==",
"1.3.0.post0"
]
]
},
{
"name": "packaging",
"specs": [
[
"==",
"25.0"
]
]
},
{
"name": "pandas",
"specs": [
[
"==",
"2.3.1"
]
]
},
{
"name": "parameterized",
"specs": [
[
"==",
"0.9.0"
]
]
},
{
"name": "pathspec",
"specs": [
[
"==",
"0.12.1"
]
]
},
{
"name": "pdf2image",
"specs": [
[
"==",
"1.17.0"
]
]
},
{
"name": "petname",
"specs": [
[
"==",
"2.6"
]
]
},
{
"name": "pillow",
"specs": [
[
"==",
"10.4.0"
]
]
},
{
"name": "platformdirs",
"specs": [
[
"==",
"4.3.8"
]
]
},
{
"name": "pluggy",
"specs": [
[
"==",
"1.6.0"
]
]
},
{
"name": "portalocker",
"specs": [
[
"==",
"3.2.0"
]
]
},
{
"name": "pre-commit",
"specs": [
[
"==",
"2.20.0"
]
]
},
{
"name": "preshed",
"specs": [
[
"==",
"3.0.10"
]
]
},
{
"name": "progressbar2",
"specs": [
[
"==",
"4.5.0"
]
]
},
{
"name": "propcache",
"specs": [
[
"==",
"0.3.2"
]
]
},
{
"name": "proto-plus",
"specs": [
[
"==",
"1.26.1"
]
]
},
{
"name": "protobuf",
"specs": [
[
"==",
"3.19.6"
]
]
},
{
"name": "protobuf",
"specs": [
[
"==",
"5.29.5"
]
]
},
{
"name": "psutil",
"specs": [
[
"==",
"7.0.0"
]
]
},
{
"name": "pyarrow",
"specs": [
[
"==",
"21.0.0"
]
]
},
{
"name": "pyarrow-hotfix",
"specs": [
[
"==",
"0.7"
]
]
},
{
"name": "pyasn1",
"specs": [
[
"==",
"0.6.1"
]
]
},
{
"name": "pyasn1-modules",
"specs": [
[
"==",
"0.4.2"
]
]
},
{
"name": "pycares",
"specs": [
[
"==",
"4.9.0"
]
]
},
{
"name": "pycocoevalcap",
"specs": [
[
"==",
"1.2"
]
]
},
{
"name": "pycocotools",
"specs": [
[
"==",
"2.0.10"
]
]
},
{
"name": "pycodestyle",
"specs": [
[
"==",
"2.9.1"
]
]
},
{
"name": "pycparser",
"specs": [
[
"==",
"2.22"
]
]
},
{
"name": "pydantic",
"specs": [
[
"==",
"2.11.7"
]
]
},
{
"name": "pydantic-core",
"specs": [
[
"==",
"2.33.2"
]
]
},
{
"name": "pydload",
"specs": [
[
"==",
"1.0.9"
]
]
},
{
"name": "pyflakes",
"specs": [
[
"==",
"2.5.0"
]
]
},
{
"name": "pygments",
"specs": [
[
"==",
"2.19.2"
]
]
},
{
"name": "pyhocon",
"specs": [
[
"==",
"0.3.61"
]
]
},
{
"name": "pymongo",
"specs": [
[
"==",
"4.13.2"
]
]
},
{
"name": "pyparsing",
"specs": [
[
"==",
"3.2.3"
]
]
},
{
"name": "pypinyin",
"specs": [
[
"==",
"0.49.0"
]
]
},
{
"name": "pyreadline3",
"specs": [
[
"==",
"3.5.4"
]
]
},
{
"name": "pysocks",
"specs": [
[
"==",
"1.7.1"
]
]
},
{
"name": "pytest",
"specs": [
[
"==",
"7.2.2"
]
]
},
{
"name": "python-dateutil",
"specs": [
[
"==",
"2.8.2"
]
]
},
{
"name": "python-utils",
"specs": [
[
"==",
"3.9.1"
]
]
},
{
"name": "pytorch-fid",
"specs": [
[
"==",
"0.3.0"
]
]
},
{
"name": "pytorch-lightning",
"specs": [
[
"==",
"2.5.2"
]
]
},
{
"name": "pytz",
"specs": [
[
"==",
"2025.2"
]
]
},
{
"name": "pywavelets",
"specs": [
[
"==",
"1.6.0"
]
]
},
{
"name": "pywavelets",
"specs": [
[
"==",
"1.8.0"
]
]
},
{
"name": "pywin32",
"specs": [
[
"==",
"311"
]
]
},
{
"name": "pyyaml",
"specs": [
[
"==",
"6.0.2"
]
]
},
{
"name": "qwen-vl-utils",
"specs": [
[
"==",
"0.0.11"
]
]
},
{
"name": "rapidfuzz",
"specs": [
[
"==",
"3.13.0"
]
]
},
{
"name": "regex",
"specs": [
[
"==",
"2024.11.6"
]
]
},
{
"name": "reka-api",
"specs": [
[
"==",
"2.0.0"
]
]
},
{
"name": "requests",
"specs": [
[
"==",
"2.32.4"
]
]
},
{
"name": "requests-oauthlib",
"specs": [
[
"==",
"2.0.0"
]
]
},
{
"name": "requirements-parser",
"specs": [
[
"==",
"0.13.0"
]
]
},
{
"name": "retrying",
"specs": [
[
"==",
"1.4.1"
]
]
},
{
"name": "rich",
"specs": [
[
"==",
"13.9.4"
]
]
},
{
"name": "rouge-score",
"specs": [
[
"==",
"0.1.2"
]
]
},
{
"name": "rsa",
"specs": [
[
"==",
"4.7.2"
]
]
},
{
"name": "s3transfer",
"specs": [
[
"==",
"0.13.1"
]
]
},
{
"name": "sacrebleu",
"specs": [
[
"==",
"2.5.1"
]
]
},
{
"name": "safetensors",
"specs": [
[
"==",
"0.5.3"
]
]
},
{
"name": "scaleapi",
"specs": [
[
"==",
"2.17.0"
]
]
},
{
"name": "scikit-image",
"specs": [
[
"==",
"0.24.0"
]
]
},
{
"name": "scikit-image",
"specs": [
[
"==",
"0.25.2"
]
]
},
{
"name": "scikit-learn",
"specs": [
[
"==",
"1.6.1"
]
]
},
{
"name": "scikit-learn",
"specs": [
[
"==",
"1.7.1"
]
]
},
{
"name": "scipy",
"specs": [
[
"==",
"1.13.1"
]
]
},
{
"name": "scipy",
"specs": [
[
"==",
"1.15.3"
]
]
},
{
"name": "scipy",
"specs": [
[
"==",
"1.16.0"
]
]
},
{
"name": "seaborn",
"specs": [
[
"==",
"0.13.2"
]
]
},
{
"name": "selenium",
"specs": [
[
"==",
"4.32.0"
]
]
},
{
"name": "selenium",
"specs": [
[
"==",
"4.34.2"
]
]
},
{
"name": "sentence-transformers",
"specs": [
[
"==",
"4.1.0"
]
]
},
{
"name": "sentencepiece",
"specs": [
[
"==",
"0.2.0"
]
]
},
{
"name": "sentry-sdk",
"specs": [
[
"==",
"2.33.1"
]
]
},
{
"name": "setuptools",
"specs": [
[
"==",
"80.9.0"
]
]
},
{
"name": "shapely",
"specs": [
[
"==",
"2.0.7"
]
]
},
{
"name": "shapely",
"specs": [
[
"==",
"2.1.1"
]
]
},
{
"name": "shellingham",
"specs": [
[
"==",
"1.5.4"
]
]
},
{
"name": "shutilwhich",
"specs": [
[
"==",
"1.1.0"
]
]
},
{
"name": "simple-slurm",
"specs": [
[
"==",
"0.2.7"
]
]
},
{
"name": "simplejson",
"specs": [
[
"==",
"3.20.1"
]
]
},
{
"name": "six",
"specs": [
[
"==",
"1.17.0"
]
]
},
{
"name": "smart-open",
"specs": [
[
"==",
"7.3.0.post1"
]
]
},
{
"name": "smashed",
"specs": [
[
"==",
"0.21.5"
]
]
},
{
"name": "smmap",
"specs": [
[
"==",
"5.0.2"
]
]
},
{
"name": "sniffio",
"specs": [
[
"==",
"1.3.1"
]
]
},
{
"name": "sortedcontainers",
"specs": [
[
"==",
"2.4.0"
]
]
},
{
"name": "soupsieve",
"specs": [
[
"==",
"2.7"
]
]
},
{
"name": "spacy",
"specs": [
[
"==",
"3.8.7"
]
]
},
{
"name": "spacy-legacy",
"specs": [
[
"==",
"3.0.12"
]
]
},
{
"name": "spacy-loggers",
"specs": [
[
"==",
"1.0.5"
]
]
},
{
"name": "sqlitedict",
"specs": [
[
"==",
"2.1.0"
]
]
},
{
"name": "srsly",
"specs": [
[
"==",
"2.5.1"
]
]
},
{
"name": "surge-api",
"specs": [
[
"==",
"1.5.10"
]
]
},
{
"name": "sympy",
"specs": [
[
"==",
"1.13.1"
]
]
},
{
"name": "tabulate",
"specs": [
[
"==",
"0.9.0"
]
]
},
{
"name": "tempdir",
"specs": [
[
"==",
"0.7.1"
]
]
},
{
"name": "tensorboard",
"specs": [
[
"==",
"2.11.2"
]
]
},
{
"name": "tensorboard",
"specs": [
[
"==",
"2.18.0"
]
]
},
{
"name": "tensorboard-data-server",
"specs": [
[
"==",
"0.6.1"
]
]
},
{
"name": "tensorboard-data-server",
"specs": [
[
"==",
"0.7.2"
]
]
},
{
"name": "tensorboard-plugin-wit",
"specs": [
[
"==",
"1.8.1"
]
]
},
{
"name": "tensorflow",
"specs": [
[
"==",
"2.11.1"
]
]
},
{
"name": "tensorflow",
"specs": [
[
"==",
"2.18.1"
]
]
},
{
"name": "tensorflow-estimator",
"specs": [
[
"==",
"2.11.0"
]
]
},
{
"name": "tensorflow-hub",
"specs": [
[
"==",
"0.16.1"
]
]
},
{
"name": "tensorflow-io-gcs-filesystem",
"specs": [
[
"==",
"0.37.1"
]
]
},
{
"name": "tensorflow-text",
"specs": [
[
"==",
"2.11.0"
]
]
},
{
"name": "tensorflow-text",
"specs": [
[
"==",
"2.18.1"
]
]
},
{
"name": "tensorstore",
"specs": [
[
"==",
"0.1.69"
]
]
},
{
"name": "tensorstore",
"specs": [
[
"==",
"0.1.74"
]
]
},
{
"name": "termcolor",
"specs": [
[
"==",
"3.1.0"
]
]
},
{
"name": "tf-keras",
"specs": [
[
"==",
"2.15.0"
]
]
},
{
"name": "thinc",
"specs": [
[
"==",
"8.3.4"
]
]
},
{
"name": "threadpoolctl",
"specs": [
[
"==",
"3.6.0"
]
]
},
{
"name": "tifffile",
"specs": [
[
"==",
"2024.8.30"
]
]
},
{
"name": "tifffile",
"specs": [
[
"==",
"2025.5.10"
]
]
},
{
"name": "tifffile",
"specs": [
[
"==",
"2025.6.11"
]
]
},
{
"name": "tiktoken",
"specs": [
[
"==",
"0.9.0"
]
]
},
{
"name": "timm",
"specs": [
[
"==",
"0.6.13"
]
]
},
{
"name": "together",
"specs": [
[
"==",
"1.3.14"
]
]
},
{
"name": "tokenizers",
"specs": [
[
"==",
"0.21.2"
]
]
},
{
"name": "toml",
"specs": [
[
"==",
"0.10.2"
]
]
},
{
"name": "tomli",
"specs": [
[
"==",
"2.2.1"
]
]
},
{
"name": "toolz",
"specs": [
[
"==",
"1.0.0"
]
]
},
{
"name": "torch",
"specs": [
[
"==",
"2.5.1"
]
]
},
{
"name": "torch-fidelity",
"specs": [
[
"==",
"0.3.0"
]
]
},
{
"name": "torchmetrics",
"specs": [
[
"==",
"0.11.4"
]
]
},
{
"name": "torchvision",
"specs": [
[
"==",
"0.20.1"
]
]
},
{
"name": "tqdm",
"specs": [
[
"==",
"4.67.1"
]
]
},
{
"name": "transformers",
"specs": [
[
"==",
"4.52.4"
]
]
},
{
"name": "transformers-stream-generator",
"specs": [
[
"==",
"0.0.5"
]
]
},
{
"name": "treescope",
"specs": [
[
"==",
"0.1.9"
]
]
},
{
"name": "trio",
"specs": [
[
"==",
"0.30.0"
]
]
},
{
"name": "trio-websocket",
"specs": [
[
"==",
"0.12.2"
]
]
},
{
"name": "triton",
"specs": [
[
"==",
"3.1.0"
]
]
},
{
"name": "trouting",
"specs": [
[
"==",
"0.3.3"
]
]
},
{
"name": "typer",
"specs": [
[
"==",
"0.15.3"
]
]
},
{
"name": "types-requests",
"specs": [
[
"==",
"2.31.0.6"
]
]
},
{
"name": "types-requests",
"specs": [
[
"==",
"2.32.4.20250611"
]
]
},
{
"name": "types-urllib3",
"specs": [
[
"==",
"1.26.25.14"
]
]
},
{
"name": "typing-extensions",
"specs": [
[
"==",
"4.14.1"
]
]
},
{
"name": "typing-inspect",
"specs": [
[
"==",
"0.9.0"
]
]
},
{
"name": "typing-inspection",
"specs": [
[
"==",
"0.4.1"
]
]
},
{
"name": "tzdata",
"specs": [
[
"==",
"2025.2"
]
]
},
{
"name": "uncertainty-calibration",
"specs": [
[
"==",
"0.1.4"
]
]
},
{
"name": "unidecode",
"specs": [
[
"==",
"1.4.0"
]
]
},
{
"name": "uritemplate",
"specs": [
[
"==",
"4.2.0"
]
]
},
{
"name": "urllib3",
"specs": [
[
"==",
"1.26.20"
]
]
},
{
"name": "urllib3",
"specs": [
[
"==",
"2.5.0"
]
]
},
{
"name": "virtualenv",
"specs": [
[
"==",
"20.32.0"
]
]
},
{
"name": "wandb",
"specs": [
[
"==",
"0.21.0"
]
]
},
{
"name": "wasabi",
"specs": [
[
"==",
"1.1.3"
]
]
},
{
"name": "wcwidth",
"specs": [
[
"==",
"0.2.13"
]
]
},
{
"name": "weasel",
"specs": [
[
"==",
"0.4.1"
]
]
},
{
"name": "websocket-client",
"specs": [
[
"==",
"1.8.0"
]
]
},
{
"name": "websockets",
"specs": [
[
"==",
"12.0"
]
]
},
{
"name": "websockets",
"specs": [
[
"==",
"14.2"
]
]
},
{
"name": "werkzeug",
"specs": [
[
"==",
"3.1.3"
]
]
},
{
"name": "wheel",
"specs": [
[
"==",
"0.45.1"
]
]
},
{
"name": "wrapt",
"specs": [
[
"==",
"1.17.2"
]
]
},
{
"name": "writer-sdk",
"specs": [
[
"==",
"2.2.1"
]
]
},
{
"name": "writerai",
"specs": [
[
"==",
"4.0.1"
]
]
},
{
"name": "wsproto",
"specs": [
[
"==",
"1.2.0"
]
]
},
{
"name": "xdoctest",
"specs": [
[
"==",
"1.2.0"
]
]
},
{
"name": "xlrd",
"specs": [
[
"==",
"2.0.2"
]
]
},
{
"name": "xxhash",
"specs": [
[
"==",
"3.5.0"
]
]
},
{
"name": "yarl",
"specs": [
[
"==",
"1.20.1"
]
]
},
{
"name": "zipp",
"specs": [
[
"==",
"3.23.0"
]
]
},
{
"name": "zstandard",
"specs": [
[
"==",
"0.18.0"
]
]
}
],
"lcname": "crfm-helm"
}