| Field | Value |
| - | - |
| Name | swebench |
| Version | 3.0.15 |
| home_page | https://swebench.com |
| Summary | The official SWE-bench package - a benchmark for evaluating LMs on software engineering |
| upload_time | 2025-03-02 23:50:15 |
| maintainer | None |
| docs_url | None |
| author | John Yang |
| requires_python | >=3.8 |
| license | None |
| keywords | nlp, benchmark, code |
| VCS | GitHub |
| bugtrack_url | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
<p align="center">
<a href="http://swe-bench.github.io">
<img src="assets/figures/swellama_banner.svg" style="height: 10em" alt="Kawi the SWE-Llama" />
</a>
</p>
<div align="center">
| [日本語](docs/README_JP.md) | [English](https://github.com/swe-bench/SWE-bench) | [中文简体](docs/README_CN.md) | [中文繁體](docs/README_TW.md) |
</div>
<p align="center">
<a href="https://www.python.org/">
<img alt="Build" src="https://img.shields.io/badge/Python-3.8+-1f425f.svg?color=purple">
</a>
<a href="https://copyright.princeton.edu/policy">
<img alt="License" src="https://img.shields.io/badge/License-MIT-blue">
</a>
<a href="https://badge.fury.io/py/swebench">
<img src="https://badge.fury.io/py/swebench.svg">
</a>
</p>
---
Code and data for the following works:
* [ICLR 2025] <a href="https://arxiv.org/abs/2410.03859">SWE-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?</a>
* [ICLR 2024 Oral] <a href="https://arxiv.org/abs/2310.06770">SWE-bench: Can Language Models Resolve Real-World GitHub Issues?</a>
## 📰 News
* **[Jan. 13, 2025]**: We've integrated [SWE-bench Multimodal](https://swebench.github.io/multimodal) ([paper](https://arxiv.org/abs/2410.03859), [dataset](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Multimodal)) into this repository! Unlike SWE-bench, we've kept evaluation for the test split *private*. Submit to the leaderboard using [sb-cli](https://github.com/swe-bench/sb-cli/tree/main), our new cloud-based evaluation tool.
* **[Jan. 11, 2025]**: Thanks to [Modal](https://modal.com/), you can now run evaluations entirely on the cloud! See [here](https://github.com/swe-bench/SWE-bench/blob/main/assets/evaluation.md#%EF%B8%8F-evaluation-with-modal) for more details.
* **[Aug. 13, 2024]**: Introducing *SWE-bench Verified*! Part 2 of our collaboration with [OpenAI Preparedness](https://openai.com/preparedness/). A subset of 500 problems that real software engineers have confirmed are solvable. Check out more in the [report](https://openai.com/index/introducing-swe-bench-verified/)!
* **[Jun. 27, 2024]**: We have an exciting update for SWE-bench - with support from [OpenAI's Preparedness](https://openai.com/preparedness/) team: We're moving to a fully containerized evaluation harness using Docker for more reproducible evaluations! Read more in our [report](https://github.com/swe-bench/SWE-bench/blob/main/docs/20240627_docker/README.md).
* **[Apr. 2, 2024]**: We have released [SWE-agent](https://github.com/SWE-agent/SWE-agent), which sets the state-of-the-art on the full SWE-bench test set! ([Tweet 🔗](https://twitter.com/jyangballin/status/1775114444370051582))
* **[Jan. 16, 2024]**: SWE-bench has been accepted to ICLR 2024 as an oral presentation! ([OpenReview 🔗](https://openreview.net/forum?id=VTF8yNQM66))
## 👋 Overview
SWE-bench is a benchmark for evaluating large language models on real-world software issues collected from GitHub.
Given a *codebase* and an *issue*, a language model is tasked with generating a *patch* that resolves the described problem.
<img src="assets/figures/teaser.png">
To access SWE-bench, copy and run the following code:
```python
from datasets import load_dataset
swebench = load_dataset('princeton-nlp/SWE-bench', split='test')
```
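Each row of the dataset is a single task instance. As a quick sanity check, here is a minimal sketch that prints a few fields of the first instance; the column names (`instance_id`, `repo`, `problem_statement`) assume the schema of the published dataset:
```python
from datasets import load_dataset

# Load the test split of SWE-bench from the Hugging Face Hub
swebench = load_dataset('princeton-nlp/SWE-bench', split='test')

# Inspect one task instance (column names assume the published dataset schema)
example = swebench[0]
print(example['instance_id'])               # e.g. "sympy__sympy-20590"
print(example['repo'])                      # source GitHub repository
print(example['problem_statement'][:300])   # the issue text given to the model
```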
## 🚀 Set Up
SWE-bench uses Docker for reproducible evaluations.
Follow the instructions in the [Docker setup guide](https://docs.docker.com/engine/install/) to install Docker on your machine.
If you're setting up on Linux, we recommend seeing the [post-installation steps](https://docs.docker.com/engine/install/linux-postinstall/) as well.
Finally, to build SWE-bench from source, follow these steps:
```bash
git clone git@github.com:princeton-nlp/SWE-bench.git
cd SWE-bench
pip install -e .
```
Test your installation by running:
```bash
python -m swebench.harness.run_evaluation \
--predictions_path gold \
--max_workers 1 \
--instance_ids sympy__sympy-20590 \
--run_id validate-gold
```
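A quick way to confirm the run succeeded is to inspect the summary report the harness writes when it finishes. The snippet below is a sketch under the assumption that the report lands in the current directory with the run ID in its filename; exact key names vary by harness version:
```python
import glob
import json

# Look for the summary report written after the validation run
# (filename pattern assumed: <predictions name>.<run_id>.json in the current directory)
for path in glob.glob("*validate-gold*.json"):
    with open(path) as f:
        report = json.load(f)
    # Print the top-level numeric summary fields (e.g. counts of resolved instances)
    print(path, {k: v for k, v in report.items() if isinstance(v, int)})
```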
## 💽 Usage
Evaluate patch predictions on SWE-bench Lite with the following command:
```bash
python -m swebench.harness.run_evaluation \
--dataset_name princeton-nlp/SWE-bench_Lite \
--predictions_path <path_to_predictions> \
--max_workers <num_workers> \
--run_id <run_id>
# use --predictions_path 'gold' to verify the gold patches
# use --run_id to name the evaluation run
```
This command will generate Docker build logs (`logs/build_images`) and evaluation logs (`logs/run_evaluation`) in the current directory.
The final evaluation results will be stored in the `evaluation_results` directory.
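The file you pass via `--predictions_path` is a JSON (or JSONL) list with one entry per attempted task instance. Below is a minimal sketch for writing one, assuming the harness's documented fields `instance_id`, `model_name_or_path`, and `model_patch` (the unified diff produced by the model):
```python
import json

# One entry per task instance the model attempted.
# Field names assume the harness's documented predictions format:
# instance_id / model_name_or_path / model_patch.
predictions = [
    {
        "instance_id": "sympy__sympy-20590",
        "model_name_or_path": "my-model",         # hypothetical model name
        "model_patch": "diff --git a/sympy/...",  # unified diff generated by the model
    },
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```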
> [!WARNING]
> SWE-bench evaluation can be resource intensive.
> We recommend running on an `x86_64` machine with at least 120GB of free storage, 16GB of RAM, and 8 CPU cores.
> We also recommend setting `--max_workers` to fewer than `min(0.75 * os.cpu_count(), 24)` (see the sketch after this note).
>
> If running with Docker Desktop, make sure to increase your virtual disk space to have ~120 GB free, and set `--max_workers` in line with the guidance above for the CPUs available to Docker.
>
> Support for `arm64` machines is experimental.
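As a concrete illustration of the worker guideline in the note above, this small sketch computes the suggested cap for the current machine:
```python
import os

# Suggested upper bound for --max_workers: min(0.75 * CPU count, 24)
suggested_max_workers = min(int(0.75 * os.cpu_count()), 24)
print(f"Suggested --max_workers upper bound: {suggested_max_workers}")
```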
To see the full list of arguments for the evaluation harness, run:
```bash
python -m swebench.harness.run_evaluation --help
```
See the [evaluation tutorial](assets/evaluation.md) for the full rundown on datasets you can evaluate.
If you're looking for non-local, cloud-based evaluations, check out...
* [sb-cli](https://github.com/swe-bench/sb-cli), our tool for running evaluations automatically on AWS, or...
* Running SWE-bench evaluation on [Modal](https://modal.com/). Details [here](https://github.com/swe-bench/SWE-bench/blob/main/assets/evaluation.md#%EF%B8%8F-evaluation-with-modal)
Additionally, you can:
* [Train](https://github.com/swe-bench/SWE-bench/tree/main/swebench/inference/make_datasets) your own models on our pre-processed datasets.
* Run [inference](https://github.com/swe-bench/SWE-bench/blob/main/swebench/inference/README.md) on existing models (both local and API models). The inference step is where you give the model a repo + issue and have it generate a fix.
* Run SWE-bench's [data collection procedure](https://github.com/swe-bench/SWE-bench/blob/main/swebench/collect/) ([tutorial](assets/collection.md)) on your own repositories to make new SWE-bench tasks.
  * ⚠️ We are temporarily pausing support for queries around creating SWE-bench instances. Please see the note in the tutorial.
## ⬇️ Downloads
| Datasets | Models | RAG |
| - | - | - |
| [💿 SWE-bench](https://huggingface.co/datasets/princeton-nlp/SWE-bench) | [🦙 SWE-Llama 13b](https://huggingface.co/princeton-nlp/SWE-Llama-13b) | [🤗 "Oracle" Retrieval](https://huggingface.co/datasets/princeton-nlp/SWE-bench_oracle) |
| [💿 SWE-bench Lite](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Lite) | [🦙 SWE-Llama 13b (PEFT)](https://huggingface.co/princeton-nlp/SWE-Llama-13b-peft) | [🤗 BM25 Retrieval 13K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_13K) |
| [💿 SWE-bench Verified](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Verified) | [🦙 SWE-Llama 7b](https://huggingface.co/princeton-nlp/SWE-Llama-7b) | [🤗 BM25 Retrieval 27K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_27K) |
| [💿 SWE-bench Multimodal](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Multimodal) | [🦙 SWE-Llama 7b (PEFT)](https://huggingface.co/princeton-nlp/SWE-Llama-7b-peft) | [🤗 BM25 Retrieval 40K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_40K) |
| | | [🤗 BM25 Retrieval 50K (Llama tokens)](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_50k_llama) |
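The dataset variants above load the same way as the main benchmark. For example (dataset IDs taken from the table; split names are assumed to match the main benchmark):
```python
from datasets import load_dataset

# Smaller, curated subsets of the benchmark
lite = load_dataset('princeton-nlp/SWE-bench_Lite', split='test')
verified = load_dataset('princeton-nlp/SWE-bench_Verified', split='test')

# Retrieval-augmented variant with "oracle" file context for RAG experiments
oracle = load_dataset('princeton-nlp/SWE-bench_oracle', split='test')

print(len(lite), len(verified), len(oracle))
```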
## 💫 Contributions
We would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues!
To do so, please file a new pull request or issue and fill in the corresponding templates; we'll be sure to follow up shortly!
Contacts: [Carlos E. Jimenez](http://www.carlosejimenez.com/) and [John Yang](https://john-b-yang.github.io/) (Email: carlosej@princeton.edu, johnby@stanford.edu).
## ✍️ Citation
If you find our work helpful, please use the following citations.
```
@inproceedings{
jimenez2024swebench,
title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=VTF8yNQM66}
}
@inproceedings{
yang2024swebenchmultimodal,
title={{SWE}-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?},
author={John Yang and Carlos E. Jimenez and Alex L. Zhang and Kilian Lieret and Joyce Yang and Xindi Wu and Ori Press and Niklas Muennighoff and Gabriel Synnaeve and Karthik R. Narasimhan and Diyi Yang and Sida I. Wang and Ofir Press},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=riTiq3i21b}
}
```
## 🪪 License
MIT. Check `LICENSE.md`.
Raw data
```json
{
"_id": null,
"home_page": "https://swebench.com",
"name": "swebench",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "nlp, benchmark, code",
"author": "John Yang",
"author_email": "byjohnyang@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/a8/05/c163c2ee93f306110b27ddcdc7800ca1932c7489a35973e11c113d64d767/swebench-3.0.15.tar.gz",
"platform": null,
"description": "<p align=\"center\">\n <a href=\"http://swe-bench.github.io\">\n <img src=\"assets/figures/swellama_banner.svg\" style=\"height: 10em\" alt=\"Kawi the SWE-Llama\" />\n </a>\n</p>\n\n<div align=\"center\">\n\n | [\u65e5\u672c\u8a9e](docs/README_JP.md) | [English](https://github.com/swe-bench/SWE-bench) | [\u4e2d\u6587\u7b80\u4f53](docs/README_CN.md) | [\u4e2d\u6587\u7e41\u9ad4](docs/README_TW.md) |\n\n</div>\n\n<p align=\"center\">\n <a href=\"https://www.python.org/\">\n <img alt=\"Build\" src=\"https://img.shields.io/badge/Python-3.8+-1f425f.svg?color=purple\">\n </a>\n <a href=\"https://copyright.princeton.edu/policy\">\n <img alt=\"License\" src=\"https://img.shields.io/badge/License-MIT-blue\">\n </a>\n <a href=\"https://badge.fury.io/py/swebench\">\n <img src=\"https://badge.fury.io/py/swebench.svg\">\n </a>\n</p>\n\n---\n\nCode and data for the following works:\n* [ICLR 2025] <a href=\"https://arxiv.org/abs/2410.03859\">SWE-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?</a>\n* [ICLR 2024 Oral] <a href=\"https://arxiv.org/abs/2310.06770\">SWE-bench: Can Language Models Resolve Real-World GitHub Issues?</a>\n\n## \ud83d\udcf0 News\n* **[Jan. 13, 2025]**: We've integrated [SWE-bench Multimodal](https://swebench.github.io/multimodal) ([paper](https://arxiv.org/abs/2410.03859), [dataset](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Multimodal)) into this repository! Unlike SWE-bench, we've kept evaluation for the test split *private*. Submit to the leaderboard using [sb-cli](https://github.com/swe-bench/sb-cli/tree/main), our new cloud-based evaluation tool.\n* **[Jan. 11, 2025]**: Thanks to [Modal](https://modal.com/), you can now run evaluations entirely on the cloud! See [here](https://github.com/swe-bench/SWE-bench/blob/main/assets/evaluation.md#%EF%B8%8F-evaluation-with-modal) for more details.\n* **[Aug. 13, 2024]**: Introducing *SWE-bench Verified*! Part 2 of our collaboration with [OpenAI Preparedness](https://openai.com/preparedness/). A subset of 500 problems that real software engineers have confirmed are solvable. Check out more in the [report](https://openai.com/index/introducing-swe-bench-verified/)!\n* **[Jun. 27, 2024]**: We have an exciting update for SWE-bench - with support from [OpenAI's Preparedness](https://openai.com/preparedness/) team: We're moving to a fully containerized evaluation harness using Docker for more reproducible evaluations! Read more in our [report](https://github.com/swe-bench/SWE-bench/blob/main/docs/20240627_docker/README.md).\n* **[Apr. 2, 2024]**: We have released [SWE-agent](https://github.com/SWE-agent/SWE-agent), which sets the state-of-the-art on the full SWE-bench test set! ([Tweet \ud83d\udd17](https://twitter.com/jyangballin/status/1775114444370051582))\n* **[Jan. 16, 2024]**: SWE-bench has been accepted to ICLR 2024 as an oral presentation! 
([OpenReview \ud83d\udd17](https://openreview.net/forum?id=VTF8yNQM66))\n\n## \ud83d\udc4b Overview\nSWE-bench is a benchmark for evaluating large language models on real world software issues collected from GitHub.\nGiven a *codebase* and an *issue*, a language model is tasked with generating a *patch* that resolves the described problem.\n\n<img src=\"assets/figures/teaser.png\">\n\nTo access SWE-bench, copy and run the following code:\n```python\nfrom datasets import load_dataset\nswebench = load_dataset('princeton-nlp/SWE-bench', split='test')\n```\n\n## \ud83d\ude80 Set Up\nSWE-bench uses Docker for reproducible evaluations.\nFollow the instructions in the [Docker setup guide](https://docs.docker.com/engine/install/) to install Docker on your machine.\nIf you're setting up on Linux, we recommend seeing the [post-installation steps](https://docs.docker.com/engine/install/linux-postinstall/) as well.\n\nFinally, to build SWE-bench from source, follow these steps:\n```bash\ngit clone git@github.com:princeton-nlp/SWE-bench.git\ncd SWE-bench\npip install -e .\n```\n\nTest your installation by running:\n```bash\npython -m swebench.harness.run_evaluation \\\n --predictions_path gold \\\n --max_workers 1 \\\n --instance_ids sympy__sympy-20590 \\\n --run_id validate-gold\n```\n\n## \ud83d\udcbd Usage\nEvaluate patch predictions on SWE-bench Lite with the following command:\n```bash\npython -m swebench.harness.run_evaluation \\\n --dataset_name princeton-nlp/SWE-bench_Lite \\\n --predictions_path <path_to_predictions> \\\n --max_workers <num_workers> \\\n --run_id <run_id>\n # use --predictions_path 'gold' to verify the gold patches\n # use --run_id to name the evaluation run\n```\n\nThis command will generate docker build logs (`logs/build_images`) and evaluation logs (`logs/run_evaluation`) in the current directory.\n\nThe final evaluation results will be stored in the `evaluation_results` directory.\n\n> [!WARNING]\n> SWE-bench evaluation can be resource intensive\n> We recommend running on an `x86_64` machine with at least 120GB of free storage, 16GB of RAM, and 8 CPU cores.\n> We recommend using fewer than `min(0.75 * os.cpu_count(), 24)` for `--max_workers`.\n>\n> If running with Docker desktop, make sure to increase your virtual disk space to ~120 free GB. Set max_workers to be consistent with the above for the CPUs available to Docker.\n>\n> Support for `arm64` machines is experimental.\n\nTo see the full list of arguments for the evaluation harness, run:\n```bash\npython -m swebench.harness.run_evaluation --help\n```\n\nSee the [evaluation tutorial](assets/evaluation.md) for the full rundown on datasets you can evaluate.\nIf you're looking for non-local, cloud based evaluations, check out...\n* [sb-cli](https://github.com/swe-bench/sb-cli), our tool for running evaluations automatically on AWS, or...\n* Running SWE-bench evaluation on [Modal](https://modal.com/). Details [here](https://github.com/swe-bench/SWE-bench/blob/main/assets/evaluation.md#%EF%B8%8F-evaluation-with-modal)\n\nAdditionally, you can also:\n* [Train](https://github.com/swe-bench/SWE-bench/tree/main/swebench/inference/make_datasets) your own models on our pre-processed datasets.\n* Run [inference](https://github.com/swe-bench/SWE-bench/blob/main/swebench/inference/README.md) on existing models (both local and API models). 
The inference step is where you give the model a repo + issue and have it generate a fix.\n* Run SWE-bench's [data collection procedure](https://github.com/swe-bench/SWE-bench/blob/main/swebench/collect/) ([tutorial](assets/collection.md)) on your own repositories, to make new SWE-Bench tasks.\n * \u26a0\ufe0f We are temporarily pausing support for queries around creating SWE-bench instances. Please see the note in the tutorial.\n\n## \u2b07\ufe0f Downloads\n| Datasets | Models | RAG |\n| - | - | - |\n| [\ud83d\udcbf SWE-bench](https://huggingface.co/datasets/princeton-nlp/SWE-bench) | [\ud83e\udd99 SWE-Llama 13b](https://huggingface.co/princeton-nlp/SWE-Llama-13b) | [\ud83e\udd17 \"Oracle\" Retrieval](https://huggingface.co/datasets/princeton-nlp/SWE-bench_oracle) |\n| [\ud83d\udcbf SWE-bench Lite](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Lite) | [\ud83e\udd99 SWE-Llama 13b (PEFT)](https://huggingface.co/princeton-nlp/SWE-Llama-13b-peft) | [\ud83e\udd17 BM25 Retrieval 13K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_13K) |\n| [\ud83d\udcbf SWE-bench Verified](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Verified) | [\ud83e\udd99 SWE-Llama 7b](https://huggingface.co/princeton-nlp/SWE-Llama-7b) | [\ud83e\udd17 BM25 Retrieval 27K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_27K) |\n| [\ud83d\udcbf SWE-bench Multimodal](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Multimodal) | [\ud83e\udd99 SWE-Llama 7b (PEFT)](https://huggingface.co/princeton-nlp/SWE-Llama-7b-peft) | [\ud83e\udd17 BM25 Retrieval 40K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_40K) |\n| | | [\ud83e\udd17 BM25 Retrieval 50K (Llama tokens)](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_50k_llama) |\n\n## \ud83d\udcab Contributions\nWe would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues!\nTo do so, please either file a new pull request or issue and fill in the corresponding templates accordingly. We'll be sure to follow up shortly!\n\nContact person: [Carlos E. Jimenez](http://www.carlosejimenez.com/) and [John Yang](https://john-b-yang.github.io/) (Email: carlosej@princeton.edu, johnby@stanford.edu).\n\n## \u270d\ufe0f Citation\nIf you find our work helpful, please use the following citations.\n\n```\n@inproceedings{\n jimenez2024swebench,\n title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},\n author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},\n booktitle={The Twelfth International Conference on Learning Representations},\n year={2024},\n url={https://openreview.net/forum?id=VTF8yNQM66}\n}\n\n@inproceedings{\n yang2024swebenchmultimodal,\n title={{SWE}-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?},\n author={John Yang and Carlos E. Jimenez and Alex L. Zhang and Kilian Lieret and Joyce Yang and Xindi Wu and Ori Press and Niklas Muennighoff and Gabriel Synnaeve and Karthik R. Narasimhan and Diyi Yang and Sida I. Wang and Ofir Press},\n booktitle={The Thirteenth International Conference on Learning Representations},\n year={2025},\n url={https://openreview.net/forum?id=riTiq3i21b}\n}\n```\n\n## \ud83e\udeaa License\nMIT. Check `LICENSE.md`.\n",
"bugtrack_url": null,
"license": null,
"summary": "The official SWE-bench package - a benchmark for evaluating LMs on software engineering",
"version": "3.0.15",
"project_urls": {
"Bug Reports": "http://github.com/swe-bench/SWE-bench/issues",
"Documentation": "https://github.com/swe-bench/SWE-bench",
"Homepage": "https://swebench.com",
"Source Code": "http://github.com/swe-bench/SWE-bench",
"Website": "https://swebench.com"
},
"split_keywords": [
"nlp",
" benchmark",
" code"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "626cfebe6bb4398e03aa48d50c555b36d2ac26b2e6d3c427ff9dba499b2557a2",
"md5": "9da54e4480eb79e10948f806670fa95b",
"sha256": "dd694356f9c155a55d3d2e113fe58446f7385eea0574230af5e2504426f8b85b"
},
"downloads": -1,
"filename": "swebench-3.0.15-py3-none-any.whl",
"has_sig": false,
"md5_digest": "9da54e4480eb79e10948f806670fa95b",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 125151,
"upload_time": "2025-03-02T23:50:13",
"upload_time_iso_8601": "2025-03-02T23:50:13.589923Z",
"url": "https://files.pythonhosted.org/packages/62/6c/febe6bb4398e03aa48d50c555b36d2ac26b2e6d3c427ff9dba499b2557a2/swebench-3.0.15-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "a805c163c2ee93f306110b27ddcdc7800ca1932c7489a35973e11c113d64d767",
"md5": "96137f5ef305c7b8efba4ceeaa230f52",
"sha256": "24e734fbcce34082665a25719075e6899382b7135103dd8c6cc09a6e23789101"
},
"downloads": -1,
"filename": "swebench-3.0.15.tar.gz",
"has_sig": false,
"md5_digest": "96137f5ef305c7b8efba4ceeaa230f52",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 108523,
"upload_time": "2025-03-02T23:50:15",
"upload_time_iso_8601": "2025-03-02T23:50:15.526137Z",
"url": "https://files.pythonhosted.org/packages/a8/05/c163c2ee93f306110b27ddcdc7800ca1932c7489a35973e11c113d64d767/swebench-3.0.15.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-03-02 23:50:15",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "swe-bench",
"github_project": "SWE-bench",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "swebench"
}
```