swebench

Name: swebench
Version: 1.1.0
Home page: https://swebench.com
Summary: The official SWE-bench package - a benchmark for evaluating LMs on software engineering
Upload time: 2024-04-15 22:21:05
Maintainer: None
Docs URL: None
Author: John Yang
Requires Python: >=3.8
License: None
Keywords: nlp, benchmark, code
Requirements: No requirements were recorded.
            <p align="center">
  <a href="https://github.com/princeton-nlp/Llamao">
    <img src="assets/swellama_banner.png" width="50%" alt="Kawi the SWE-Llama" />
  </a>
</p>

<div align="center">

 | [日本語](docs/README_JP.md) | [English](https://github.com/princeton-nlp/SWE-bench) | [中文简体](docs/README_CN.md) | [中文繁體](docs/README_TW.md) |

</div>


---
<p align="center">
Code and data for our ICLR 2024 paper <a href="http://swe-bench.github.io/paper.pdf">SWE-bench: Can Language Models Resolve Real-World GitHub Issues?</a>
    <br/>
    <br/>
    <a href="https://www.python.org/">
        <img alt="Build" src="https://img.shields.io/badge/Python-3.8+-1f425f.svg?color=purple">
    </a>
    <a href="https://copyright.princeton.edu/policy">
        <img alt="License" src="https://img.shields.io/badge/License-MIT-blue">
    </a>
    <a href="https://badge.fury.io/py/swebench">
        <img src="https://badge.fury.io/py/swebench.svg">
    </a>
</p>

Please refer to our [website](http://swe-bench.github.io) for the public leaderboard, and see the [change log](https://github.com/princeton-nlp/SWE-bench/blob/main/CHANGELOG.md) for information on the latest updates to the SWE-bench benchmark.

## 📰 News
* **[Apr. 15, 2024]**: SWE-bench has gone through major improvements to resolve issues with the evaluation harness. Read more in our [report](https://github.com/princeton-nlp/SWE-bench/blob/main/docs/20240405_eval_bug/README.md).
* **[Apr. 2, 2024]**: We have released [SWE-agent](https://github.com/princeton-nlp/SWE-agent), which achieves state-of-the-art performance on the full SWE-bench test set! ([Tweet 🔗](https://twitter.com/jyangballin/status/1775114444370051582))
* **[Jan. 16, 2024]**: SWE-bench has been accepted to ICLR 2024 as an oral presentation! ([OpenReview 🔗](https://openreview.net/forum?id=VTF8yNQM66))

## 👋 Overview
SWE-bench is a benchmark for evaluating large language models on real-world software issues collected from GitHub.
Given a *codebase* and an *issue*, a language model is tasked with generating a *patch* that resolves the described problem.

<img src="assets/teaser.png">

## 🚀 Set Up
To build SWE-bench from source, follow these steps:
1. Clone this repository locally.
2. `cd` into the repository.
3. Run `conda env create -f environment.yml` to create a conda environment named `swe-bench`.
4. Activate the environment with `conda activate swe-bench`.

## 💽 Usage
You can download the SWE-bench dataset directly ([dev](https://drive.google.com/uc?export=download&id=1SbOxHiR0eXlq2azPSSOIDZz-Hva0ETpX), [test](https://drive.google.com/uc?export=download&id=164g55i3_B78F6EphCZGtgSrd2GneFyRM) sets) or from [HuggingFace](https://huggingface.co/datasets/princeton-nlp/SWE-bench).
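For instance, here is a minimal sketch of loading the dataset via the HuggingFace `datasets` library (the dataset name comes from the link above; the field names shown are from the dataset card and are meant to be illustrative):

```
from datasets import load_dataset

# Load the SWE-bench test split from the HuggingFace Hub.
# The source above also links a "dev" split.
swe_bench = load_dataset("princeton-nlp/SWE-bench", split="test")

# Each task instance pairs a repository snapshot with a GitHub issue.
example = swe_bench[0]
print(example["repo"])               # source repository, e.g. "astropy/astropy"
print(example["instance_id"])        # unique identifier for the task
print(example["problem_statement"])  # the issue text the model must resolve
```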

To use SWE-bench, you can:
* Train your own models on our pre-processed datasets.
* Run [inference](https://github.com/princeton-nlp/SWE-bench/blob/main/inference/) on existing models (either models you have on disk, like LLaMA, or models you access through an API, like GPT-4). Inference takes a repository and an issue and asks the model to generate a fix for it.
* [Evaluate](https://github.com/princeton-nlp/SWE-bench/blob/main/swebench/harness/) models against SWE-bench. This takes a SWE-bench task and a model-proposed solution and evaluates the solution's correctness (see the sketch after this list).
* Run SWE-bench's [data collection procedure](https://github.com/princeton-nlp/SWE-bench/blob/main/swebench/collect/) on your own repositories to create new SWE-bench tasks.
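
As a companion to the evaluation bullet above, here is a hedged sketch of what a model-proposed solution looks like when handed to the harness. The key names (`instance_id`, `model_name_or_path`, `model_patch`) are assumptions based on the harness documentation; check `swebench/harness` for the authoritative format:

```
import json

# One prediction record: which task it solves, which model produced it,
# and the unified diff the model proposed as a fix.
prediction = {
    "instance_id": "astropy__astropy-12907",  # a SWE-bench task identifier (illustrative)
    "model_name_or_path": "my-model",         # label for the model under evaluation
    "model_patch": "diff --git a/file.py b/file.py\n...",  # model-generated patch
}

# The harness consumes a file of such records, one per attempted task.
with open("predictions.json", "w") as f:
    json.dump([prediction], f, indent=2)
```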

## ⬇️ Downloads
| Datasets | Models |
| - | - |
| [🤗 SWE-bench](https://huggingface.co/datasets/princeton-nlp/SWE-bench) | [🦙 SWE-Llama 13b](https://huggingface.co/princeton-nlp/SWE-Llama-13b) |
| [🤗 "Oracle" Retrieval](https://huggingface.co/datasets/princeton-nlp/SWE-bench_oracle) | [🦙 SWE-Llama 13b (PEFT)](https://huggingface.co/princeton-nlp/SWE-Llama-13b-peft) |
| [🤗 BM25 Retrieval 13K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_13K) | [🦙 SWE-Llama 7b](https://huggingface.co/princeton-nlp/SWE-Llama-7b) |
| [🤗 BM25 Retrieval 27K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_27K) | [🦙 SWE-Llama 7b (PEFT)](https://huggingface.co/princeton-nlp/SWE-Llama-7b-peft) |
| [🤗 BM25 Retrieval 40K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_40K) | |
| [🤗 BM25 Retrieval 50K (Llama tokens)](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_50k_llama)   | |

## 🍎 Tutorials
We've also written the following blog posts on how to use different parts of SWE-bench.
If you'd like to see a post about a particular topic, please let us know via an issue.
* [Nov 1, 2023] Collecting Evaluation Tasks for SWE-Bench ([🔗](https://github.com/princeton-nlp/SWE-bench/tree/main/tutorials/collection.md))
* [Nov 6, 2023] Evaluating on SWE-bench ([🔗](https://github.com/princeton-nlp/SWE-bench/tree/main/tutorials/evaluation.md))

## 💫 Contributions
We would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues!
To contribute, please file a new pull request or issue and fill in the corresponding template. We'll be sure to follow up shortly!

Contact persons: [Carlos E. Jimenez](http://www.carlosejimenez.com/) and [John Yang](https://john-b-yang.github.io/) (email: {carlosej, jy1682}@princeton.edu).

## ✍️ Citation
If you find our work helpful, please use the following citation.
```
@inproceedings{
    jimenez2024swebench,
    title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
    author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=VTF8yNQM66}
}
```

## 🪪 License
MIT. Check `LICENSE.md`.

            

Raw data

            {
    "_id": null,
    "home_page": "https://swebench.com",
    "name": "swebench",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "nlp, benchmark, code",
    "author": "John Yang",
    "author_email": "byjohnyang@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/c6/8d/431b9a32126b1d5e226ecfaef5bbbcba63f03dc0f7fea05bc4bb95341f21/swebench-1.1.0.tar.gz",
    "platform": null,
    "description": "<p align=\"center\">\n  <a href=\"https://github.com/princeton-nlp/Llamao\">\n    <img src=\"assets/swellama_banner.png\" width=\"50%\" alt=\"Kawi the SWE-Llama\" />\n  </a>\n</p>\n\n<div align=\"center\">\n\n | [\u65e5\u672c\u8a9e](docs/README_JP.md) | [English](https://github.com/princeton-nlp/SWE-bench) | [\u4e2d\u6587\u7b80\u4f53](docs/README_CN.md) | [\u4e2d\u6587\u7e41\u9ad4](docs/README_TW.md) |\n\n</div>\n\n\n---\n<p align=\"center\">\nCode and data for our ICLR 2024 paper <a href=\"http://swe-bench.github.io/paper.pdf\">SWE-bench: Can Language Models Resolve Real-World GitHub Issues?</a>\n    </br>\n    </br>\n    <a href=\"https://www.python.org/\">\n        <img alt=\"Build\" src=\"https://img.shields.io/badge/Python-3.8+-1f425f.svg?color=purple\">\n    </a>\n    <a href=\"https://copyright.princeton.edu/policy\">\n        <img alt=\"License\" src=\"https://img.shields.io/badge/License-MIT-blue\">\n    </a>\n    <a href=\"https://badge.fury.io/py/swebench\">\n        <img src=\"https://badge.fury.io/py/swebench.svg\">\n    </a>\n</p>\n\nPlease refer our [website](http://swe-bench.github.io) for the public leaderboard and the [change log](https://github.com/princeton-nlp/SWE-bench/blob/main/CHANGELOG.md) for information on the latest updates to the SWE-bench benchmark.\n\n## \ud83d\udcf0 News\n* **[Apr. 15, 2024]**: SWE-bench has gone through major improvements to resolve issues with the evaluation harness. Read more in our [report](https://github.com/princeton-nlp/SWE-bench/blob/main/docs/20240405_eval_bug/README.md).\n* **[Apr. 2, 2024]**: We have released [SWE-agent](https://github.com/princeton-nlp/SWE-agent), which sets the state-of-the-art on the full SWE-bench test set! ([Tweet \ud83d\udd17](https://twitter.com/jyangballin/status/1775114444370051582))\n* **[Jan. 16, 2024]**: SWE-bench has been accepted to ICLR 2024 as an oral presentation! ([OpenReview \ud83d\udd17](https://openreview.net/forum?id=VTF8yNQM66))\n\n## \ud83d\udc4b Overview\nSWE-bench is a benchmark for evaluating large language models on real world software issues collected from GitHub.\nGiven a *codebase* and an *issue*, a language model is tasked with generating a *patch* that resolves the described problem.\n\n<img src=\"assets/teaser.png\">\n\n## \ud83d\ude80 Set Up\nTo build SWE-bench from source, follow these steps:\n1. Clone this repository locally\n2. `cd` into the repository.\n3. Run `conda env create -f environment.yml` to created a conda environment named `swe-bench`\n4. Activate the environment with `conda activate swe-bench`\n\n## \ud83d\udcbd Usage\nYou can download the SWE-bench dataset directly ([dev](https://drive.google.com/uc?export=download&id=1SbOxHiR0eXlq2azPSSOIDZz-Hva0ETpX), [test](https://drive.google.com/uc?export=download&id=164g55i3_B78F6EphCZGtgSrd2GneFyRM) sets) or from [HuggingFace](https://huggingface.co/datasets/princeton-nlp/SWE-bench).\n\nTo use SWE-Bench, you can:\n* Train your own models on our pre-processed datasets  \n* Run [inference](https://github.com/princeton-nlp/SWE-bench/blob/main/inference/) on existing models (either models you have on-disk like LLaMA, or models you have access to through an API like GPT-4). The inference step is where you get a repo and an issue and have the model try to generate a fix for it.\n* [Evaluate](https://github.com/princeton-nlp/SWE-bench/blob/main/swebench/harness/) models against SWE-bench. This is where you take a SWE-Bench task and a model-proposed solution and evaluate its correctness. 
\n*  Run SWE-bench's [data collection procedure](https://github.com/princeton-nlp/SWE-bench/blob/main/swebench/collect/) on your own repositories, to make new SWE-Bench tasks. \n\n## \u2b07\ufe0f Downloads\n| Datasets | Models |\n| - | - |\n| [\ud83e\udd17 SWE-bench](https://huggingface.co/datasets/princeton-nlp/SWE-bench) | [\ud83e\udd99 SWE-Llama 13b](https://huggingface.co/princeton-nlp/SWE-Llama-13b) |\n| [\ud83e\udd17 \"Oracle\" Retrieval](https://huggingface.co/datasets/princeton-nlp/SWE-bench_oracle) | [\ud83e\udd99 SWE-Llama 13b (PEFT)](https://huggingface.co/princeton-nlp/SWE-Llama-13b-peft) |\n| [\ud83e\udd17 BM25 Retrieval 13K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_13K) | [\ud83e\udd99 SWE-Llama 7b](https://huggingface.co/princeton-nlp/SWE-Llama-7b) |\n| [\ud83e\udd17 BM25 Retrieval 27K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_27K) | [\ud83e\udd99 SWE-Llama 7b (PEFT)](https://huggingface.co/princeton-nlp/SWE-Llama-7b-peft) |\n| [\ud83e\udd17 BM25 Retrieval 40K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_40K) | |\n| [\ud83e\udd17 BM25 Retrieval 50K (Llama tokens)](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_50k_llama)   | |\n\n## \ud83c\udf4e Tutorials\nWe've also written the following blog posts on how to use different parts of SWE-bench.\nIf you'd like to see a post about a particular topic, please let us know via an issue.\n* [Nov 1. 2023] Collecting Evaluation Tasks for SWE-Bench ([\ud83d\udd17](https://github.com/princeton-nlp/SWE-bench/tree/main/tutorials/collection.md))\n* [Nov 6. 2023] Evaluating on SWE-bench ([\ud83d\udd17](https://github.com/princeton-nlp/SWE-bench/tree/main/tutorials/evaluation.md))\n\n## \ud83d\udcab Contributions\nWe would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues!\nTo do so, please either file a new pull request or issue and fill in the corresponding templates accordingly. We'll be sure to follow up shortly!\n\nContact person: [Carlos E. Jimenez](http://www.carlosejimenez.com/) and [John Yang](https://john-b-yang.github.io/) (Email: {carlosej, jy1682}@princeton.edu).\n\n## \u270d\ufe0f Citation\nIf you find our work helpful, please use the following citations.\n```\n@inproceedings{\n    jimenez2024swebench,\n    title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},\n    author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},\n    booktitle={The Twelfth International Conference on Learning Representations},\n    year={2024},\n    url={https://openreview.net/forum?id=VTF8yNQM66}\n}\n```\n\n## \ud83e\udeaa License\nMIT. Check `LICENSE.md`.\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "The official SWE-bench package - a benchmark for evaluating LMs on software engineering",
    "version": "1.1.0",
    "project_urls": {
        "Bug Reports": "http://github.com/princeton-nlp/SWE-bench/issues",
        "Documentation": "https://github.com/princeton-nlp/SWE-bench",
        "Homepage": "https://swebench.com",
        "Source Code": "http://github.com/princeton-nlp/SWE-bench",
        "Website": "https://swebench.com"
    },
    "split_keywords": [
        "nlp",
        " benchmark",
        " code"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "840b9870c4b137ce3ace9233d462972e3bfa91fa011bf3e35679835469d0d0f3",
                "md5": "e95c5db3b01ebdd4f11481a105127562",
                "sha256": "0cfd7ceab67706c15531577e4f9c2339385a0a4719f0eba65e997e30f73018d8"
            },
            "downloads": -1,
            "filename": "swebench-1.1.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "e95c5db3b01ebdd4f11481a105127562",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 98100,
            "upload_time": "2024-04-15T22:21:03",
            "upload_time_iso_8601": "2024-04-15T22:21:03.895548Z",
            "url": "https://files.pythonhosted.org/packages/84/0b/9870c4b137ce3ace9233d462972e3bfa91fa011bf3e35679835469d0d0f3/swebench-1.1.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c68d431b9a32126b1d5e226ecfaef5bbbcba63f03dc0f7fea05bc4bb95341f21",
                "md5": "71a49502d1718286fa8678abc831e718",
                "sha256": "309d66ee7fac726ed7c8a6b8540eab4263149e9485e2848f5a4e99906b40398d"
            },
            "downloads": -1,
            "filename": "swebench-1.1.0.tar.gz",
            "has_sig": false,
            "md5_digest": "71a49502d1718286fa8678abc831e718",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 84022,
            "upload_time": "2024-04-15T22:21:05",
            "upload_time_iso_8601": "2024-04-15T22:21:05.930204Z",
            "url": "https://files.pythonhosted.org/packages/c6/8d/431b9a32126b1d5e226ecfaef5bbbcba63f03dc0f7fea05bc4bb95341f21/swebench-1.1.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-15 22:21:05",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "princeton-nlp",
    "github_project": "SWE-bench",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "swebench"
}
        