tree-of-thoughts-llm

Name: tree-of-thoughts-llm
Version: 0.1.0
Home page: https://github.com/princeton-nlp/tree-of-thought-llm
Summary: Official Implementation of "Tree of Thoughts: Deliberate Problem Solving with Large Language Models"
Upload time: 2023-07-05 22:01:08
Author: Shunyu Yao
Requires Python: >=3.7
License: MIT License
Keywords: tree-search, large-language-models, llm, prompting, tree-of-thoughts
Requirements: No requirements were recorded.
# Official Repo of Tree of Thoughts (ToT)
[![DOI](https://zenodo.org/badge/642099326.svg)](https://zenodo.org/badge/latestdoi/642099326)

<details>
  <summary>Note: https://github.com/kyegomez/tree-of-thoughts CANNOT replicate paper results. </summary>

In fact, people have reported that his code [cannot](https://github.com/kyegomez/tree-of-thoughts/issues/52) [properly](https://github.com/kyegomez/tree-of-thoughts/issues/41) [run](https://github.com/kyegomez/tree-of-thoughts/issues/60), that it is [probably automatically generated by ChatGPT](pics/fake.png), and that [kyegomez has done the same for other popular ML methods](https://twitter.com/qbitium/status/1663954096741814272), while intentionally refusing to link to official implementations for his own benefit (see https://github.com/kyegomez/tree-of-thoughts/issues/54, https://github.com/kyegomez/tree-of-thoughts/issues/55, https://github.com/kyegomez/tree-of-thoughts/issues/56).
Unfortunately, Google/GitHub searches land on kyegomez's misleading repo by default because it has more stars. **Please DE-STAR his repo and STAR this one to help others avoid being misled. Thanks!**
</details>


![teaser](pics/teaser.png)

Official implementation of the paper [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https://arxiv.org/abs/2305.10601), with code, prompts, and model outputs.
Also check out [the tweet thread](https://twitter.com/ShunyuYao12/status/1659357547474681857) for a one-minute overview.

## Setup
- Set up an OpenAI API key and store it in the environment variable ``OPENAI_API_KEY`` (see [here](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety)).

- Install dependencies and `tot` package (PyPI package coming soon): 
```bash
git clone https://github.com/princeton-nlp/tree-of-thought-llm
cd tree-of-thought-llm
pip install -r requirements.txt
pip install -e .  # install `tot` package
```
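
As an optional sanity check (a minimal sketch, not part of the repo), you can confirm the key is visible to Python before making any API calls:
```python
import os

# The tot package reads OPENAI_API_KEY from the environment at call time;
# failing fast here beats failing mid-search.
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
```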


## Quick Start
The following minimal script attempts to solve Game of 24 with the puzzle `4 5 6 10` (it might be a bit slow as it uses GPT-4):
```python
import argparse
from tot.methods.bfs import solve
from tot.tasks.game24 import Game24Task

args = argparse.Namespace(
    backend='gpt-4', temperature=0.7, task='game24', naive_run=False,
    prompt_sample=None, method_generate='propose', method_evaluate='value',
    method_select='greedy', n_generate_sample=1, n_evaluate_sample=3,
    n_select_sample=5)

task = Game24Task()
ys, infos = solve(args, task, 900)  # 900 is the index of the `4 5 6 10` puzzle
print(ys[0])
```

The output should look something like this (note that decoding is not deterministic, so the output can sometimes be wrong):
```
10 - 4 = 6 (left: 5 6 6)
5 * 6 = 30 (left: 6 30)
30 - 6 = 24 (left: 24)
Answer: (5 * (10 - 4)) - 6 = 24
```
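
For comparison, the naive IO/CoT baselines from the paper can be run through a similar interface. The sketch below assumes ``naive_solve`` in ``tot.methods.bfs`` (the function ``run.py`` falls back to when ``--naive_run`` is set); the exact argument values are illustrative:
```python
import argparse
from tot.methods.bfs import naive_solve
from tot.tasks.game24 import Game24Task

# CoT baseline: sample chains of thought directly, with no tree search.
args = argparse.Namespace(
    backend='gpt-4', temperature=0.7, task='game24', naive_run=True,
    prompt_sample='cot', method_generate=None, method_evaluate=None,
    method_select=None, n_generate_sample=1, n_evaluate_sample=None,
    n_select_sample=None)

task = Game24Task()
ys, info = naive_solve(args, task, 900)  # same puzzle index as above
print(ys[0])
```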

## Paper Experiments

Run experiments via ``sh scripts/{game24, text, crosswords}/{standard_sampling, cot_sampling, bfs}.sh``. The exception is crosswords, where ToT uses a DFS algorithm instead of BFS; it can be run via ``scripts/crosswords/search_crosswords-dfs.ipynb``.

The very simple ``run.py`` implements the ToT + BFS algorithm, as well as the naive IO/CoT sampling. Some key arguments:

- ``--naive_run``: if True, run naive IO/CoT sampling instead of ToT + BFS.
- ``--prompt_sample`` (choices=[``standard``, ``cot``]): sampling prompt
- ``--method_generate`` (choices=[``sample``, ``propose``]): thought generator; whether to sample independent thoughts (used in Creative Writing) or propose sequential thoughts (used in Game of 24)
- ``--method_evaluate`` (choices=[``value``, ``vote``]): state evaluator; whether to value states independently (used in Game of 24) or vote on states together (used in Creative Writing). An example configuration follows this list.
- ``--n_generate_sample``: number of times to prompt for thought generation
- ``--n_evaluate_sample``: number of times to prompt for state evaluation
- ``--n_select_sample``: number of states to keep from each step (i.e. ``b`` in the paper's ToT + BFS algorithm)
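
As a concrete example of how these flags combine, the Creative Writing setup pairs ``sample`` with ``vote``. Below is a sketch of a matching configuration; the ``TextTask`` class name and the numeric values are assumptions based on the repo layout, not verbatim paper settings:
```python
import argparse
from tot.methods.bfs import solve
from tot.tasks.text import TextTask  # assumed class name for the text task

# Creative Writing: sample independent thoughts, vote on states, keep b=1.
args = argparse.Namespace(
    backend='gpt-4', temperature=1.0, task='text', naive_run=False,
    prompt_sample='cot', method_generate='sample', method_evaluate='vote',
    method_select='greedy', n_generate_sample=5, n_evaluate_sample=5,
    n_select_sample=1)

ys, infos = solve(args, TextTask(), 0)  # instance 0 of the text task
```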



## Paper Trajectories
``logs/`` contains all trajectories from the paper's experiments, except for ``logs/game24/gpt-4_0.7_propose1_value3_greedy5_start900_end1000.json``, which was reproduced after the paper (the original experiment was run in a notebook) and achieved a 69% score instead of the original 74% due to randomness in GPT decoding. We hope to aggregate multiple runs in the future to account for sampling randomness and update the paper, but this should not affect its main conclusions.

## How to Add a New Task
Setting up a new task is easy and mainly involves two steps (a rough skeleton is sketched after this list):
* Set up a new task class in ``tot/tasks/`` and task files in ``tot/data/``. See ``tot/tasks/game24.py`` for an example. Add the task to ``tot/tasks/__init__.py``.
* Set up task-specific prompts in ``tot/prompts/``. See ``tot/prompts/game24.py`` for an example. Depending on the nature of the task, choose ``--method_generate`` (choices=[``sample``, ``propose``]) and ``--method_evaluate`` (choices=[``value``, ``vote``]) and their corresponding prompts.
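
As a rough sketch of the first step, here is the shape such a class might take. The attribute and method names mirror ``tot/tasks/game24.py`` and are illustrative; check that file and ``tot/tasks/base.py`` for the exact interface the search methods expect:
```python
# tot/tasks/mytask.py -- hypothetical new task, for illustration only
from tot.tasks.base import Task

class MyTask(Task):
    def __init__(self):
        super().__init__()
        # Normally loaded from a file under tot/data/; inlined here.
        self.data = ['example input 1', 'example input 2']
        self.steps = 2           # tree depth used by the BFS loop
        self.stops = ['\n'] * 2  # per-step stop token(s) for generation

    def __len__(self) -> int:
        return len(self.data)

    def get_input(self, idx: int) -> str:
        return self.data[idx]

    def test_output(self, idx: int, output: str) -> dict:
        # Score a final output; game24 returns e.g. {'r': 1} if correct.
        return {'r': 0}  # placeholder: replace with real checking logic

    # Depending on --method_generate / --method_evaluate, also define the
    # prompt-wrap helpers the search code calls (see game24.py for examples,
    # e.g. propose_prompt_wrap / value_prompt_wrap / value_outputs_unwrap).
```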

## Citations
Please cite the paper and star this repo if you use ToT and find it interesting or useful. Feel free to contact shunyuyao.cs@gmail.com or open an issue if you have any questions.

```bibtex
@misc{yao2023tree,
      title={{Tree of Thoughts}: Deliberate Problem Solving with Large Language Models}, 
      author={Shunyu Yao and Dian Yu and Jeffrey Zhao and Izhak Shafran and Thomas L. Griffiths and Yuan Cao and Karthik Narasimhan},
      year={2023},
      eprint={2305.10601},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

            
