recoma

Name: recoma
Version: 0.0.2
Home page: https://github.com/allenai/recoma
Summary: A Python package to reason by communicating with agents
Author: Tushar Khot
Requires Python: >=3.9
License: not specified
Keywords: reasoning, communication, language models, tools, LLM
Upload time: 2024-07-24 19:58:22
Requirements: treelib, jsonnet, jinja2, litellm (==1.37.19), openai (==1.30.1), diskcache, tenacity, registrable, sympy, gradio
# ReComA: A Library for *Re*asoning via *Com*municating *A*gents
ReComA is a library designed to enable easy development of solutions for reasoning problems via
communicating agents. It is a generalization of the codebase for
[Decomposed Prompting](https://github.com/allenai/DecomP). The key features of the library:
- A general-purpose framework that implements many existing approaches for reasoning
  via multiple agents --
  [DecomP](https://www.semanticscholar.org/paper/Decomposed-Prompting%3A-A-Modular-Approach-for-Tasks-Khot-Trivedi/07955e96cbd778d0ae2a68f09d073b866dd84c2a),
  [ReACT](https://www.semanticscholar.org/paper/ReAct%3A-Synergizing-Reasoning-and-Acting-in-Language-Yao-Zhao/2d2ca2e54c54748557b8aac7d328ce32ebfe8944),
  [Least-to-Most](https://www.semanticscholar.org/paper/Least-to-Most-Prompting-Enables-Complex-Reasoning-Zhou-Scharli/5437e8adab596d7294124c0e798708e050e25321),
  [Faithful CoT](https://www.semanticscholar.org/paper/Faithful-Chain-of-Thought-Reasoning-LYU-Havaldar/ea0688f9e7dfb0d3c2249486af65209c25809544)
- Can be easily extended to use other control flows (e.g.,
  [Self-Ask](https://www.semanticscholar.org/paper/Measuring-and-Narrowing-the-Compositionality-Gap-in-Press-Zhang/53c20f7bf3fabc88e1403e00241eec009cc01ed8),
  [IRCoT](https://www.semanticscholar.org/paper/Interleaving-Retrieval-with-Chain-of-Thought-for-Trivedi-Balasubramanian/f208ea909fa7f54fea82def9a92fd81dfc758c39))
- Provides an interactive GUI which includes the entire reasoning trace (with underlying prompts) for easy debugging
- Built-in Best-First Search to explore multiple reasoning traces
- Can be used as a pip-installable library in your own codebase
- Configurable via JSONNET files -- no code change needed for many use cases

Table of Contents
===============

* [Setup](#Setup)
* [Running ReComA](#Running-ReComA)
* [Using ReComA](#Using-ReComA-in-your-work)


## Setup

If you want to make changes to this library directly, set it up using conda:
```shell
  conda create -n recoma python=3.9
  conda activate recoma
  pip install -r requirements.txt
```


To install it as a dependency in your own conda environment:
```shell
  pip install -e .
```

**OpenAI Setup**

This library relies on the `OPENAI_API_KEY` environment variable to call GPT-3+ models. Make sure
to set this environment variable:
```shell
  export OPENAI_API_KEY=<key>
```
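If you launch inference from your own Python wrapper, a fail-fast check like the minimal sketch below (plain standard-library Python, not a ReComA API) can save a confusing mid-run failure:

```python
# Minimal sketch: verify the OpenAI key is set before kicking off inference.
import os

if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError(
        "OPENAI_API_KEY is not set; export it before running recoma.run_inference."
    )
```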

## Running ReComA
The library can be used to solve complex reasoning tasks in two modes:

### Demo/Interactive Mode

```shell
 python -m recoma.run_inference \
  --config configs/inference/letter_cat/decomp.jsonnet \
  --output_dir output/letter_cat_decomp/ \
  --gradio_demo
```
This will start an interactive server on http://localhost:7860 for the k<sup>th</sup> letter
concatenation task. Try the following question (no QID/Context needed):

> Take the letters at position 3 of the words in "Reasoning via Communicating Agents" and concatenate them using a space.

The library will use the `text-davinci-002` model with Decomposed Prompting (specified via the input
config file) to answer this question. You can expand the collapsed nodes (indicated with ▶) to see
the full execution trace (along with the prompts).
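For reference, here is a quick standard-Python check of what the correct answer to the sample question should be, assuming "position 3" means the 3rd letter of each word (1-indexed):

```python
# Expected answer for the sample question above (3rd letter of each word).
words = "Reasoning via Communicating Agents".split()
print(" ".join(word[2] for word in words))  # -> a a m e
```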

### Batch Inference Mode

To use the library to produce predictions for an input file (e.g., [the 3rd letter concatenation
dataset with 4 words](https://github.com/allenai/DecomP/blob/main/datasets/letter_cat/n4_eg100_pos2_space.json)):
```shell
 python -m recoma.run_inference \
  --config configs/inference/letter_cat/decomp.jsonnet \
  --output_dir output/letter_cat_decomp/ \
  --input datasets/letter_cat/n4_eg100_pos2_space.json
```

Running this script will populate the output directory with the following (see the inspection sketch after this list):
- `predictions.json`: qid-to-prediction map
- `all_data.jsonl`: Input examples with model predictions and correctness label (using exact match)
- `html_dump/`: Dump of the execution traces for all the examples in HTML format
- `source_config.json`: JSON config used to run this experiment (for future reproducibility)
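A minimal sketch for inspecting these outputs from Python. It assumes `predictions.json` is a flat qid-to-prediction map as described above; the `correct` field name used for `all_data.jsonl` is a guess, so check your own dump:

```python
# Inspect batch-inference outputs (field names beyond predictions.json are assumptions).
import json

out_dir = "output/letter_cat_decomp"

with open(f"{out_dir}/predictions.json") as f:
    predictions = json.load(f)  # {qid: prediction}
print(f"{len(predictions)} predictions; first: {next(iter(predictions.items()))}")

num_total = num_correct = 0
with open(f"{out_dir}/all_data.jsonl") as f:
    for line in f:
        example = json.loads(line)
        num_total += 1
        num_correct += bool(example.get("correct"))  # hypothetical field name
print(f"Exact match: {num_correct}/{num_total}")
```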

## Using ReComA in your work

### Using existing agents
If the provided agents are sufficient for your work, you can use this library by just defining the
configuration files and prompts. See examples in the `configs/` folder.
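Since configs are plain Jsonnet, you can expand one into JSON to see exactly what it specifies before running inference. The sketch below uses the official `jsonnet` Python bindings (already a ReComA dependency) and the config path from the examples above; it is an inspection aid, not part of the ReComA API:

```python
# Expand a Jsonnet config to plain JSON for inspection.
import json
import _jsonnet  # provided by the `jsonnet` package

config_path = "configs/inference/letter_cat/decomp.jsonnet"
config = json.loads(_jsonnet.evaluate_file(config_path))
print(json.dumps(config, indent=2))
```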


### Defining a new agent
If you define a new agent (see the models [README](recoma/models/README.md)), you need to load
it when running inference. Assuming your agents are defined under the package `my_new_agents_pkg`:
```shell
python -m recoma.run_inference \
  --config configs/inference/letter_cat/decomp.jsonnet \
  --output_dir output/letter_cat_decomp/ \
  --input datasets/letter_cat/n4_eg100_pos2_space.json \
  --include_package my_new_agents_pkg
```
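The `--include_package` flag imports your package so that its agents get registered by name. For a rough idea of what registration looks like, here is an illustrative sketch using the `registrable` package (a ReComA dependency); `BaseAgent` is a hypothetical stand-in, so in real code subclass the actual base class from `recoma/models` (see its README):

```python
# Illustrative only: name-based registration with the `registrable` package.
# BaseAgent is a hypothetical stand-in for the real base class in recoma.models.
from registrable import Registrable


class BaseAgent(Registrable):
    """Hypothetical base class; use the one ReComA provides in practice."""


@BaseAgent.register("my_new_agent")
class MyNewAgent(BaseAgent):
    def run(self, question: str) -> str:
        # Toy behavior, just to make the example runnable.
        return question.upper()


# Registered classes can be looked up by name, which is how a Jsonnet
# config can refer to "my_new_agent" without any code changes.
print(BaseAgent.by_name("my_new_agent")().run("hello"))
```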

Please reach out if you have any questions or run into issues.

            
