# ReComA: A Library for *Re*asoning via *Com*municating *A*gents
ReComA is a library designed to enable easy development of solutions for reasoning problems via
communicating agents. It is a generalization of the codebase for
[Decomposed Prompting](https://github.com/allenai/DecomP). The key features of the library:
- A general-purpose framework that implements many existing approaches for reasoning
via multiple agents --
[DecomP](https://www.semanticscholar.org/paper/Decomposed-Prompting%3A-A-Modular-Approach-for-Tasks-Khot-Trivedi/07955e96cbd778d0ae2a68f09d073b866dd84c2a),
[ReACT](https://www.semanticscholar.org/paper/ReAct%3A-Synergizing-Reasoning-and-Acting-in-Language-Yao-Zhao/2d2ca2e54c54748557b8aac7d328ce32ebfe8944),
[Least-to-Most](https://www.semanticscholar.org/paper/Least-to-Most-Prompting-Enables-Complex-Reasoning-Zhou-Scharli/5437e8adab596d7294124c0e798708e050e25321),
[Faithful CoT](https://www.semanticscholar.org/paper/Faithful-Chain-of-Thought-Reasoning-LYU-Havaldar/ea0688f9e7dfb0d3c2249486af65209c25809544)
- Can be easily extended to use other control flows (e.g.,
[Self-Ask](https://www.semanticscholar.org/paper/Measuring-and-Narrowing-the-Compositionality-Gap-in-Press-Zhang/53c20f7bf3fabc88e1403e00241eec009cc01ed8),
[IRCoT](https://www.semanticscholar.org/paper/Interleaving-Retrieval-with-Chain-of-Thought-for-Trivedi-Balasubramanian/f208ea909fa7f54fea82def9a92fd81dfc758c39))
- Provides an interactive GUI which includes the entire reasoning trace (with underlying prompts) for easy debugging
- Built-in Best-First Search to explore multiple reasoning traces
- Can be used as a pip-installable library in your own codebase
- Configurable via JSONNET files -- no code change needed for many use cases
Table of Contents
===============
* [Setup](#Setup)
* [Running ReComA](#Running-ReComA)
* [Using ReComA](#Using-ReComA-in-your-work)
## Setup
If you want to make changes directly in this library, set it up using conda:
```shell
conda create -n recoma python=3.9
conda activate recoma
pip install -r requirements.txt
```
To install it as a dependency in your own conda environment:
```shell
pip install -e .
```
**OpenAI Setup**
This library relies on the `OPENAI_API_KEY` environment variable to call GPT-3+ models. Make sure
to set this environment variable:
```shell
export OPENAI_API_KEY=<key>
```
## Running ReComA
The library can be used to solve complex reasoning tasks in two modes:
### Demo/Interactive Mode
```shell
python -m recoma.run_inference \
--config configs/inference/letter_cat/decomp.jsonnet \
--output_dir output/letter_cat_decomp/ \
--gradio_demo
```
This will start an interactive server on http://localhost:7860 for the k<sup>th</sup> letter
concatenation task. Try the following question (no QID/Context needed):
> Take the letters at position 3 of the words in "Reasoning via Communicating Agents" and concatenate them using a space.
The library will use the `text-davinci-002` model with Decomposed Prompting (specified via the input
config file) to answer this question. You can expand the collapsed nodes (indicated with ▶) to see
the full execution trace (along with the prompts).
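For reference, the expected behavior of the task itself can be sketched in plain Python. This illustrates the k<sup>th</sup> letter concatenation task (with 1-indexed positions, as in the question above), not ReComA's internals:

```python
def kth_letter_concat(sentence: str, k: int, sep: str = " ") -> str:
    """Concatenate the k-th letter (1-indexed) of each word, joined by `sep`."""
    return sep.join(word[k - 1] for word in sentence.split())

# Third letters of "Reasoning via Communicating Agents" -> "a a m e"
print(kth_letter_concat("Reasoning via Communicating Agents", 3))
```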
### Batch Inference Mode
To use the library to produce predictions for an input file (e.g., [the 3rd letter concatenation
dataset with 4 words](https://github.com/allenai/DecomP/blob/main/datasets/letter_cat/n4_eg100_pos2_space.json)):
```shell
python -m recoma.run_inference \
--config configs/inference/letter_cat/decomp.jsonnet \
--output_dir output/letter_cat_decomp/ \
--input datasets/letter_cat/n4_eg100_pos2_space.json
```
Running this script will populate the output directory with:
- `predictions.json`: qid-to-prediction map
- `all_data.jsonl`: Input examples with model predictions and correctness label (using exact match)
- `html_dump/`: Dump of the execution traces for all the examples in HTML format
- `source_config.json`: JSON config used to run this experiment (for future reproducibility)
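If you want to post-process these outputs yourself, a minimal sketch like the following can compute accuracy from the per-example correctness labels. The field name `"correct"` is an assumption here; check your `all_data.jsonl` for the exact schema:

```python
import json

def accuracy_from_jsonl(lines, correct_key="correct"):
    """Fraction of examples marked correct in an all_data.jsonl-style dump."""
    records = [json.loads(line) for line in lines if line.strip()]
    if not records:
        return 0.0
    return sum(bool(r.get(correct_key)) for r in records) / len(records)

# Usage (path is illustrative):
# with open("output/letter_cat_decomp/all_data.jsonl") as f:
#     print(accuracy_from_jsonl(f))
```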
## Using ReComA in your work
### Using existing agents
If the provided agents are sufficient for your work, you can use this library by just defining the
configuration files and prompts. See examples in the `configs/` folder.
### Defining a new agent
If you define a new agent (see the models [README](recoma/models/README.md)), you need to load
it when running inference. Assuming your agents are defined under the package `my_new_agents_pkg`:
```shell
python -m recoma.run_inference \
--config configs/inference/letter_cat/decomp.jsonnet \
--output_dir output/letter_cat_decomp/ \
--input datasets/letter_cat/n4_eg100_pos2_space.json \
--include_package my_new_agents_pkg
```
Please reach out if there are any questions or issues.