| Name | llama-index-agent-coa |
| Version | 0.2.0 |
| Summary | llama-index agent coa integration |
| Author | Logan Markewich |
| Maintainer | jerryjliu |
| License | MIT |
| Requires Python | <4.0,>=3.8.1 |
| Upload time | 2024-08-22 03:40:04 |
| Requirements | none recorded |
# LlamaIndex Agent Integration: CoA

# Chain-of-Abstraction Agent Pack
`pip install llama-index-agent-coa`
The chain-of-abstraction (CoA) agent integration implements a generalized version of the strategy described in the [original CoA paper](https://arxiv.org/abs/2401.17464).
By prompting the LLM to write function calls in a chain-of-thought format, we can execute both simple and complex combinations of the function calls needed to complete a task.
The LLM is prompted to write a response containing function-call placeholders. For example, a CoA plan might look like:
```
After buying the apples, Sally has [FUNC add(3, 2) = y1] apples.
Then, the wizard casts a spell to multiply the number of apples by 3,
resulting in [FUNC multiply(y1, 3) = y2] apples.
```
From there, the function calls can be parsed into a dependency graph, and executed.
Then, the values in the CoA are replaced with their actual results.
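To make the parse-and-execute step concrete, here is a minimal, illustrative sketch (not the pack's actual parser) that extracts `[FUNC name(args) = var]` placeholders with a regex, executes each call once all of its argument dependencies are available, and substitutes the results back into the text:

```python
import re

# Stand-in registry for the agent's tools.
FUNCS = {"add": lambda a, b: a + b, "multiply": lambda a, b: a * b}

# Matches placeholders like [FUNC add(3, 2) = y1].
PATTERN = re.compile(r"\[FUNC (\w+)\((.*?)\) = (\w+)\]")


def execute_plan(plan: str) -> str:
    """Execute FUNC placeholders in dependency order and fill in results."""
    results: dict[str, int] = {}
    calls = PATTERN.findall(plan)
    while calls:
        progressed = False
        for name, arg_str, out_var in list(calls):
            args, ready = [], True
            for raw in (a.strip() for a in arg_str.split(",")):
                if raw in results:  # depends on an earlier call's output
                    args.append(results[raw])
                elif raw.lstrip("-").isdigit():  # literal integer argument
                    args.append(int(raw))
                else:  # depends on a call that has not run yet
                    ready = False
                    break
            if ready:
                results[out_var] = FUNCS[name](*args)
                calls.remove((name, arg_str, out_var))
                progressed = True
        if not progressed:
            raise ValueError("unresolvable dependency in plan")
    # Replace each placeholder with its computed value.
    return PATTERN.sub(lambda m: str(results[m.group(3)]), plan)


plan = (
    "After buying the apples, Sally has [FUNC add(3, 2) = y1] apples. "
    "The wizard's spell results in [FUNC multiply(y1, 3) = y2] apples."
)
print(execute_plan(plan))
```

The `multiply` call consumes `y1`, so it is deferred until `add` has produced it; a real implementation would build the full dependency graph up front rather than retrying in a loop.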
As an extension to the original paper, we also run the LLM a final time, to rewrite the response in a more readable and user-friendly way.
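This final pass can be pictured as one more completion call over the filled-in plan. A hypothetical prompt template for it (not the pack's actual prompt) might look like:

```python
# Hypothetical refine prompt; the pack's real template may differ.
REFINE_PROMPT = (
    "The following text is a reasoning trace in which function-call "
    "placeholders have been replaced with their computed results.\n"
    "Rewrite it as a concise, user-friendly answer:\n\n{filled_plan}"
)

filled_plan = (
    "After buying the apples, Sally has 5 apples. "
    "The wizard's spell results in 15 apples."
)
prompt = REFINE_PROMPT.format(filled_plan=filled_plan)
print(prompt)
```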
**NOTE:** In the original paper, the authors fine-tuned an LLM specifically for this task, and for specific functions and datasets. Without that fine-tuning, only capable LLMs (OpenAI, Anthropic, etc.) are likely to be reliable here.
A full example notebook is [also provided](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/agent/coa_agent.ipynb).
## Code Usage
`pip install llama-index-agent-coa`
First, set up some tools (these could be function tools, query engines, etc.):
```python
from llama_index.core.tools import QueryEngineTool, FunctionTool


def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b


# Assumes `index` is an existing index (e.g. a VectorStoreIndex).
query_engine = index.as_query_engine(...)

function_tool = FunctionTool.from_defaults(fn=add)
query_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine, name="...", description="..."
)
```
Next, create the pack with the tools, and run it!
```python
from llama_index.packs.agent.coa import CoAAgentPack
from llama_index.llms.openai import OpenAI

pack = CoAAgentPack(
    tools=[function_tool, query_tool], llm=OpenAI(model="gpt-4")
)

print(pack.run("What is 1245 + 4321?"))
```
See the example notebook for [more thorough details](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/agent/coa_agent.ipynb).
## Raw data
{
"_id": null,
"home_page": null,
"name": "llama-index-agent-coa",
"maintainer": "jerryjliu",
"docs_url": null,
"requires_python": "<4.0,>=3.8.1",
"maintainer_email": null,
"keywords": null,
"author": "Logan Markewich",
"author_email": "logan@runllama.ai",
"download_url": "https://files.pythonhosted.org/packages/fb/35/8d88d02d73e35b29aec8c636f762ba37853468018e37d70ba5a79c507995/llama_index_agent_coa-0.2.0.tar.gz",
"platform": null,
"description": "# LlamaIndex Agent Integration: Coa\n\n# Chain-of-Abstraction Agent Pack\n\n`pip install llama-index-agent-coa`\n\nThe chain-of-abstraction (CoA) agent integration implements a generalized version of the strategy described in the [origin CoA paper](https://arxiv.org/abs/2401.17464).\n\nBy prompting the LLM to write function calls in a chain-of-thought format, we can execute both simple and complex combinations of function calls needed to execute a task.\n\nThe LLM is prompted to write a response containing function calls, for example, a CoA plan might look like:\n\n```\nAfter buying the apples, Sally has [FUNC add(3, 2) = y1] apples.\nThen, the wizard casts a spell to multiply the number of apples by 3,\nresulting in [FUNC multiply(y1, 3) = y2] apples.\n```\n\nFrom there, the function calls can be parsed into a dependency graph, and executed.\n\nThen, the values in the CoA are replaced with their actual results.\n\nAs an extension to the original paper, we also run the LLM a final time, to rewrite the response in a more readable and user-friendly way.\n\n**NOTE:** In the original paper, the authors fine-tuned an LLM specifically for this, and also for specific functions and datasets. As such, only capabale LLMs (OpenAI, Anthropic, etc.) 
will be (hopefully) reliable for this without finetuning.\n\nA full example notebook is [also provided](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/agent/coa_agent.ipynb).\n\n## Code Usage\n\n`pip install llama-index-agent-coa`\n\nFirst, setup some tools (could be function tools, query engines, etc.)\n\n```python\nfrom llama_index.core.tools import QueryEngineTool, FunctionTool\n\n\ndef add(a: int, b: int) -> int:\n \"\"\"Add two numbers together.\"\"\"\n return a + b\n\n\nquery_engine = index.as_query_engine(...)\n\nfunction_tool = FunctionTool.from_defaults(fn=add)\nquery_tool = QueryEngineTool.from_defaults(\n query_engine=query_engine, name=\"...\", description=\"...\"\n)\n```\n\nNext, create the pack with the tools, and run it!\n\n```python\nfrom llama_index.packs.agent.coa import CoAAgentPack\nfrom llama_index.llms.openai import OpenAI\n\npack = CoAAgentPack(\n tools=[function_tool, query_tool], llm=OpenAI(model=\"gpt-4\")\n)\n\nprint(pack.run(\"What is 1245 + 4321?\"))\n```\n\nSee the example notebook for [more thorough details](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/agent/coa_agent.ipynb).\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "llama-index agent coa integration",
"version": "0.2.0",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "6086f43c0a1f81191cf976d63b42e3675d06ffd2522fdd83bf373e10ee4efa80",
"md5": "46d96507373e82f56e1d00b8a0793c10",
"sha256": "b1da045cdd95bbf7747ded1c1dbbfcf4c9dbe97559753764607b7505d06afdea"
},
"downloads": -1,
"filename": "llama_index_agent_coa-0.2.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "46d96507373e82f56e1d00b8a0793c10",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.8.1",
"size": 8294,
"upload_time": "2024-08-22T03:40:02",
"upload_time_iso_8601": "2024-08-22T03:40:02.849554Z",
"url": "https://files.pythonhosted.org/packages/60/86/f43c0a1f81191cf976d63b42e3675d06ffd2522fdd83bf373e10ee4efa80/llama_index_agent_coa-0.2.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "fb358d88d02d73e35b29aec8c636f762ba37853468018e37d70ba5a79c507995",
"md5": "df83d12e4a69e96d17e6f50cc01106f9",
"sha256": "ac68cd7929edaf1629b9aba5103f8c921d6df6fb4833ca3b6ec32c5bf9351c53"
},
"downloads": -1,
"filename": "llama_index_agent_coa-0.2.0.tar.gz",
"has_sig": false,
"md5_digest": "df83d12e4a69e96d17e6f50cc01106f9",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.8.1",
"size": 8035,
"upload_time": "2024-08-22T03:40:04",
"upload_time_iso_8601": "2024-08-22T03:40:04.095337Z",
"url": "https://files.pythonhosted.org/packages/fb/35/8d88d02d73e35b29aec8c636f762ba37853468018e37d70ba5a79c507995/llama_index_agent_coa-0.2.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-08-22 03:40:04",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "llama-index-agent-coa"
}