llama-index-packs-searchain

Name: llama-index-packs-searchain
Version: 0.3.0
Summary: llama-index packs searchain integration
Upload time: 2024-11-18 01:30:53
Author: Your Name
Requires Python: <4.0,>=3.9
License: MIT
# LlamaIndex Packs Integration: Searchain

This LlamaPack implements SearChain, a framework that structures the interaction between a large language model (LLM) and information retrieval (IR) as a global reasoning chain called a Chain-of-Query (CoQ).

This follows the idea in the paper [Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks](https://arxiv.org/abs/2304.14732).

Making content generated by large language models (LLMs) such as ChatGPT accurate, trustworthy, and traceable is critical, especially for knowledge-intensive tasks. Introducing information retrieval (IR) to supply the LLM with external knowledge can address this, but where and how to introduce IR is a major challenge. SearChain tackles it with three operations. First, the LLM generates a global reasoning chain, the Chain-of-Query (CoQ), in which each node contains an IR-oriented query and the LLM's answer to that query. Second, IR verifies the answer at each node of the CoQ: when retrieval yields high confidence, it corrects answers that are inconsistent with the retrieved information, which improves credibility. Third, the LLM can mark knowledge it is missing in the CoQ, and IR can provide that knowledge to the LLM. Together, these operations improve the accuracy of the LLM on complex knowledge-intensive tasks in terms of both reasoning ability and knowledge. This Pack implements the above 🤗!
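The second operation, IR-based verification, can be sketched as a simple pass over the chain. This is an illustrative toy, not the pack's API: `CoQNode`, `verify_chain`, the `retrieve` callable, and the 0.9 confidence threshold are all hypothetical names chosen for the sketch.

```python
from dataclasses import dataclass


@dataclass
class CoQNode:
    """One node of a Chain-of-Query: an IR-oriented query and the LLM's answer."""

    query: str
    answer: str


def verify_chain(chain, retrieve, threshold=0.9):
    """Replace a node's answer when retrieval disagrees with high confidence.

    `retrieve` is assumed to return (passage, extracted_answer, confidence)
    for a query; this mirrors the verification step described above.
    """
    corrected = []
    for node in chain:
        _passage, ir_answer, confidence = retrieve(node.query)
        if confidence > threshold and ir_answer != node.answer:
            # IR is confident and disagrees: trust the retrieved answer.
            node = CoQNode(node.query, ir_answer)
        corrected.append(node)
    return corrected
```

In the real pack this role is played by the DPR reader and cross-encoder models passed to `SearChainPack`; the sketch only shows the control flow.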

You can find example usage in the examples folder.

This implementation is adapted from the author's implementation. You can find the official code repository [here](https://github.com/xsc1234/Search-in-the-Chain).

## Code Usage

First, download the SearChainPack using the following code:

```python
from llama_index.core.llama_pack import download_llama_pack

download_llama_pack("SearChainPack", "./searchain_pack")
```

Next, load and initialize a `SearChainPack` object:

```python
from searchain_pack.base import SearChainPack

searchain = SearChainPack(
    data_path="data",
    dprtokenizer_path="dpr_reader_multi",
    dprmodel_path="dpr_reader_multi",
    crossencoder_name_or_path="Quora_cross_encoder",
)
```

Relevant data can be found [here](https://www.kaggle.com/datasets/anastasiajia/searchain/data). You can then run SearChain as follows:

```python
# execute returns the index to resume from, or -1 once all questions are done
start_idx = 0
while start_idx != -1:
    start_idx = searchain.execute(
        "/hotpotqa/hotpot_dev_fullwiki_v1_line.json", start_idx=start_idx
    )
```
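The loop above assumes a resume contract: each call processes items from `start_idx` onward and returns either the index to resume from (e.g. after an interruption) or -1 when the whole file has been handled. A minimal self-contained sketch of that contract, where `execute_stub` is a hypothetical stand-in and not the pack's API:

```python
processed = []


def execute_stub(path: str, start_idx: int) -> int:
    """Stand-in for the pack's execute: process items from start_idx on.

    Returns the index to resume from after an interruption, or -1 when done.
    """
    items = ["q0", "q1", "q2", "q3"]
    for idx in range(start_idx, len(items)):
        if idx == 2 and start_idx == 0:
            return idx  # simulate an interruption partway through the file
        processed.append(items[idx])
    return -1  # every item processed


# Driving the stub with the same loop pattern as above resumes cleanly:
start_idx = 0
while start_idx != -1:
    start_idx = execute_stub("questions.json", start_idx)
```

After the loop, `processed` contains all four items even though the first call stopped early, which is why the driver loops until -1 rather than calling `execute` once.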

            
