llama-index-node-parser-topic

Name: llama-index-node-parser-topic
Version: 0.1.0
Summary: llama-index node_parser topic node parser integration
Author: llama-index
License: MIT
Requires Python: <4.0,>=3.8.1
Upload time: 2024-09-20 20:28:19
Requirements: none recorded
# LlamaIndex Node_Parser Integration: TopicNodeParser

Implements the topic node parser described in [MedGraphRAG](https://arxiv.org/html/2408.04187), a graph-based Retrieval-Augmented Generation framework that aims to improve LLM performance in the medical domain by generating evidence-based results, improving safety and reliability when handling private medical data.

`TopicNodeParser` implements an approximate version of the chunking technique described in the paper.

Here is the technique as outlined in the paper:

```
Large medical documents often contain multiple themes or diverse content. To process these effectively, we first segment the document into data chunks that conform to the context limitations of Large Language Models (LLMs). Traditional methods such as chunking based on token size or fixed characters typically fail to detect subtle shifts in topics accurately. Consequently, these chunks may not fully capture the intended context, leading to a loss in the richness of meaning.

To enhance accuracy, we adopt a mixed method of character separation coupled with topic-based segmentation. Specifically, we utilize static characters (line break symbols) to isolate individual paragraphs within the document. Following this, we apply a derived form of the text for semantic chunking. Our approach includes the use of proposition transfer, which extracts standalone statements from raw text (Chen et al., 2023). Through proposition transfer, each paragraph is transformed into self-sustaining statements. We then conduct a sequential analysis of the document to assess each proposition, deciding whether it should merge with an existing chunk or initiate a new one. This decision is made via a zero-shot approach by an LLM. To reduce noise generated by sequential processing, we implement a sliding window technique, managing five paragraphs at a time. We continuously adjust the window by removing the first paragraph and adding the next, maintaining focus on topic consistency. We set a hard threshold that the longest chunk cannot exceed the context length limitation of the LLM. After chunking the document, we construct a graph on each individual data chunk.
```
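The key step is the sequential pass in which the LLM decides, for each proposition, whether it continues the topic of the current chunk or starts a new one, constrained by a sliding window and a hard size cap. Below is a minimal, hypothetical sketch of that loop; `propositions` and `same_topic` are illustrative placeholders, not the parser's actual internals.

```python
from typing import Callable, List


def topic_chunk(
    propositions: List[str],
    same_topic: Callable[[str, str], bool],  # zero-shot LLM judgment (assumed)
    max_chunk_size: int = 1000,
    window_size: int = 5,
) -> List[str]:
    """Hypothetical sliding-window topic chunking, not TopicNodeParser internals."""
    chunks: List[str] = []
    current: List[str] = []
    for prop in propositions:
        # Compare the new proposition against a sliding window of the most
        # recent propositions rather than the whole chunk, to limit noise.
        window = " ".join(current[-window_size:])
        chunk_len = sum(len(p) + 1 for p in current)
        if current and (
            not same_topic(window, prop)
            or chunk_len + len(prop) > max_chunk_size  # hard length cap
        ):
            chunks.append(" ".join(current))
            current = []
        current.append(prop)
    if current:
        chunks.append(" ".join(current))
    return chunks
```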

## Installation

```
pip install llama-index-node-parser-topic
```

## Usage

```python
from llama_index.core import Document
from llama_index.node_parser.topic import TopicNodeParser

node_parser = TopicNodeParser.from_defaults(
    llm=llm,  # any LlamaIndex LLM instance (e.g. set via Settings.llm)
    max_chunk_size=1000,
    similarity_method="llm",  # can be "llm" or "embedding"
    # embed_model=embed_model,  # used for "embedding" similarity_method
    # similarity_threshold=0.8,  # used for "embedding" similarity_method
    window_size=2,  # paper suggests window_size=5
)

nodes = node_parser(
    [
        Document(text="document text 1"),
        Document(text="document text 2"),
    ],
)
```
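With `similarity_method="embedding"`, topic boundaries are decided by embedding similarity against `similarity_threshold` instead of an LLM call per proposition. The sketch below shows that configuration end to end, assuming the OpenAI integrations (`llama-index-llms-openai`, `llama-index-embeddings-openai`) are installed; the model names are illustrative choices, not requirements.

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.node_parser.topic import TopicNodeParser

llm = OpenAI(model="gpt-4o-mini")  # illustrative model choice
embed_model = OpenAIEmbedding(model="text-embedding-3-small")

node_parser = TopicNodeParser.from_defaults(
    llm=llm,  # used for proposition extraction
    embed_model=embed_model,
    similarity_method="embedding",
    similarity_threshold=0.8,
    max_chunk_size=1000,
    window_size=5,
)

nodes = node_parser.get_nodes_from_documents(
    [Document(text="document text 1"), Document(text="document text 2")]
)

# The resulting topic-based nodes can be indexed like any other nodes.
index = VectorStoreIndex(nodes, embed_model=embed_model)
```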

            
