llama-index-readers-docugami

Name: llama-index-readers-docugami
Version: 0.1.4
Summary: llama-index readers docugami integration
Upload time: 2024-03-22 22:04:49
Maintainer: tjaffri
Author: Your Name
Requires-Python: <4.0,>=3.8.1
License: MIT
Keywords: doc, docugami, docx, pdf, xml
# Docugami Loader

```bash
pip install llama-index-readers-docugami
```

This loader takes in IDs of PDF, DOCX or DOC files processed by [Docugami](https://docugami.com) and returns nodes in a Document XML Knowledge Graph for each document. This is a rich representation that includes the semantic and structural characteristics of various chunks in the document as an XML tree. Entire sets of documents are processed, resulting in forests of XML semantic trees.
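To make that tree representation concrete, the sketch below walks a toy XML fragment of the kind described above. The element names are invented for illustration and do not reflect Docugami's actual schema.

```python
import xml.etree.ElementTree as ET

# Toy XML fragment standing in for a semantic chunk tree; the tag names
# are illustrative only, not Docugami's actual schema.
TOY_DOC = """
<Lease>
  <Parties>
    <Landlord>Acme Properties LLC</Landlord>
    <Tenant>Widgets Inc</Tenant>
  </Parties>
  <RenewalDate>2025-01-01</RenewalDate>
</Lease>
"""


def leaf_chunks(xml_text: str) -> dict:
    """Map each leaf element's tag to its text content."""
    root = ET.fromstring(xml_text)
    return {
        el.tag: (el.text or "").strip()
        for el in root.iter()
        if len(el) == 0
    }


print(leaf_chunks(TOY_DOC))
```

Each leaf element corresponds to a small semantic chunk, while interior elements group chunks into larger sections, which is what makes hierarchical queries over the tree possible.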

## Pre-requisites

1. Create a Docugami workspace: [http://www.docugami.com](http://www.docugami.com) (free trials available)
2. Add your documents (PDF, DOCX or DOC) and allow Docugami to ingest and cluster them into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. There is no fixed set of document types supported by the system; the clusters created depend on your particular documents, and you can [change the docset assignments](https://help.docugami.com/home/working-with-the-doc-sets-view) later.
3. Create an access token via the Developer Playground for your workspace. Detailed instructions: [https://help.docugami.com/home/docugami-api](https://help.docugami.com/home/docugami-api)
4. Explore the Docugami API at [https://api-docs.docugami.com](https://api-docs.docugami.com) to get a list of your processed docset IDs, or just the document IDs for a particular docset.
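Step 4 can be sketched as a plain HTTP call. The base URL and response shape below are assumptions based on the public API reference at [https://api-docs.docugami.com](https://api-docs.docugami.com); verify them against the current docs before relying on them.

```python
import json
from urllib.request import Request, urlopen

# Assumed base URL, taken from the public Docugami API reference;
# check https://api-docs.docugami.com for the current version prefix.
API_BASE = "https://api.docugami.com/v1preview1"


def docsets_request(token: str) -> Request:
    """Build a GET /docsets request carrying the bearer token."""
    return Request(
        f"{API_BASE}/docsets",
        headers={"Authorization": f"Bearer {token}"},
    )


def list_docsets(token: str) -> list:
    """Fetch the docsets visible to this access token."""
    with urlopen(docsets_request(token)) as resp:
        # The "docsets" key is an assumption about the response envelope.
        return json.load(resp).get("docsets", [])
```

Calling `list_docsets(token)` with the access token from step 3 should return entries whose `id` fields are the docset IDs the loader expects.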

## Usage

To use this loader, pass in a Docugami Doc Set ID and, optionally, a list of Document IDs (by default, all documents in the Doc Set are loaded).

```python
from llama_index.readers.docugami import DocugamiReader

docset_id = "tjwrr2ekqkc3"
document_ids = ["ui7pkriyckwi", "1be3o7ch10iy"]

loader = DocugamiReader()
documents = loader.load_data(docset_id=docset_id, document_ids=document_ids)
```
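The reader also needs your access token. It is assumed here, based on how Docugami's other integrations behave, that the token is read from the `DOCUGAMI_API_KEY` environment variable; set it before instantiating the reader.

```python
import os

# Assumption: DocugamiReader reads the access token from DOCUGAMI_API_KEY,
# as Docugami's other integrations do. "<your-access-token>" is a
# placeholder, not a real credential.
os.environ.setdefault("DOCUGAMI_API_KEY", "<your-access-token>")
```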

This loader is designed to load data into [LlamaIndex](https://github.com/run-llama/llama_index/tree/main/llama_index) and/or to be used as a Tool in a [LangChain](https://github.com/hwchase17/langchain) Agent. See [here](https://github.com/emptycrown/llama-hub/tree/main) for examples.

See more information about how to use Docugami with LangChain in the [LangChain docs](https://python.langchain.com/docs/ecosystem/integrations/docugami).

# Advantages vs Other Chunking Techniques

Appropriate chunking of your documents is critical for effective retrieval. Many chunking techniques exist, including simple ones that rely on whitespace and recursive splitting based on character length. Docugami offers a different approach:

1. **Intelligent Chunking:** Docugami breaks down every document into a hierarchical semantic XML tree of chunks of varying sizes, from single words or numerical values to entire sections. These chunks follow the semantic contours of the document, providing a more meaningful representation than arbitrary length or simple whitespace-based chunking.
2. **Structured Representation:** In addition, the XML tree indicates the structural contours of every document, using attributes denoting headings, paragraphs, lists, tables, and other common elements, and does that consistently across all supported document formats, such as scanned PDFs or DOCX files. It appropriately handles long-form document characteristics like page headers/footers or multi-column flows for clean text extraction.
3. **Semantic Annotations:** Chunks are annotated with semantic tags that are coherent across the document set, facilitating consistent hierarchical queries across multiple documents, even if they are written and formatted differently. For example, in a set of lease agreements, you can easily identify key provisions like the Landlord, Tenant, or Renewal Date, as well as more complex information such as the wording of any sub-lease provision or whether a specific jurisdiction has an exception section within a Termination Clause.
4. **Additional Metadata:** Chunks are also annotated with any additional metadata a user has created in Docugami. This metadata can be used for high-accuracy Document QA without context-window restrictions. See the detailed code walk-through in [this notebook](https://github.com/docugami/llama-hub/blob/main/llama_hub/docugami/docugami.ipynb).
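As a small illustration of point 3, loaded chunks can be filtered by the structural metadata attached to them. This sketch uses plain dicts to stand in for the loaded Document objects, and the metadata keys (`structure`, `tag`) are assumptions about what the reader attaches to each chunk; inspect `documents[0].metadata` to see the actual keys for your docset.

```python
# Hedged sketch: filter chunks by structural metadata. The "structure"
# and "tag" keys are assumptions, and plain dicts stand in for the
# Document objects the reader actually returns.


def headings_only(documents):
    """Keep only chunks whose structure metadata marks them as headings."""
    return [
        d for d in documents
        if "h1" in d.get("metadata", {}).get("structure", "").split()
    ]


sample = [
    {"text": "TERMINATION",
     "metadata": {"structure": "h1", "tag": "Termination"}},
    {"text": "Either party may ...",
     "metadata": {"structure": "p", "tag": "chunk"}},
]

print([d["text"] for d in headings_only(sample)])  # → ['TERMINATION']
```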

            
