chunknorris


Name: chunknorris
Version: 1.0.1
Home page: None
Summary: A package for chunking documents from various formats
Upload time: 2024-10-29 13:57:59
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.10
License: None
Keywords: chunk, document, split, html, markdown, pdf, header, rag
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
# Chunk Norris

## Goal

This project aims to improve the chunking of documents from various sources (HTML, PDF, ...).
In the context of Retrieval Augmented Generation (RAG), an optimized chunking method can lead to smaller chunks, meaning:
- **Better relevance of chunks** (and thus easier identification of useful chunks through embedding cosine similarity)
- **Fewer errors** caused by chunks exceeding the API limit in terms of number of tokens
- **Fewer hallucinations** of generation models caused by superfluous information in the prompt
- **Reduced cost**, as the prompt is smaller

## ⬇️ Installation

Using PyPI, just run the following command:

```bash
pip install chunknorris
```

## 🚀 Quick usage

You can directly invoke chunknorris on any **.md**, **.html**, or **.pdf** file by running the following command in your terminal:

```bash
chunknorris --filepath "path/to/myfile.pdf"
```

See ``chunknorris -h`` for available options. Feel free to experiment 🧪!

## ⚙️ How it works

ChunkNorris relies on 3 components:
- **Parsers**: they handle the cleaning and formatting of your input document. You may use any parser suited to your needs (e.g. ``PdfParser`` for parsing PDF documents, ``MarkdownParser`` for Markdown strings).
- **Chunkers**: they use the output of the parser and handle its chunking.
- **Pipelines**: they combine a parser and a chunker, allowing you to output chunks directly from your input documents.

### Parsers

The role of parsers is to take a file or a string as input, and output a clean, formatted string suited for a chunker. As of today, **3 parsers are available**:
- ``MarkdownParser``: for parsing Markdown strings.
- ``HTMLParser``: for parsing HTML-formatted strings.
- ``PdfParser``: for parsing PDF files.

For now, all parsers output a Markdown string. Indeed, Markdown is a great format to use in RAG applications, as it is very well understood by LLMs.
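
For instance, an HTML string can be converted to Markdown before being chunked. Below is a minimal sketch; note that the ``parse_string`` method name is an assumption about the parser API (check the package's docstrings for the exact signature):

```py
from chunknorris.parsers import HTMLParser

parser = HTMLParser()

# A tiny, made-up HTML snippet.
html = "<h1>My page</h1><p>Some content about chunking.</p>"

# NOTE: `parse_string` is an assumption about the parser API; the exact
# method name may differ. The key point is that every parser outputs Markdown.
markdown_output = parser.parse_string(html)
print(markdown_output)  # expected to look like "# My page\n\nSome content about chunking."
```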

### Chunkers

![](images/chunk_method.png)

The role of chunkers is to process the output of parsers in order to obtain relevant chunks of the document. As of today, only ``MarkdownChunker`` is available. Used in conjunction with parsers, it can process a variety of inputs.

The chunking strategy of chunkers is based on several principles:
- **Each chunk must carry homogeneous information.** To this end, chunkers use the document's headers to chunk the document. This helps ensure that a specific piece of information is not split across multiple chunks.
- **Each chunk must keep contextual information.** A document's section might lose its meaning if the reader has no knowledge of its context. Consequently, all the headers of the parent sections are added at the top of the chunk (see the sketch below).
- **All chunks must be of similar sizes.** When attempting to retrieve chunks relevant to a query, embedding models tend to be sensitive to the length of chunks: a chunk whose text content has a length similar to the query is likely to get a high similarity score, while a chunk with longer text content will see its similarity score decrease despite being relevant. To prevent this, chunkers try to keep chunks of similar sizes whenever possible.
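
To make this concrete, here is a small sketch (the document content is made up for illustration). The size limits are lowered so that even this tiny document gets split by headers; the exact chunk boundaries may vary.

```py
from pathlib import Path
from chunknorris.parsers import MarkdownParser
from chunknorris.chunkers import MarkdownChunker
from chunknorris.pipelines import BasePipeline

# A small, made-up document with nested headers.
Path("cooking_guide.md").write_text(
    "# Cooking guide\n"
    "## Vegetables\n"
    "### Carrots\n"
    "Steam the carrots for about ten minutes before serving.\n"
    "### Potatoes\n"
    "Boil the potatoes for about twenty minutes before mashing.\n",
    encoding="utf-8",
)

# Lower the size limits so that this tiny example actually gets split by headers.
chunker = MarkdownChunker(max_chunk_word_count=10, min_chunk_word_count=0)
pipeline = BasePipeline(MarkdownParser(), chunker)
chunks = pipeline.chunk_file(filepath="cooking_guide.md")

# Each chunk is expected to keep the headers of its parent sections at the top,
# e.g. the "Carrots" chunk should read something like:
#   # Cooking guide
#   ## Vegetables
#   ### Carrots
#   Steam the carrots for about ten minutes before serving.
for chunk in chunks:
    print(chunk.get_text())
```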


### Pipelines

Pipelines are the glue that sticks a parser and a chunker together. They use both to process documents and ensure consistent output quality.

## Usage

You may find more detailed examples in the [examples section](link) of the repo. Nevertheless, here is a basic example to get you started, assuming you need to chunk Markdown files.

```py
from chunknorris.parsers import MarkdownParser
from chunknorris.chunkers import MarkdownChunker
from chunknorris.pipelines import BasePipeline

# Instantiate components
parser = MarkdownParser()
chunker = MarkdownChunker()
pipeline = BasePipeline(parser, chunker)

# Get some chunks !
chunks = pipeline.chunk_file(filepath="myfile.md")

# Print or save :
for chunk in chunks:
    print(chunk.get_text())
pipeline.save_chunks(chunks)
```

The ``BasePipeline`` is rather simple: it simply feeds the parser's output into the chunker. While this is enough in most cases, you may sometimes need more advanced strategies.

The ``PdfPipeline``, for example, works better with ``PdfParser``, as it has a ***fallback mechanism*** that chunks the document page by page in case no headers have been found. Here is a basic example of how to use it.

```py
from chunknorris.parsers import PdfParser
from chunknorris.chunkers import MarkdownChunker
from chunknorris.pipelines import PdfPipeline

# Instantiate components
parser = PdfParser()
chunker = MarkdownChunker()
pipeline = PdfPipeline(parser, chunker)

# Get some chunks !
chunks = pipeline.chunk_file(filepath="myfile.pdf")

# Print or save :
for chunk in chunks:
    print(chunk.get_text())
pipeline.save_chunks(chunks)
```

Feel free to experiment with various combinations, or even to implement the pipeline that suits your needs!
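
For instance, here is a sketch swapping the parser while keeping the same pipeline pattern (the file name is just a placeholder):

```py
from chunknorris.parsers import HTMLParser
from chunknorris.chunkers import MarkdownChunker
from chunknorris.pipelines import BasePipeline

# Same pattern as above, but with an HTML input file.
pipeline = BasePipeline(HTMLParser(), MarkdownChunker())
chunks = pipeline.chunk_file(filepath="myfile.html")

for chunk in chunks:
    print(chunk.get_text())
```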


### Advanced usage

Additionally, the chunkers and parsers can take a number of arguments allowing you to modify their behavior:

```py
from chunknorris.chunkers import MarkdownChunker

chunker = MarkdownChunker(
    max_headers_to_use="h4",
    max_chunk_word_count=250,
    hard_max_chunk_word_count=400,
    min_chunk_word_count=15,
)
```

***max_headers_to_use***
(str): The maximum (inclusive) level of headers to take into account for chunking. For example, if "h3" is set, then "h4" and "h5" titles won't be used. Must be a string of the form "hx", with x being the header level. Defaults to "h4".

***max_chunk_word_count***
(int): The maximum size (soft limit, in words) a chunk can be. Chunks bigger than this size will be chunked using lower-level headers, until no lower-level headers are available. Defaults to 200.

***hard_max_chunk_word_count***
(int): The hard maximum number of words a chunk can be. Chunks bigger than this limit will be split into subchunks. ChunkNorris will try to balance the size of the resulting subchunks, using newlines to split. It should be greater than max_chunk_word_count. Defaults to 400.

***min_chunk_word_count***
(int): The minimum number of words for a chunk to be kept. Chunks with fewer words will be discarded. Defaults to 15.
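
These arguments combine naturally with the pipelines shown above. Here is a sketch of a chunker tuned for larger chunks (the values are arbitrary, chosen only for illustration):

```py
from chunknorris.parsers import MarkdownParser
from chunknorris.chunkers import MarkdownChunker
from chunknorris.pipelines import BasePipeline

# A chunker tuned for larger chunks than the defaults.
chunker = MarkdownChunker(
    max_headers_to_use="h3",        # headers below h3 (h4, h5, ...) won't be used for chunking
    max_chunk_word_count=400,       # soft limit before splitting on lower-level headers
    hard_max_chunk_word_count=600,  # hard limit before splitting on newlines
    min_chunk_word_count=30,        # discard very small chunks
)

pipeline = BasePipeline(MarkdownParser(), chunker)
chunks = pipeline.chunk_file(filepath="myfile.md")
```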


### Implementing your own pipeline

#### TODO

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "chunknorris",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": "chunk, document, split, html, markdown, pdf, header, RAG",
    "author": null,
    "author_email": "Wikit <dev@wikit.ai>",
    "download_url": "https://files.pythonhosted.org/packages/14/06/e8b400d1ccb2079cdcdcb091309a42b27d9977f2d5de0fa9238c1c4a2021/chunknorris-1.0.1.tar.gz",
    "platform": null,
    "description": "# Chunk Norris\n\n## Goal\n\nThis project aims at improving the method of chunking documents from various sources (HTML, PDFs, ...).\nIn the context of Retrieval Augmented Generation (RAG), an optimized chunking method might lead to smaller chunks, meaning :\n- **Better relevancy of chunks** (and thus easier identification of useful chunks through embedding cosine similarity)\n- **Less errors** because of chunks exceeding the API limit in terms of number of tokens\n- **Less hallucinations** of generation models because of superfluous information in the prompt\n- **Reduced cost** as the prompt would have reduced size\n\n## \u2b07\ufe0f Installation\n\nUsing Pypi, just run the following command :\n```pip install chunknorris```\n\n## \ud83d\ude80 Quick usage\n\nYou can directly invoke chunknorris on any **.md**, **.html** or **.pdf** file by running the following command in your terminal :\n\n```chunknorris --filepath \"path/to/myfile.pdf\"```\n\nSee ``chunknorris -h`` for available options. Feel free to experiment \ud83e\uddea !\n\n## \u2699\ufe0f How it works\n\nChunkNorris relies on 3 components :\n- **Parsers** : they handle the cleaning and formating of your input document. You may use any parser suited for your need (e.g PdfParser for parsing PDF documents, MarkdownParser for parser)\n- **Chunkers** : they use the output of the parser and handle its chunking.\n- **pipelines**: they combine a parser and a chunker, allowing to output chunks directly from you input documents.\n\n### Parsers\n\nThe role of parsers is to take a file or a string as input, and output a clean formated string suited for a chunker. As of today, **3 parsers are available** : \n- ``MarkdownParser`` : for parsing markdown strings.\n- ``HTMLParser`` : for parsing html-formated strings.\n- ``PdfParser`` : for parsing PDF files.\n\nFor now, all parsers will output a Markdown string. Indeed, markdown is a great format to be use in RAG application as it is very well understood by LLMs.\n\n### Chunkers\n\n![](images/chunk_method.png)\n\nThe role of chunkers is to process the output of parsers in order to obtain relevant chunks of the document. As of today, only ``MarkdownChunker`` is available. Used in conjunction with parsers, it allows to process a various inputs.\n\nThe chunking strategy of chunkers is based on several principles:\n- **Each chunk must carry homogenous information.** To this end, they use the document's headers to chunk the documents. It helps ensuring that a specific piece of information is not splitted across multiple chunks.\n- **Each chunk must keep contextual information.** A document's section might loose its meaning if the reader as no knowledge of its context. Consequently, all the headers of the parents sections are added ad the top of the chunk.\n- **All chunks must be of similar sizes.** Indeed, when attempting to retrieve relevant chunks regarding a query, embedding models tend to be sensitive to the length of chunks. Actually, it is likely that a chunk with a text content of similar length to the query will have a high similarity score, while a chunk with a longer text content will see its similarity score descrease despite its relevancy. To prevent this, chunkers try to keep chunks of similar sizes whenever possible.\n\n\n### Pipelines\n\nPipelines are the glue that sticks together a parser and a chunker. They use both to process documents and ensure constant output quality.\n\n## Usage\n\nYou may find more details examples in the [examples section](link) of the repo. 
Nevertheless, here is a basic example to get you started, assuming you need to chunk Mardown files.\n\n```py\nfrom chunknorris.parsers import MarkdownParser\nfrom chunknorris.chunkers import MarkdownChunker\nfrom chunknorris.pipelines import BasePipeline\n\n# Instanciate components\nparser = MarkdownParser()\nchunker = MarkdownChunker()\npipeline = BasePipeline(parser, chunker)\n\n# Get some chunks !\nchunks = pipeline.chunk_file(filepath=\"myfile.md\")\n\n# Print or save :\nfor chunk in chunks:\n    print(chunk.get_text())\npipeline.save_chunks(chunks)\n```\n\nThe ``BasePipeline`` is rather simple : it simply puts the parsers output into the chunker. While this is enough most in most cases, you may sometime need to use more advanced strategies.\n\nThe ``PdfPipeline`` for example works better with ``PdfParser``, as it has a ***fallback mechanism*** toward chunking the document by page in case no headers have been found. Here is a basic example of how to use it.\n\n```py\nfrom chunknorris.parsers import PdfParser\nfrom chunknorris.chunkers import MarkdownChunker\nfrom chunknorris.pipelines import PdfPipeline\n\n# Instanciate components\nparser = PdfParser()\nchunker = MarkdownChunker()\npipeline = PdfPipeline(parser, chunker)\n\n# Get some chunks !\nchunks = pipeline.chunk_file(filepath=\"myfile.pdf\")\n\n# Print or save :\nfor chunk in chunks:\n    print(chunk.get_text())\npipeline.save_chunks(chunks)\n```\n\nFeel free to experiment with various combinations, or even to implement your the pipeline that suits your needs !.\n\n\n### Advanced usage\n\nAdditionally, the chunkers and parsers can take a number of argument allowing to modifiy their behavior:\n\n```py\nfrom chunknorris.chunkers import MarkdownChunker\n\nchunker = MarkdownChunker(\n    max_headers_to_use=\"h4\",\n    max_chunk_word_count=250,\n    hard_max_chunk_word_count=400,\n    min_chunk_word_count=15,\n)\n```\n\n***max_headers_to_use*** \n(str): The maximum (included) level of headers take into account for chunking. For example, if \"h3\" is set, then \"h4\" and \"h5\" titles won't be used. Must be a string of type \"hx\" with x being the title level. Defaults to \"h4\".\n\n***max_chunk_word_count***\n(int): The maximum size (soft limit, in words) a chunk can be. Chunk bigger that this size will be chunked using lower level headers, until no lower level headers are available. Defaults to 200.\n\n***hard_max_chunk_word_count***\n(int): The hard maximum of number of words a chunk can be. Chunks bigger by this limit will be split into subchunks. ChunkNorris will try to equilibrate the size of resulting subchunks. It uses newlines to split. It should be greater than max_chunk_word_count. Defaults to 400. \n\n***min_chunk_word_count***\n(int): Minimum number of words to consider keeping the chunks. Chunks with less words will be discarded. Defaults to 15.\n\n\n### Implementing your own pipeline\n\n#### TODO\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "A package for chunking documents from various formats",
    "version": "1.0.1",
    "project_urls": {
        "Homepage": "https://gitlab.com/wikit/research-and-development/chunk-norris",
        "Issues": "https://gitlab.com/wikit/research-and-development/chunk-norris/-/issues"
    },
    "split_keywords": [
        "chunk",
        " document",
        " split",
        " html",
        " markdown",
        " pdf",
        " header",
        " rag"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "26fda2ed03de8ae9821bbda7ce1350187a8d78bd13fa4f733b59edbed2bf46e0",
                "md5": "0150f43e7fe0aefaf3a63fa37dc9f776",
                "sha256": "7aede95a1c6ef1d7a34b344f490e20b688af6afcf1e7d01f7ac508973c7a7924"
            },
            "downloads": -1,
            "filename": "chunknorris-1.0.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "0150f43e7fe0aefaf3a63fa37dc9f776",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 44747,
            "upload_time": "2024-10-29T13:57:57",
            "upload_time_iso_8601": "2024-10-29T13:57:57.737542Z",
            "url": "https://files.pythonhosted.org/packages/26/fd/a2ed03de8ae9821bbda7ce1350187a8d78bd13fa4f733b59edbed2bf46e0/chunknorris-1.0.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "1406e8b400d1ccb2079cdcdcb091309a42b27d9977f2d5de0fa9238c1c4a2021",
                "md5": "fb54ce86e8267724935ce09027ca3127",
                "sha256": "f5f2b4703e1a049b149c0407ae4991f1a47943591f411192db051b47350ffee7"
            },
            "downloads": -1,
            "filename": "chunknorris-1.0.1.tar.gz",
            "has_sig": false,
            "md5_digest": "fb54ce86e8267724935ce09027ca3127",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 34296,
            "upload_time": "2024-10-29T13:57:59",
            "upload_time_iso_8601": "2024-10-29T13:57:59.406395Z",
            "url": "https://files.pythonhosted.org/packages/14/06/e8b400d1ccb2079cdcdcb091309a42b27d9977f2d5de0fa9238c1c4a2021/chunknorris-1.0.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-10-29 13:57:59",
    "github": false,
    "gitlab": true,
    "bitbucket": false,
    "codeberg": false,
    "gitlab_user": "wikit",
    "gitlab_project": "research-and-development",
    "lcname": "chunknorris"
}
        