semchunk 2.2.1 (PyPI)

- Summary: A fast and lightweight Python library for splitting text into semantically meaningful chunks.
- Uploaded: 2024-12-17 04:52:55
- Requires Python: >=3.9
- License: MIT
- Keywords: chunk, chunker, chunking, chunks, nlp, split, splits, splitter, splitting, text

# semchunk
<a href="https://pypi.org/project/semchunk/" alt="PyPI Version"><img src="https://img.shields.io/pypi/v/semchunk"></a> <a href="https://github.com/umarbutler/semchunk/actions/workflows/ci.yml" alt="Build Status"><img src="https://img.shields.io/github/actions/workflow/status/umarbutler/semchunk/ci.yml?branch=main"></a> <a href="https://app.codecov.io/gh/umarbutler/semchunk" alt="Code Coverage"><img src="https://img.shields.io/codecov/c/github/umarbutler/semchunk"></a> <a href="https://pypistats.org/packages/semchunk" alt="Downloads"><img src="https://img.shields.io/pypi/dm/semchunk"></a>

`semchunk` is a fast and lightweight Python library for splitting text into semantically meaningful chunks.

Owing to its complex yet highly efficient chunking algorithm, `semchunk` is both more semantically accurate than [`langchain.text_splitter.RecursiveCharacterTextSplitter`](https://python.langchain.com/v0.2/docs/how_to/recursive_text_splitter/#splitting-text-from-languages-without-word-boundaries) (see [How It Works 🔍](https://github.com/umarbutler/semchunk#how-it-works-)) and is also over 80% faster than [`semantic-text-splitter`](https://pypi.org/project/semantic-text-splitter/) (see the [Benchmarks 📊](https://github.com/umarbutler/semchunk#benchmarks-)).

## Installation 📦
`semchunk` may be installed with `pip`:
```bash
pip install semchunk
```

## Usage 👩‍💻
The code snippet below demonstrates how text can be chunked with `semchunk`:
```python
import semchunk
from transformers import AutoTokenizer # Neither `transformers` nor `tiktoken` is required;
import tiktoken                        # they are imported here for demonstration purposes.

chunk_size = 2 # A low chunk size is used here for demonstration purposes. Keep in mind that
               # `semchunk` doesn't take special tokens into account unless you're using a
               # custom token counter, so you probably want to reduce your chunk size by the
               # number of special tokens added by your tokenizer.
text = 'The quick brown fox jumps over the lazy dog.'

# As you can see below, `semchunk.chunkerify` will accept the names of all OpenAI models, OpenAI
# `tiktoken` encodings and Hugging Face models (in that order of precedence), along with custom
# tokenizers that have an `encode()` method (such as `tiktoken`, `transformers` and `tokenizers`
# tokenizers) and finally any function that can take a text and return the number of tokens in it.
chunker = semchunk.chunkerify('umarbutler/emubert', chunk_size) or \
          semchunk.chunkerify('gpt-4', chunk_size) or \
          semchunk.chunkerify('cl100k_base', chunk_size) or \
          semchunk.chunkerify(AutoTokenizer.from_pretrained('umarbutler/emubert'), chunk_size) or \
          semchunk.chunkerify(tiktoken.encoding_for_model('gpt-4'), chunk_size) or \
          semchunk.chunkerify(lambda text: len(text.split()), chunk_size)

# The resulting `chunker` can take and chunk a single text or a list of texts, returning a list of
# chunks or a list of lists of chunks, respectively.
assert chunker(text) == ['The quick', 'brown', 'fox', 'jumps', 'over the', 'lazy', 'dog.']
assert chunker([text], progress = True) == [['The quick', 'brown', 'fox', 'jumps', 'over the', 'lazy', 'dog.']]

# If you have a large number of texts to chunk and speed is a concern, you can also enable
# multiprocessing by setting `processes` to a number greater than 1.
assert chunker([text], processes = 2) == [['The quick', 'brown', 'fox', 'jumps', 'over the', 'lazy', 'dog.']]
```
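
Note that, because `semchunk.chunkerify` returns a (truthy) chunker, only the first call in the `or` chain above is ever evaluated; the chain is simply a compact way of illustrating the range of inputs `chunkerify` accepts.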

### Chunkerify
```python
def chunkerify(
    tokenizer_or_token_counter: str | tiktoken.Encoding | transformers.PreTrainedTokenizer | \
                                tokenizers.Tokenizer | Callable[[str], int],
    chunk_size: int | None = None,
    max_token_chars: int | None = None,
    memoize: bool = True,
) -> Callable[[str | Sequence[str], bool, bool], list[str] | list[list[str]]]:
```

`chunkerify()` constructs a chunker that splits one or more texts into semantically meaningful chunks of a specified size as determined by the provided tokenizer or token counter.

`tokenizer_or_token_counter` is either: the name of a `tiktoken` or `transformers` tokenizer (with priority given to the former); a tokenizer that possesses an `encode` attribute (eg, a `tiktoken`, `transformers` or `tokenizers` tokenizer); or a token counter that returns the number of tokens in an input.

`chunk_size` is the maximum number of tokens a chunk may contain. It defaults to `None`, in which case it will be set, if possible, to the tokenizer's `model_max_length` attribute, less the number of tokens returned by tokenizing an empty string; otherwise, a `ValueError` will be raised.
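
For illustration, a minimal sketch of that fallback behaviour (it assumes a Hugging Face tokenizer that defines `model_max_length`; the model name is taken from the usage example above):
```python
import semchunk
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('umarbutler/emubert')

# With `chunk_size` omitted, the chunker's limit falls back to the tokenizer's
# `model_max_length`, less the number of tokens produced by tokenizing an empty
# string (ie, the special tokens the tokenizer always adds).
chunker = semchunk.chunkerify(tokenizer)
```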

`max_token_chars` is the maximum number of characters a token may contain. It is used to significantly speed up the token counting of long inputs. It defaults to `None`, in which case it will either not be used or, if possible, will be set to the number of characters in the longest token in the tokenizer's vocabulary, as determined by the `token_byte_values` or `get_vocab` methods.
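
Where a tokenizer does not expose its vocabulary, a cap can also be supplied manually; a hedged example (the value `128` is an arbitrary choice for illustration):
```python
import semchunk

# Cap how many characters a single token is assumed to span so that token
# counting of very long inputs can be cut short early.
chunker = semchunk.chunkerify('cl100k_base', chunk_size=512, max_token_chars=128)
```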

`memoize` flags whether to memoize the token counter. It defaults to `True`.

This function returns a chunker that takes either a single text or a sequence of texts. If a single text is provided, the chunker returns a list of chunks up to `chunk_size` tokens long, with any whitespace used to split the text removed; if multiple texts are provided, it returns a list of lists of chunks, with each inner list corresponding to the chunks of one of the input texts.

The resulting chunker can be passed a `processes` argument that specifies the number of processes to be used when chunking multiple texts.

It is also possible to pass a `progress` argument which, if set to `True` and multiple texts are passed, will display a progress bar.

Technically, the chunker will be an instance of the `semchunk.Chunker` class to assist with type hinting, though this should have no impact on how it can be used.
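
Because the chunker is a `semchunk.Chunker` instance, it can be used in type hints; a small sketch (the `make_chunker` helper is hypothetical):
```python
import semchunk

def make_chunker(tokenizer_name: str, chunk_size: int) -> semchunk.Chunker:
    # Hypothetical convenience wrapper around `chunkerify` for named tokenizers.
    return semchunk.chunkerify(tokenizer_name, chunk_size)

chunker = make_chunker('cl100k_base', 512)
chunks: list[str] = chunker('Some long document...')
```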

### Chunk
```python
def chunk(
    text: str,
    chunk_size: int,
    token_counter: Callable[[str], int],
    memoize: bool = True,
) -> list[str]
```

`chunk()` splits a text into semantically meaningful chunks of a specified size as determined by the provided token counter.

`text` is the text to be chunked.

`chunk_size` is the maximum number of tokens a chunk may contain.

`token_counter` is a callable that takes a string and returns the number of tokens in it.

`memoize` flags whether to memoize the token counter. It defaults to `True`.

This function returns a list of chunks up to `chunk_size` tokens long, with any whitespace used to split the text removed.
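
A minimal end-to-end sketch of `chunk()` with a naive whitespace token counter (every whitespace-delimited word counts as one token):
```python
import semchunk

word_counter = lambda text: len(text.split())

chunks = semchunk.chunk(
    'The quick brown fox jumps over the lazy dog.',
    chunk_size=2,
    token_counter=word_counter,
)
# Every resulting chunk contains at most two whitespace-delimited words.
assert all(word_counter(chunk) <= 2 for chunk in chunks)
```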

## How It Works 🔍
`semchunk` works by recursively splitting texts until all resulting chunks are no larger than a specified chunk size (a simplified sketch follows the list below). In particular, it:
1. Splits text using the most semantically meaningful splitter possible;
1. Recursively splits the resulting chunks until a set of chunks equal to or less than the specified chunk size is produced;
1. Merges any chunks that are under the chunk size back together until the chunk size is reached; and
1. Reattaches any non-whitespace splitters back to the ends of chunks barring the final chunk if doing so does not bring chunks over the chunk size, otherwise adds non-whitespace splitters as their own chunks.
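
The following is a deliberately simplified sketch of that split-recurse-merge loop, not `semchunk`'s actual implementation (the splitter precedence is truncated and splitter reattachment is omitted):
```python
from typing import Callable

# Truncated precedence order, for illustration only.
SPLITTERS = ['\n\n', '\n', '\t', '. ', ' ']

def naive_chunk(text: str, chunk_size: int, count: Callable[[str], int]) -> list[str]:
    if count(text) <= chunk_size or len(text) <= 1:
        return [text]
    # 1. Split on the most semantically meaningful splitter present in the text.
    splitter = next((s for s in SPLITTERS if s in text), '')
    pieces = text.split(splitter) if splitter else list(text)
    # 2. Recursively split any piece that is still over the chunk size.
    chunks = [c for piece in pieces for c in naive_chunk(piece, chunk_size, count) if c]
    # 3. Merge adjacent chunks back together while they fit within the chunk size.
    merged: list[str] = []
    for c in chunks:
        candidate = f'{merged[-1]}{splitter}{c}' if merged else c
        if merged and count(candidate) <= chunk_size:
            merged[-1] = candidate
        else:
            merged.append(c)
    return merged

print(naive_chunk('The quick brown fox jumps over the lazy dog.', 2, lambda t: len(t.split())))
# -> ['The quick', 'brown fox', 'jumps over', 'the lazy', 'dog.']
```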

To ensure that chunks are as semantically meaningful as possible, `semchunk` uses the following splitters, in order of precedence (a brief illustration follows the list):
1. The largest sequence of newlines (`\n`) and/or carriage returns (`\r`);
1. The largest sequence of tabs;
1. The largest sequence of whitespace characters (as defined by regex's `\s` character class);
1. Sentence terminators (`.`, `?`, `!` and `*`);
1. Clause separators (`;`, `,`, `(`, `)`, `[`, `]`, `“`, `”`, `‘`, `’`, `'`, `"` and `` ` ``);
1. Sentence interrupters (`:`, `—` and `…`);
1. Word joiners (`/`, `\`, `–`, `&` and `-`); and
1. All other characters.
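
A rough illustration of how precedence plays out for whitespace (this is not `semchunk`'s internal regex): the largest run of the highest-precedence splitter wins, so a blank line is chosen over single newlines or spaces:
```python
import re

text = 'Paragraph one, line one.\nParagraph one, line two.\n\nParagraph two.'

# The largest run of newlines/carriage returns takes precedence.
longest_newline_run = max(re.findall(r'[\r\n]+', text), key=len)
print(text.split(longest_newline_run))
# ['Paragraph one, line one.\nParagraph one, line two.', 'Paragraph two.']
```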

## Benchmarks 📊
On a desktop with a Ryzen 9 7900X, 96 GB of DDR5 5600 MHz CL40 RAM, Windows 11 and Python 3.12.4, it takes `semchunk` 2.87 seconds to split every sample in [NLTK's Gutenberg Corpus](https://www.nltk.org/howto/corpus.html#plaintext-corpora) into 512-token-long chunks with GPT-4's tokenizer (for context, the Corpus contains 18 texts and 3,001,260 tokens). By comparison, it takes [`semantic-text-splitter`](https://pypi.org/project/semantic-text-splitter/) (with multiprocessing) 25.03 seconds to chunk the same texts into 512-token-long chunks — a difference of 88.53%.

The code used to benchmark `semchunk` and `semantic-text-splitter` is available [here](https://github.com/umarbutler/semchunk/blob/main/tests/bench.py).
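
A rough sketch of how one might reproduce the `semchunk` half of the benchmark (it assumes `nltk` and `tiktoken` are installed; absolute timings will vary by machine):
```python
import time

import nltk
import semchunk
import tiktoken

nltk.download('gutenberg', quiet=True)
from nltk.corpus import gutenberg

texts = [gutenberg.raw(fileid) for fileid in gutenberg.fileids()]
chunker = semchunk.chunkerify(tiktoken.encoding_for_model('gpt-4'), chunk_size=512)

start = time.perf_counter()
chunker(texts)
print(f'Chunked {len(texts)} texts in {time.perf_counter() - start:.2f}s.')
```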

## Licence 📄
This library is licensed under the [MIT License](https://github.com/umarbutler/semchunk/blob/main/LICENCE).
            
