semantic-text-splitter

Name: semantic-text-splitter
Version: 0.20.0
Home page: None
Summary: Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from Rust and Python.
Upload time: 2024-12-14 20:55:37
Maintainer: None
Docs URL: None
Author: Ben Brandt <benjamin.j.brandt@gmail.com>
Requires Python: >=3.9
License: MIT
Keywords: text, split, tokenizer, nlp, ai
Requirements: no requirements were recorded

# semantic-text-splitter

[![Documentation Status](https://readthedocs.org/projects/semantic-text-splitter/badge/?version=stable)](https://semantic-text-splitter.readthedocs.io/en/latest/?badge=latest) [![Licence](https://img.shields.io/crates/l/text-splitter)](https://github.com/benbrandt/text-splitter/blob/main/LICENSE.txt)

Large language models (LLMs) can be used for many tasks, but they often have a limited context size that can be smaller than the documents you might want to use. To use longer documents, you often have to split your text into chunks that fit within this context size.

This crate provides methods for splitting longer pieces of text into smaller chunks, aiming to maximize a desired chunk size, but still splitting at semantically sensible boundaries whenever possible.

## Get Started

### By Number of Characters

```python
from semantic_text_splitter import TextSplitter

# Maximum number of characters in a chunk
max_characters = 1000
# Optionally, you can also have the splitter not trim whitespace for you
splitter = TextSplitter(max_characters)
# splitter = TextSplitter(max_characters, trim=False)

chunks = splitter.chunks("your document text")
```
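
If you also need the position of each chunk in the original text, the splitter exposes a companion method for that. This is a hedged sketch: `chunk_indices` and the exact meaning of the returned offsets are assumed from the project's API docs rather than shown above, so double-check them against your installed version.

```python
from semantic_text_splitter import TextSplitter

splitter = TextSplitter(1000)

# Assumed API: chunk_indices returns (offset, chunk) pairs, where offset is the
# starting position of the chunk in the original text.
for offset, chunk in splitter.chunk_indices("your document text"):
    print(offset, chunk)
```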

### Using a Range for Chunk Capacity

You also have the option of specifying your chunk capacity as a range.

Once a chunk has reached a length that falls within the range, it will be returned.

It is always possible that a chunk may be returned that is less than the `start` value, as adding the next piece of text may have made it larger than the `end` capacity.

```python
from semantic_text_splitter import TextSplitter


# Maximum number of characters in a chunk. Will fill up the
# chunk until it is somewhere in this range.
splitter = TextSplitter((200, 1000))

chunks = splitter.chunks("your document text")
```

### Using a Hugging Face Tokenizer

```python
from semantic_text_splitter import TextSplitter
from tokenizers import Tokenizer

# Maximum number of tokens in a chunk
max_tokens = 1000
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
splitter = TextSplitter.from_huggingface_tokenizer(tokenizer, max_tokens)

chunks = splitter.chunks("your document text")
```
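
The tokenizer doesn't have to come from the Hugging Face Hub. As a small variation on the example above, you can load one from a local `tokenizer.json` file (a hypothetical path) with the `tokenizers` library and pass it in the same way:

```python
from semantic_text_splitter import TextSplitter
from tokenizers import Tokenizer

# Maximum number of tokens in a chunk
max_tokens = 1000
# Load the tokenizer from a local file ("tokenizer.json" is a hypothetical path)
tokenizer = Tokenizer.from_file("tokenizer.json")
splitter = TextSplitter.from_huggingface_tokenizer(tokenizer, max_tokens)

chunks = splitter.chunks("your document text")
```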

### Using a Tiktoken Tokenizer

```python
from semantic_text_splitter import TextSplitter

# Maximum number of tokens in a chunk
max_tokens = 1000
splitter = TextSplitter.from_tiktoken_model("gpt-3.5-turbo", max_tokens)

chunks = splitter.chunks("your document text")
```

### Using a Custom Callback

```python
from semantic_text_splitter import TextSplitter

# The callback takes the text and returns its length in whatever unit you
# choose; here it simply counts characters
splitter = TextSplitter.from_callback(lambda text: len(text), 1000)

chunks = splitter.chunks("your document text")
```
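
Any callable that maps text to an integer length should work here. For instance, a hedged variation of the example above that measures chunk size in whitespace-separated words instead of characters:

```python
from semantic_text_splitter import TextSplitter

# Maximum number of "words" in a chunk (hypothetical unit: whitespace-separated tokens)
max_words = 200
splitter = TextSplitter.from_callback(lambda text: len(text.split()), max_words)

chunks = splitter.chunks("your document text")
```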

### Markdown

All of the above examples also work with Markdown text. You can use the `MarkdownSplitter` in the same ways as the `TextSplitter`.

```python
from semantic_text_splitter import MarkdownSplitter

# Maximum number of characters in a chunk
max_characters = 1000
# Optionally, you can also have the splitter not trim whitespace for you
splitter = MarkdownSplitter(max_characters)
# splitter = MarkdownSplitter(max_characters, trim=False)

chunks = splitter.chunks("# Header\n\nyour document text")
```

## Method

To preserve as much semantic meaning within a chunk as possible, each chunk is composed of the largest semantic units that can fit in the next chunk. For each splitter type, there is a defined set of semantic levels. Here is an example of the steps used (a simplified sketch in Python follows the list):

1. Split the text by increasing semantic levels.
2. Check the first item for each level and select the highest level whose first item still fits within the chunk size.
3. Merge as many of these neighboring sections of this level or above into a chunk as possible to maximize chunk length. Boundaries of higher semantic levels are always included when merging, so that the chunk doesn't inadvertently cross semantic boundaries.
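
As a rough illustration only (not the library's actual Rust implementation, and ignoring the higher-level-boundary rule from step 3), the loop can be sketched in Python. `split_at_level` and `length` are hypothetical helpers standing in for the splitter's internal level segmentation and size measurement:

```python
def chunk_text(text, capacity, levels, split_at_level, length=len):
    """Greedy sketch of the chunking loop described above (illustration only)."""
    chunks = []
    remaining = text
    while remaining:
        # Pick the highest semantic level whose first section still fits.
        best = levels[0]
        for level in levels:  # `levels` is ordered from lowest to highest
            first = split_at_level(remaining, level)[0]
            if length(first) <= capacity:
                best = level
        # Merge neighboring sections at that level while they still fit.
        chunk = ""
        for section in split_at_level(remaining, best):
            if chunk and length(chunk + section) > capacity:
                break
            chunk += section
        chunks.append(chunk)
        remaining = remaining[len(chunk):]  # character-based slicing for simplicity
    return chunks
```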

The boundaries used to split the text when using the `chunks` method, in ascending order:

### `TextSplitter` Semantic Levels

1. Characters
2. [Unicode Grapheme Cluster Boundaries](https://www.unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries)
3. [Unicode Word Boundaries](https://www.unicode.org/reports/tr29/#Word_Boundaries)
4. [Unicode Sentence Boundaries](https://www.unicode.org/reports/tr29/#Sentence_Boundaries)
5. Ascending sequence length of newlines. (A newline is `\r\n`, `\n`, or `\r`.) Each unique length of consecutive newline sequences is treated as its own semantic level, so a sequence of 2 newlines is a higher level than a sequence of 1 newline, and so on.

Splitting doesn't occur below the character level; otherwise you could get partial bytes of a char, which may not be a valid unicode str.
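
For example, with a capacity that is too small to hold two paragraphs, the splitter should prefer the double-newline boundary over breaking mid-sentence. This is a hedged illustration of the expected behaviour, not output copied from a real run:

```python
from semantic_text_splitter import TextSplitter

text = "First paragraph with a couple of sentences. It keeps going.\n\nSecond paragraph here."

# 70 characters is enough for either paragraph, but not both, so the
# expectation is one chunk per paragraph, split at the blank line.
splitter = TextSplitter(70)
for chunk in splitter.chunks(text):
    print(repr(chunk))
```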

### `MarkdownSplitter` Semantic Levels

Markdown is parsed according to the `CommonMark` spec, along with some optional features such as GitHub Flavored Markdown.

1. Characters
2. [Unicode Grapheme Cluster Boundaries](https://www.unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries)
3. [Unicode Word Boundaries](https://www.unicode.org/reports/tr29/#Word_Boundaries)
4. [Unicode Sentence Boundaries](https://www.unicode.org/reports/tr29/#Sentence_Boundaries)
5. Soft line breaks (single newlines), which aren't necessarily new elements in Markdown.
6. Inline elements such as: text nodes, emphasis, strong, strikethrough, links, images, table cells, inline code, footnote references, task list markers, and inline HTML.
7. Block elements such as: paragraphs, code blocks, footnote definitions, and metadata. Also, a block quote or a row/item within a table or list that can contain other "block" type elements, and a list or table that contains items.
8. Thematic breaks or horizontal rules.
9. Headings by level.

Splitting doesn't occur below the character level; otherwise you could get partial bytes of a char, which may not be a valid unicode str.
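
Because headings sit at the highest semantic level, a capacity that can't hold two sections should cause `MarkdownSplitter` to break at the second heading. Again, a hedged illustration of the expected behaviour rather than captured output:

```python
from semantic_text_splitter import MarkdownSplitter

document = "# First\n\nSome text under the first heading.\n\n# Second\n\nMore text under the second heading."

# 60 characters fits either heading section on its own, but not both,
# so each section is expected to land in its own chunk.
splitter = MarkdownSplitter(60)
for chunk in splitter.chunks(document):
    print(repr(chunk))
```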

### Note on sentences

There are lots of methods for determining sentence breaks, all with varying degrees of accuracy, and many requiring ML models to do so. Rather than trying to find the perfect sentence breaks, we rely on the Unicode method of sentence boundaries, which in most cases is good enough for finding a decent semantic breaking point if a paragraph is too large, and avoids the performance penalties of many other methods.

## Inspiration

This crate was inspired by [LangChain's TextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html#langchain_text_splitters.character.RecursiveCharacterTextSplitter). But a look into the implementation showed there was potential for better performance as well as better semantic chunking.

A big thank you to the Unicode team for their [icu_segmenter](https://crates.io/crates/icu_segmenter) crate that manages a lot of the complexity of matching the Unicode rules for words and sentences.


            
