Name | chonkie |
Version | 1.1.1 |
home_page | None |
Summary | 🦛 CHONK your texts with Chonkie ✨ - The no-nonsense chunking library |
upload_time | 2025-07-18 05:08:19 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.9 |
license | MIT License
Copyright (c) 2025 Chonkie
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
keywords |
chunking
rag
retrieval-augmented-generation
nlp
natural-language-processing
text-processing
text-analysis
text-chunking
artificial-intelligence
machine-learning
|
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
<div align='center'>

# 🦛 Chonkie ✨
[PyPI](https://pypi.org/project/chonkie/)
[License](https://github.com/chonkie-inc/chonkie/blob/main/LICENSE)
[Documentation](https://docs.chonkie.ai)
[Installation](https://github.com/chonkie-inc/chonkie/blob/main/README.md#installation)
[Codecov](https://codecov.io/gh/chonkie-inc/chonkie)
[Downloads](https://pepy.tech/project/chonkie)
[Discord](https://discord.gg/vH3SkRqmUz)
[GitHub Stars](https://github.com/chonkie-inc/chonkie/stargazers)
_The no-nonsense ultra-light and lightning-fast chunking library that's ready to CHONK your texts!_
[Installation](#installation) •
[Usage](#basic-usage) •
[Pipeline](#the-chonkie-pipeline) •
[Chunkers](#chunkers) •
[Integrations](#integrations) •
[Benchmarks](#benchmarks)
</div>
Tired of making your gazillionth chunker? Sick of the overhead of large libraries? Want to chunk your texts quickly and efficiently? Chonkie the mighty hippo is here to help!
**🚀 Feature-rich**: All the CHONKs you'd ever need </br>
**✨ Easy to use**: Install, Import, CHONK </br>
**⚡ Fast**: CHONK at the speed of light! zooooom </br>
**🪶 Light-weight**: No bloat, just CHONK </br>
**🌏 Wide support**: CHONKie [integrates](#integrations) with your favorite tokenizer, embedding model and APIs! </br>
**💬 Multilingual**: Out-of-the-box support for 56 languages </br>
**☁️ Cloud-Ready**: CHONK locally or in the [Chonkie Cloud](https://cloud.chonkie.ai) </br>
**🦛 Cute CHONK mascot**: psst it's a pygmy hippo btw </br>
**❤️ [Moto Moto](#acknowledgements)'s favorite python library** </br>
**Chonkie** is a chunking library that "**just works**" ✨
## Installation
To install chonkie, run:
```bash
pip install chonkie
```
Chonkie follows the rule of minimum installs.
Have a favorite chunker? Read our [docs](https://docs.chonkie.ai) to install only what you need.
Don't want to think about it? Simply install `all` (not recommended for production environments):
```bash
pip install chonkie[all]
```
## Basic Usage
Here's a basic example to get you started:
```python
# First import the chunker you want from Chonkie
from chonkie import RecursiveChunker
# Initialize the chunker
chunker = RecursiveChunker()
# Chunk some text
chunks = chunker("Chonkie is the goodest boi! My favorite chunking hippo hehe.")
# Access chunks
for chunk in chunks:
    print(f"Chunk: {chunk.text}")
    print(f"Tokens: {chunk.token_count}")
```
Check out more usage examples in the [docs](https://docs.chonkie.ai)!
## The Chonkie Pipeline
Chonkie processes text using a pipeline approach to transform raw documents into refined, usable chunks. This allows for flexibility and efficiency in handling different chunking strategies. We call this pipeline `CHOMP` (short for _'CHOnkie's Multi-step Pipeline'_).
Here's a conceptual overview of the pipeline, as illustrated in the diagram:

The main stages are:
1. **📄 Document**: The starting point – your input text data. It can be in any format!
2. **👨‍🍳 Chef**: This stage handles initial text preprocessing. It might involve cleaning, normalization, or other preparatory steps to get the text ready for chunking. This stage is optional, but running a `Chef` to clean your text before chunking is recommended.
3. **🦛 Chunker**: The core component you select (e.g., `RecursiveChunker`, `SentenceChunker`). It applies its specific logic to split the preprocessed text into initial chunks based on the chosen strategy and parameters.
4. **🏭 Refinery**: After initial chunking, the Refinery performs post-processing. This can include merging small chunks based on overlap, adding embeddings, or adding additional context to the chunks. It helps ensure the quality and consistency of the output. You can chain multiple `Refineries` to apply different post-processing steps.
5. **🤗 Friends**: The pipeline produces the final results, which can either be exported for storage or ingested into your vector database. Chonkie offers `Porters` to export the chunks and `Handshakes` to ingest them into your vector database.
   - **🐴 Porters**: Porters save the chunks to a file or a database. Currently, only `JSON` export is supported.
   - **🤝 Handshakes**: Handshakes provide a unified interface for ingesting the chunks into your preferred vector databases.
This modular pipeline allows Chonkie to be both powerful and easy to configure for various text chunking needs.
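As a mental model, the CHOMP stages can be sketched in plain Python. This is an illustrative toy only, not Chonkie's actual implementation; the function names and the word-based "tokenization" here are hypothetical stand-ins:

```python
# Toy sketch of the CHOMP stages: Chef -> Chunker -> Refinery.
# Hypothetical names; Chonkie's real classes do far more than this.

def chef(text: str) -> str:
    """Preprocess: normalize whitespace before chunking."""
    return " ".join(text.split())

def chunker(text: str, max_words: int = 8) -> list[str]:
    """Split preprocessed text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def refinery(chunks: list[str], min_words: int = 3) -> list[str]:
    """Post-process: merge a too-small trailing chunk into its predecessor."""
    if len(chunks) >= 2 and len(chunks[-1].split()) < min_words:
        chunks[-2:] = [chunks[-2] + " " + chunks[-1]]
    return chunks

document = "Chonkie   chunks text.  It cleans,   splits, and refines it for you."
chunks = refinery(chunker(chef(document)))
```

Each stage is a pure function over the previous stage's output, which is what makes the pipeline easy to reconfigure: swap any stage without touching the others.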
## Chunkers
Chonkie provides several chunkers to help you split your text efficiently for RAG applications. Here's a quick overview of the available chunkers:
| Name | Alias | Description |
|------------------|------------|------------------------------------------------------------------------------------------------------------|
| `TokenChunker` | `token` | Splits text into fixed-size token chunks. |
| `SentenceChunker`| `sentence` | Splits text into chunks based on sentences. |
| `RecursiveChunker`| `recursive`| Splits text hierarchically using customizable rules to create semantically meaningful chunks. |
| `SemanticChunker`| `semantic` | Splits text into chunks based on semantic similarity. Inspired by the work of [Greg Kamradt](https://github.com/gkamradt). |
| `SDPMChunker` | `sdpm` | Splits text using a Semantic Double-Pass Merge approach. |
| `LateChunker` | `late` | Embeds text and then splits it to have better chunk embeddings. |
| `CodeChunker` | `code` | Splits code into structurally meaningful chunks. |
| `NeuralChunker` | `neural` | Splits text using a neural model. |
| `SlumberChunker` | `slumber` | Splits text using an LLM to find semantically meaningful chunks. Also known as _"AgenticChunker"_. |
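As a mental model for the simplest strategy in the table, fixed-size chunking with overlap can be sketched as a sliding window over a token stream. This is an illustrative sketch using whitespace-split "tokens", not `TokenChunker`'s actual logic:

```python
def token_chunks(text: str, chunk_size: int = 5, overlap: int = 2) -> list[str]:
    """Slide a fixed-size window over tokens, stepping by chunk_size - overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    tokens = text.split()  # stand-in tokenizer; real chunkers count model tokens
    step = chunk_size - overlap
    return [" ".join(tokens[i:i + chunk_size]) for i in range(0, len(tokens), step)]
```

With `chunk_size=5` and `overlap=2`, each chunk repeats the last two tokens of the previous one, which helps downstream retrieval keep context across chunk boundaries.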
More on these methods and the approaches taken can be found in the [docs](https://docs.chonkie.ai).
## Integrations
Chonkie boasts 19+ integrations across tokenizers, embedding providers, LLMs, porters, and vector databases, ensuring it fits seamlessly into your existing workflow.
<details>
<summary><strong>🪓 Slice 'n' Dice! Chonkie supports 5+ ways to tokenize! </strong></summary>
Choose from supported tokenizers or provide your own custom token counting function. Flexibility first!
| Name | Description | Optional Install |
|----------------|----------------------------------------------------------------|--------------------|
| `character` | Basic character-level tokenizer. **Default tokenizer.** | `default` |
| `word` | Basic word-level tokenizer. | `default` |
| `tokenizers` | Load any tokenizer from the Hugging Face `tokenizers` library. | `default` |
| `tiktoken` | Use OpenAI's `tiktoken` library (e.g., for `gpt-4`). | `chonkie[tiktoken]`|
| `transformers` | Load tokenizers via `AutoTokenizer` from HF `transformers`. | `chonkie[transformers]`|
`default` indicates that the feature is available with the default `pip install chonkie`.
To use a custom token counter, you can pass in any function that takes a string and returns an integer! Something like this:
```python
def custom_token_counter(text: str) -> int:
    return len(text)

chunker = RecursiveChunker(tokenizer_or_token_counter=custom_token_counter)
```
You can use this to extend Chonkie to support any tokenization scheme you want!
</details>
<details>
<summary><strong>🧠 Embed like a boss! Chonkie links up with 7+ embedding pals!</strong></summary>
Seamlessly works with various embedding model providers. Bring your favorite embeddings to the CHONK party! Use `AutoEmbeddings` to load models easily.
| Provider / Alias | Class | Description | Optional Install |
|-------------------------|---------------------------------|----------------------------------------------|-------------------|
| `model2vec` | `Model2VecEmbeddings` | Use `Model2Vec` models. | `chonkie[model2vec]` |
| `sentence-transformers` | `SentenceTransformerEmbeddings` | Use any `sentence-transformers` model. | `chonkie[st]` |
| `openai` | `OpenAIEmbeddings` | Use OpenAI's embedding API. | `chonkie[openai]` |
| `cohere` | `CohereEmbeddings` | Use Cohere's embedding API. | `chonkie[cohere]` |
| `gemini` | `GeminiEmbeddings` | Use Google's Gemini embedding API. | `chonkie[gemini]` |
| `jina` | `JinaEmbeddings` | Use Jina AI's embedding API. | `chonkie[jina]` |
| `voyageai` | `VoyageAIEmbeddings` | Use Voyage AI's embedding API. | `chonkie[voyageai]` |
</details>
<details>
<summary><strong>🧞‍♂️ Power Up with Genies! Chonkie supports 2+ LLM providers!</strong></summary>
Genies provide interfaces to interact with Large Language Models (LLMs) for advanced chunking strategies or other tasks within the pipeline.
| Genie Name | Class | Description | Optional Install |
|--------------|---------------|----------------------------------|----------------------|
| `gemini` | `GeminiGenie` | Interact with Google Gemini APIs. | `chonkie[gemini]` |
| `openai` | `OpenAIGenie` | Interact with OpenAI APIs. | `chonkie[openai]` |
You can also use the `OpenAIGenie` to interact with any LLM provider that supports the OpenAI API format, by simply changing the `model`, `base_url`, and `api_key` parameters. For example, here's how to use the `OpenAIGenie` to interact with the `Llama-4-Maverick` model via OpenRouter:
```python
from chonkie import OpenAIGenie
genie = OpenAIGenie(model="meta-llama/llama-4-maverick",
                    base_url="https://openrouter.ai/api/v1",
                    api_key="your_api_key")
```
</details>
<details>
<summary><strong>🐴 Exporting CHONKs! Chonkie supports 1+ Porter!</strong></summary>
Porters help you save your chunks easily.
| Porter Name | Class | Description | Optional Install |
|-------------|--------------|-----------------------------|-----------------|
| `json` | `JSONPorter` | Export chunks to a JSON file. | `default` |
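Conceptually, a porter just serializes chunk data to a persistent format. A minimal stdlib-only stand-in (the `Chunk` dataclass and function names below are hypothetical, not `JSONPorter`'s actual interface) might look like:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Chunk:
    """Minimal stand-in for a chunk object: its text plus a token count."""
    text: str
    token_count: int

def chunks_to_json(chunks: list[Chunk]) -> str:
    """Render chunks as a JSON array of objects."""
    return json.dumps([asdict(c) for c in chunks], indent=2)

def export_chunks(chunks: list[Chunk], path: str) -> None:
    """Write the JSON rendering to a file, as a porter would."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(chunks_to_json(chunks))
```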
</details>
<details>
<summary><strong>🤝 Shake hands with your DB! Chonkie connects with 4+ vector stores!</strong></summary>
Handshakes provide a unified interface to ingest chunks directly into your favorite vector databases.
| Handshake Name | Class | Description | Optional Install |
|----------------|-----------------------|-----------------------------------------|---------------------------|
| `chroma` | `ChromaHandshake` | Ingest chunks into ChromaDB. | `chonkie[chroma]` |
| `qdrant` | `QdrantHandshake` | Ingest chunks into Qdrant. | `chonkie[qdrant]` |
| `pgvector` | `PgvectorHandshake` | Ingest chunks into PostgreSQL with pgvector. | `chonkie[pgvector]` |
| `turbopuffer` | `TurbopufferHandshake`| Ingest chunks into Turbopuffer. | `chonkie[turbopuffer]` |
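The "unified interface" idea behind Handshakes can be sketched with a structural protocol: any backend exposing the same write method is interchangeable. This is a conceptual toy, not Chonkie's real Handshake API:

```python
from typing import Protocol

class Handshake(Protocol):
    """Illustrative unified ingestion interface (hypothetical, not Chonkie's API)."""
    def write(self, chunks: list[str]) -> int: ...

class InMemoryHandshake:
    """Toy store standing in for a vector database client."""
    def __init__(self) -> None:
        self.store: list[str] = []

    def write(self, chunks: list[str]) -> int:
        self.store.extend(chunks)
        return len(chunks)

def ingest(handshake: Handshake, chunks: list[str]) -> int:
    # Any backend satisfying the protocol can be swapped in here.
    return handshake.write(chunks)
```

Structural typing keeps the pipeline decoupled from any one database: swapping Chroma for Qdrant is a one-line change at the call site.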
</details>
With Chonkie's wide range of integrations, you can easily plug it into your existing infrastructure and start CHONKING!
## Benchmarks
> "I may be smol hippo, but I pack a big punch!" π¦
Chonkie is not just cute, it's also fast and efficient! Here's how it stacks up against the competition:
**Size** 📦
- **Default Install:** 15MB (vs 80-171MB for alternatives)
- **With Semantic:** Still 10x lighter than the closest competition!
**Speed** ⚡
- **Token Chunking:** 33x faster than the slowest alternative
- **Sentence Chunking:** Almost 2x faster than competitors
- **Semantic Chunking:** Up to 2.5x faster than others
Check out our detailed [benchmarks](BENCHMARKS.md) to see how Chonkie races past the competition! 🏃‍♂️💨
## Contributing
Want to help grow Chonkie? Check out [CONTRIBUTING.md](CONTRIBUTING.md) to get started! Whether you're fixing bugs, adding features, or improving docs, every contribution helps make Chonkie a better CHONK for everyone.
Remember: No contribution is too small for this tiny hippo! 🦛
## Acknowledgements
Chonkie would like to CHONK its way through a special thanks to all the users and contributors who have helped make this library what it is today! Your feedback, issue reports, and improvements have helped make Chonkie the CHONKIEST it can be.
And of course, special thanks to [Moto Moto](https://www.youtube.com/watch?v=I0zZC4wtqDQ&t=5s) for endorsing Chonkie with his famous quote:
> "I like them big, I like them chonkie." ~ Moto Moto
## Citation
If you use Chonkie in your research, please cite it as follows:
```bibtex
@software{chonkie2025,
  author = {Minhas, Bhavnick and Nigam, Shreyash},
  title = {Chonkie: A no-nonsense fast, lightweight, and efficient text chunking library},
  year = {2025},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/chonkie-inc/chonkie}},
}
```
"digests": {
"blake2b_256": "1286e4d51c0434a3dc140007f040bd7b2fa7479dd0c402c854335683c5be535a",
"md5": "c9fc238140bd52a832228f898ef31222",
"sha256": "c9e3249d55173c08fb9692992ce5021704a83c91c01615f1386748b0cb24a25e"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp312-cp312-macosx_10_13_x86_64.whl",
"has_sig": false,
"md5_digest": "c9fc238140bd52a832228f898ef31222",
"packagetype": "bdist_wheel",
"python_version": "cp312",
"requires_python": ">=3.9",
"size": 350657,
"upload_time": "2025-07-18T05:08:08",
"upload_time_iso_8601": "2025-07-18T05:08:08.718066Z",
"url": "https://files.pythonhosted.org/packages/12/86/e4d51c0434a3dc140007f040bd7b2fa7479dd0c402c854335683c5be535a/chonkie-1.1.1-cp312-cp312-macosx_10_13_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "4827cb12efacc6986df836939fdec93ad230fd0b8ce5f7a22d579f22fa2f3e57",
"md5": "5a0f1d6f1c000aa7b98a5840481e4bd2",
"sha256": "acfc4f6827e2bba98c31805fcd71feba654957140c42970a3bd995eff89625b7"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl",
"has_sig": false,
"md5_digest": "5a0f1d6f1c000aa7b98a5840481e4bd2",
"packagetype": "bdist_wheel",
"python_version": "cp312",
"requires_python": ">=3.9",
"size": 657579,
"upload_time": "2025-07-18T05:08:09",
"upload_time_iso_8601": "2025-07-18T05:08:09.652035Z",
"url": "https://files.pythonhosted.org/packages/48/27/cb12efacc6986df836939fdec93ad230fd0b8ce5f7a22d579f22fa2f3e57/chonkie-1.1.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "52510dd22ee961cff21410d7b3067dac9f498dd8e56c42dbb188ce2669d2ee8e",
"md5": "94bd6b070ca1e0c7c414764fe5a75dfb",
"sha256": "afad359a02f3db46d3fcad2639d062ead5a3ad2bae45cb9f511ae0efdec79656"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp312-cp312-win_amd64.whl",
"has_sig": false,
"md5_digest": "94bd6b070ca1e0c7c414764fe5a75dfb",
"packagetype": "bdist_wheel",
"python_version": "cp312",
"requires_python": ">=3.9",
"size": 349686,
"upload_time": "2025-07-18T05:08:10",
"upload_time_iso_8601": "2025-07-18T05:08:10.689011Z",
"url": "https://files.pythonhosted.org/packages/52/51/0dd22ee961cff21410d7b3067dac9f498dd8e56c42dbb188ce2669d2ee8e/chonkie-1.1.1-cp312-cp312-win_amd64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "77ea371ecd17e2105f22c407d8a4ee0ccac88bdf7f86f8df2221379c36ea563b",
"md5": "e3e4148a23287f75d4e23ff38b644c59",
"sha256": "ecf8f99a9f5981d399b22d69c050a8dab4e376d221f6bb298bbc126571315049"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp313-cp313-macosx_10_13_x86_64.whl",
"has_sig": false,
"md5_digest": "e3e4148a23287f75d4e23ff38b644c59",
"packagetype": "bdist_wheel",
"python_version": "cp313",
"requires_python": ">=3.9",
"size": 349622,
"upload_time": "2025-07-18T05:08:11",
"upload_time_iso_8601": "2025-07-18T05:08:11.856801Z",
"url": "https://files.pythonhosted.org/packages/77/ea/371ecd17e2105f22c407d8a4ee0ccac88bdf7f86f8df2221379c36ea563b/chonkie-1.1.1-cp313-cp313-macosx_10_13_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "2f50c6c44ed8785624a6397456c5ca188ef3e67971780b04aaa602c3053846cf",
"md5": "6326e3e8aadb20947b5df146961c476b",
"sha256": "653f53702f286df1348657702f2660010cabb2dee90c8783978d5adfa818fec4"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl",
"has_sig": false,
"md5_digest": "6326e3e8aadb20947b5df146961c476b",
"packagetype": "bdist_wheel",
"python_version": "cp313",
"requires_python": ">=3.9",
"size": 651452,
"upload_time": "2025-07-18T05:08:12",
"upload_time_iso_8601": "2025-07-18T05:08:12.872373Z",
"url": "https://files.pythonhosted.org/packages/2f/50/c6c44ed8785624a6397456c5ca188ef3e67971780b04aaa602c3053846cf/chonkie-1.1.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "9def3d870ea09dbef27128667a36358aaf68e4a01d19281cc63217da4273ab4c",
"md5": "be6c3bb13083c9f22200da6c5b3ce4ae",
"sha256": "5905b31adf73006b832f4bd20f8f31a180b31114d9c75e9b8ff2ca74d12a2172"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp313-cp313-win_amd64.whl",
"has_sig": false,
"md5_digest": "be6c3bb13083c9f22200da6c5b3ce4ae",
"packagetype": "bdist_wheel",
"python_version": "cp313",
"requires_python": ">=3.9",
"size": 349063,
"upload_time": "2025-07-18T05:08:14",
"upload_time_iso_8601": "2025-07-18T05:08:14.231439Z",
"url": "https://files.pythonhosted.org/packages/9d/ef/3d870ea09dbef27128667a36358aaf68e4a01d19281cc63217da4273ab4c/chonkie-1.1.1-cp313-cp313-win_amd64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "9768aeb177cb3b521c10b5bae572a2631d12f514242ef0ee432f540735c85db0",
"md5": "cc869e1dec3688134195d88676daa4ad",
"sha256": "11130d189252dabace35967d2b8d16d325448fec2e0b8a926fcd333c0093737b"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl",
"has_sig": false,
"md5_digest": "cc869e1dec3688134195d88676daa4ad",
"packagetype": "bdist_wheel",
"python_version": "cp39",
"requires_python": ">=3.9",
"size": 350332,
"upload_time": "2025-07-18T05:08:15",
"upload_time_iso_8601": "2025-07-18T05:08:15.234718Z",
"url": "https://files.pythonhosted.org/packages/97/68/aeb177cb3b521c10b5bae572a2631d12f514242ef0ee432f540735c85db0/chonkie-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "57e3dd07c75356536b431537d1692a63adf7c85581cb33dd7a6fcd1d03554262",
"md5": "d4d7cc8df5a6f01da0ce5e8bb2029a0e",
"sha256": "35038142af6141c26bc777c195a440c56693d9def7ffc4e647cec8e0493904a6"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl",
"has_sig": false,
"md5_digest": "d4d7cc8df5a6f01da0ce5e8bb2029a0e",
"packagetype": "bdist_wheel",
"python_version": "cp39",
"requires_python": ">=3.9",
"size": 617350,
"upload_time": "2025-07-18T05:08:16",
"upload_time_iso_8601": "2025-07-18T05:08:16.535252Z",
"url": "https://files.pythonhosted.org/packages/57/e3/dd07c75356536b431537d1692a63adf7c85581cb33dd7a6fcd1d03554262/chonkie-1.1.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "8d1c0bc85b89d784c2f59cb6374659890608477d9de677b97502f6b001e844db",
"md5": "5d3d7b0d9bfa32edfdc7968224423bc8",
"sha256": "e2e00362bbc697dc345dcdf3b77853bd3848a1377e79074aad64ce8dac319d42"
},
"downloads": -1,
"filename": "chonkie-1.1.1-cp39-cp39-win_amd64.whl",
"has_sig": false,
"md5_digest": "5d3d7b0d9bfa32edfdc7968224423bc8",
"packagetype": "bdist_wheel",
"python_version": "cp39",
"requires_python": ">=3.9",
"size": 349803,
"upload_time": "2025-07-18T05:08:17",
"upload_time_iso_8601": "2025-07-18T05:08:17.977151Z",
"url": "https://files.pythonhosted.org/packages/8d/1c/0bc85b89d784c2f59cb6374659890608477d9de677b97502f6b001e844db/chonkie-1.1.1-cp39-cp39-win_amd64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "ebcb5d95cecf45c07faff0e8057f10187ad3b1127c6a34f2c57fb5d5dbae2fdf",
"md5": "e98457d76c0008a7550a89bcc3f062c5",
"sha256": "391f90bd137fba5b82ed34411082a2abe46b6d12c00d75d73655f59f6c1ae504"
},
"downloads": -1,
"filename": "chonkie-1.1.1.tar.gz",
"has_sig": false,
"md5_digest": "e98457d76c0008a7550a89bcc3f062c5",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 266261,
"upload_time": "2025-07-18T05:08:19",
"upload_time_iso_8601": "2025-07-18T05:08:19.285208Z",
"url": "https://files.pythonhosted.org/packages/eb/cb/5d95cecf45c07faff0e8057f10187ad3b1127c6a34f2c57fb5d5dbae2fdf/chonkie-1.1.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-18 05:08:19",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "chonkie-inc",
"github_project": "chonkie",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "chonkie"
}
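The `urls` array above is the shape returned by PyPI's JSON API (`https://pypi.org/pypi/chonkie/1.1.1/json`): one entry per uploaded artifact, keyed by `packagetype`, CPython tag (`python_version`), platform (encoded in `filename`), and `yanked` status. A minimal sketch of how an installer-like tool might pick the right wheel from such a list — the two entries below are abbreviated copies of real entries from the data above, and `pick_wheel` is a hypothetical helper, not part of any PyPI client library:

```python
def pick_wheel(urls, python_tag, platform_fragment):
    """Return the first non-yanked wheel matching the CPython tag and platform."""
    for u in urls:
        if (u["packagetype"] == "bdist_wheel"
                and u["python_version"] == python_tag
                and platform_fragment in u["filename"]
                and not u["yanked"]):
            return u["filename"]
    return None  # fall back to the sdist in a real resolver

# Abbreviated entries taken from the release data above.
urls = [
    {"filename": "chonkie-1.1.1-cp312-cp312-win_amd64.whl",
     "packagetype": "bdist_wheel", "python_version": "cp312", "yanked": False},
    {"filename": "chonkie-1.1.1.tar.gz",
     "packagetype": "sdist", "python_version": "source", "yanked": False},
]

print(pick_wheel(urls, "cp312", "win_amd64"))
# chonkie-1.1.1-cp312-cp312-win_amd64.whl
```

Real resolvers (pip, uv) parse the full wheel tag set per PEP 425 rather than substring-matching filenames; this sketch only illustrates which fields of each `urls` entry carry the selection signal.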