flash-tokenizer


Name: flash-tokenizer
Version: 0.9.2
Summary: FlashBertTokenizer implementation with C++ backend
Upload time: 2025-03-11 16:37:34
Requires Python: >=3.7
License: MIT
Requirements: build, twine, pybind11, numpy, setuptools, wheel, transformers, torch
            
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/FlashTokenizer_main_dark.png?raw=true">
    <img alt="FlashTokenizer" src="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/FlashTokenizer_main_light.png?raw=true" width=60%>
  </picture>
</p>
<h1 align="center">
Tokenizer Library for LLM Serving
</h1>


## EFFICIENT AND OPTIMIZED TOKENIZER ENGINE FOR LLM INFERENCE SERVING


FlashTokenizer is a high-performance C++ implementation of the BertTokenizer used for LLM inference. In the spirit of [FlashAttention](https://github.com/Dao-AILab/flash-attention) and [FlashInfer](https://github.com/flashinfer-ai/flashinfer), it aims to be the fastest and most accurate tokenizer available, and it is 4-5 times faster than BertTokenizerFast in transformers.

> [!NOTE]  
> `FlashBertTokenizer` is 4x faster than `transformers.BertTokenizerFast` and 15.5x faster than `transformers.BertTokenizer`.



<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/Banner_dark.png?raw=true">
    <img alt="Banner" src="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/Banner_light.png?raw=true" width=100%>
  </picture>
</p>


<p>
<img align="left" src="https://img.shields.io/badge/success-0B86F1?style=flat&logo=python&logoColor=white&label=MacOS_build">
<img align="left" src="https://img.shields.io/badge/success-0B86F1?style=flat&logo=python&logoColor=white&label=Windows_build">
<img align="left" src="https://img.shields.io/badge/success-0B86F1?style=flat&logo=python&logoColor=white&label=Linux_build">
</p><br>

* * *

### FlashTokenizer includes the following core features

> [!TIP]
> 
>  * Implemented in C++17 and fastest when built with GCC.
>     * macOS: `g++ (14.2.0)` is faster than `clang++ (16.0.0)`.
>     * Windows: `g++ (8.1.0, MinGW-w64)` is faster than Visual Studio 2019.
>     * Ubuntu: `g++ (11.4.0)` is faster than `clang++ (14.0.0)`.
>
> * Equally fast in Python via pybind11.
> * Blingfire was difficult to use in practice due to its low accuracy, whereas FlashBertTokenizer offers both high accuracy and high speed.
> * Although it currently runs on a single thread, it reaches 40K RPS in C++ and 25K RPS in Python. The tokenizer is thread-safe, so multi-threading can push throughput even higher (see the sketch below).
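
For Python callers, a minimal multi-threading sketch might look like the following. This is illustrative only: the vocab path and corpus are placeholders, and the actual speed-up depends on whether the pybind11 binding releases the GIL while tokenizing.

```python
from concurrent.futures import ThreadPoolExecutor
from flash_tokenizer import FlashBertTokenizer

tokenizer = FlashBertTokenizer("path/to/vocab.txt", do_lower_case=True)
texts = ["Hello, world!"] * 100_000          # placeholder workload

def encode_chunk(chunk):
    # The tokenizer is thread-safe, so a single instance can be shared.
    return [tokenizer(t) for t in chunk]

# Split the corpus into 4 chunks and tokenize them in parallel. Any speed-up
# depends on the pybind11 binding releasing the GIL during tokenization;
# if it does not, a multiprocessing pool is the alternative.
chunks = [texts[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = [ids for part in pool.map(encode_chunk, chunks) for ids in part]
```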


## News

> [!IMPORTANT]  
> [Mar 10 2025] Performance improvements: faster token mapping with robin_hood and minimized memory copies with **std::list**.
>
> | Container   | Elapsed Time (s) | Max RPS | Description |
> | ----------- | ---------------- | ------- | ----------- |
> | std::list   | 10.3458          | 39660.5 | Fastest when combining containers: no extra allocation, nodes are simply spliced onto the end. |
> | std::deque  | 15.3494          | 26473.1 | Organized in chunks, so combining still requires allocation, and the non-contiguous memory gives a low cache-hit rate, making it the slowest. |
> | std::vector | 11.9718          | 33913.3 | Allocates new memory each time containers are combined, but contiguous memory gives a high cache-hit rate, so it stays fast. |
>
> #### Token-to-ID Map Performance Test
>
> The token-to-id map uses `robin_hood::unordered_flat_map`, the fastest option in the test results below.
>
> | Map                                                 | Elapsed Time (Access) |
> | --------------------------------------------------- | --------------------- |
> | ✅ robin_hood::unordered_flat_map<std::string, int> | 0.914775              |
> | robin_hood::unordered_node_map<std::string, int>    | 0.961003              |
> | robin_hood::unordered_map<std::string, int>         | 0.917136              |
> | std::unordered_map<std::string, int, XXHash>        | 1.1506                |
> | std::unordered_map<std::string, int>                | 1.20015               |
> 
> XXHash is implemented as follows:
>
> ```c++
> #define XXH_STATIC_LINKING_ONLY
> #define XXH_INLINE_ALL
> #include "xxhash.h"
>
> struct XXHash {
>     size_t operator()(const std::string &s) const {
>         return XXH3_64bits(s.data(), s.size());
>     }
> };
> ```
>
> [Mar 09 2025] Completed development of flash-tokenizer for BertTokenizer.



## 1. Installation

### Requirements

 * g++ / clang++ / MSVC
 * Python 3.7–3.12

### Install from [PyPI](https://pypi.org/project/flash-tokenizer/)
```bash
pip install -U flash-tokenizer
```

### Install from Source
```bash
git clone https://github.com/NLPOptimize/flash-tokenizer
cd flash-tokenizer
pip install -r requirements.txt
python -m build # `*.whl` file will be created in the `dist` folder.
```


## 2. Usage

```python
from flash_tokenizer import FlashBertTokenizer
tokenizer = FlashBertTokenizer("path/to/vocab.txt", do_lower_case=True)
# Tokenize text
ids = tokenizer("Hello, world!")
print(ids)
```

## 3. Other Implementations


<p>
<img src="https://i.imgur.com/fl77i1r.png" width=150/>
<img src="https://i.imgur.com/ZAoveWv.png" width=150/>
<img src="https://i.imgur.com/njsBDGx.png" width=150/>
<img src="https://i.imgur.com/zSjigxk.png" width=150/>
<img src="https://i.imgur.com/OJD5fbn.png" width=150/>
</p>


Most [BERT](https://arxiv.org/abs/1810.04805)-based models use the [WordPiece Tokenizer](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf), whose code can be found [here](https://github.com/google-research/bert/blob/master/tokenization.py).
(Hugging Face's straightforward pure-Python implementation can be found [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert.py).)

Since BertTokenizer is a CPU-intensive algorithm, it can bottleneck inference, and an unoptimized tokenizer can be severely slow. A good example is the [BidirectionalWordpieceTokenizer](https://github.com/snunlp/KR-BERT/blob/master/krbert_tensorflow/tokenization_ranked.py) introduced in [KR-BERT](https://arxiv.org/abs/2008.03979). Most of the code is the same as the original, but the algorithm also traverses sub-tokens backwards and prefers the result with the larger value over the forward traversal. The paper claims accuracy improvements, but other quantitative metrics are hard to find, the gains are not significant, and the tokenizer becomes much slower.

* transformers (Rust Impl, PyO3)
* paddlenlp (C++ Impl, pybind)
* tensorflow-text (C++ Impl, pybind)
* blingfire (C++ Impl, Native binary call)

Most developers use either `transformers.BertTokenizer` or `transformers.AutoTokenizer`, but `AutoTokenizer` actually returns `transformers.BertTokenizerFast`.

Naturally, it is faster than BertTokenizer, but the results are not exactly the same, which means you are already giving up 100% accuracy at the tokenizer stage.

BertTokenizer is not only provided by transformers. [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) and [tensorflow-text](https://www.tensorflow.org/text) also provide BertTokenizer.

Then there's [Blingfire](https://github.com/microsoft/BlingFire), which was developed by Microsoft but is effectively abandoned.

PaddleNLP requires PaddlePaddle and provides tokenizer functionality starting with version 3.0rc. You can install it as follows:

```bash
##### Install PaddlePaddle, PaddleNLP
python -m pip install paddlepaddle==3.0.0b1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
pip install --upgrade paddlenlp==3.0.0b3
##### Install transformers
pip install transformers==4.47.1
##### Install tf-text
pip install tensorflow-text==2.18.1
##### Install blingfire
pip install blingfire
```


With the exception of blingfire, vocab.txt is all you need to run the tokenizer right away.
(blingfire also starts from vocab.txt, but requires roughly 8 hours of training before it can be used.)
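
For example, the transformers baseline tokenizer can be constructed directly from the same vocab.txt (a minimal sketch; the path is illustrative):

```python
from transformers import BertTokenizer

# The slow, pure-Python reference tokenizer builds straight from a vocab file,
# so the same vocab.txt used by FlashBertTokenizer drives both tokenizers.
tokenizer = BertTokenizer("path/to/vocab.txt", do_lower_case=True)
print(tokenizer("Hello, world!")["input_ids"])
```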

The implementations we'll look at in detail are PaddleNLP's `BertTokenizerFast` and `blingfire`.

* `blingfire`: Uses a [Deterministic Finite State Machine (DFSM)](https://github.com/microsoft/BlingFire/blob/master/doc/Bling_Fire_Tokenizer_Algorithms.pdf) to eliminate one linear scan and unnecessary comparisons, achieving an impressive O(n) running time.
  * **Advantages**: **5-10x faster than other implementations**.
  * **Disadvantages**: Long training time (8 hours) and lower accuracy than other implementations (plus it is hard to get help due to the de facto development hiatus).
* `PaddleNLP`: As the experiments below show, PaddleNLP is consistently faster than BertTokenizerFast (HF), and it is faster on any OS, whether x86 or Arm.
  * **Advantages**: **The internal implementation is in C++.** Compared to `transformers.BertTokenizerFast`, which is implemented in Rust, it is 1.2x faster while producing exactly the same output.
    * You cannot specify `pt` (PyTorch tensors) in `return_tensors`, but this is not a problem.[^1]
  * **Disadvantages**: None, other than needing to install PaddlePaddle and PaddleNLP.

## 4. Performance test

### 4.1 Performance test (Batch text encoding)


The graph below compares `transformers.BertTokenizerFast` and `paddlenlp.transformers.bert.tokenizer_fast.BertTokenizerFast` across batch sizes.

Both libraries are faster when returning `np.ndarray`; presumably the final conversion to `pt` or `pd` tensors adds the extra overhead.
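
A minimal sketch of such a batch benchmark, using `transformers.BertTokenizerFast` with a placeholder corpus; PaddleNLP's `BertTokenizerFast` (module path as above) can be dropped into the same harness with `return_tensors="pd"` or `"np"`:

```python
import time
from transformers import BertTokenizerFast

def bench(tokenizer, texts, batch_size, return_tensors):
    """Tokenize the corpus in batches and return the elapsed wall-clock time."""
    start = time.perf_counter()
    for i in range(0, len(texts), batch_size):
        tokenizer(texts[i:i + batch_size], padding=True, truncation=True,
                  max_length=128, return_tensors=return_tensors)
    return time.perf_counter() - start

texts = ["Hello, world!"] * 10_000            # placeholder corpus
hf = BertTokenizerFast.from_pretrained("bert-base-uncased")
for bs in (1, 8, 32, 128):
    print(bs, bench(hf, texts, bs, "np"), bench(hf, texts, bs, "pt"))
```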



<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/BatchTest_dark.png?raw=true">
    <img alt="batchtest" src="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/BatchTest_light.png?raw=true" width=100%>
  </picture>
</p>




|   BatchSize |   transformers(pt) |   paddlenlp(pd) |   transformers(np) |   paddlenlp(np) |
|-------------|--------------------|-----------------|--------------------|-----------------|
|           1 |           2.32744  |        1.74695  |           1.87685  |        1.56597  |
|           2 |           1.87427  |        1.53865  |           1.50911  |        1.45918  |
|           4 |           1.54254  |        1.13622  |           1.12902  |        1.07593  |
|           8 |           1.25432  |        0.821463 |           0.850269 |        0.798163 |
|          16 |           1.09129  |        0.640243 |           0.67293  |        0.617309 |
|          32 |           0.994335 |        0.528553 |           0.587379 |        0.519887 |
|          64 |           0.971175 |        0.476652 |           0.537753 |        0.471145 |
|         128 |           0.952003 |        0.478113 |           0.531592 |        0.451384 |

[^1]: As the graph above shows, returning `pt` (PyTorch tensors) is much slower.

### 4.2 Performance test (Single text encoding)

Accuracy is measured against `transformers.BertTokenizer` as the baseline. If even one of the `input_ids` differs, the output is counted as incorrect; a sketch of this check is shown below.
Surprisingly, `tensorflow-text` is now much faster than before. However, it still has no advantage over the other libraries in this comparison.
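
A minimal sketch of this accuracy check, assuming both tokenizers are built from the same vocab.txt and that `FlashBertTokenizer` returns the full `input_ids` sequence as in the usage example above (paths and corpus are placeholders):

```python
from transformers import BertTokenizer
from flash_tokenizer import FlashBertTokenizer

baseline = BertTokenizer("path/to/vocab.txt", do_lower_case=True)
candidate = FlashBertTokenizer("path/to/vocab.txt", do_lower_case=True)

titles = ["Hello, world!", "FlashTokenizer is fast."]  # placeholder corpus

# A title counts as correct only if every input_id matches the baseline.
correct = sum(
    list(candidate(text)) == list(baseline(text)["input_ids"])
    for text in titles
)
print(f"Accuracy: {100.0 * correct / len(titles):.4f}%")
```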



| Tokenizer             | Elapsed Time (s) |   titles | Accuracy (%) |
|-----------------------|----------------|----------|------------|
| BertTokenizer(Huggingface)     |       255.651  |  404,464 |   100 (Baseline)   |
| ✨ **FlashBertTokenizer**    | ~~19.1325~~ ➡️ **16.526** 🔺 |  404,464 | ~~99.3248~~ ➡️ 99.8442 🔺 |
| BertTokenizerFast(HF) |        73.3019 |  404,464 |    99.8615 |
| BertTokenizerFast(PP) |        64.0603 |  404,464 |    99.8615 |
| FastBertTokenizer(TF) |        77.6923 |  404,464 |    99.8507 |
| Blingfire             |        11.5904 |  404,464 |    96.8979 |

For both `single text` and `batch text`, PaddleNLP's implementation is always faster than HuggingFace's implementation, and the results are exactly the same, so there is no unique advantage of HuggingFace's `transformers.BertTokenizerFast`.

Now you may have to choose between speed (blingfire) and balance (PaddleNLP).

BertTokenizer requires a fast [single-core CPU](https://www.cpubenchmark.net/singleThread.html) to get fast results.

The `flash-tokenizer`, which I implemented because I didn't like the other tokenizers, has a clear advantage in both speed and accuracy.


<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/TokenizerPerformanceGraph_dark.png?raw=true">
    <img alt="FlashTokenizer" src="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/TokenizerPerformanceGraph_light.png?raw=true" width=100%>
  </picture>
</p>

<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/TokenizerPerformanceBar_dark.jpg?raw=true">
    <img alt="FlashTokenizer" src="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/TokenizerPerformanceBar_light.jpg?raw=true" width=100%>
  </picture>
</p>



```mermaid
%%{ init: { "er" : { "layoutDirection" : "LR" } } }%%
erDiagram
    Text ||--o{ Preprocess : tokenize
    Preprocess o{--|| Inference : memcpy_h2d
    Inference o{--|| Postprocess : memcpy_d2h
```


## 5. Cases where the result differs from BertTokenizer

<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/WrongAnswer_dark.png?raw=true">
    <img alt="WA" src="https://github.com/NLPOptimize/flash-tokenizer/blob/main/assets/WrongAnswer_light.png?raw=true" width=100%>
  </picture>
</p>



As the diagram above shows, whenever `transformers.BertTokenizerFast` is wrong, `tensorflow-text`'s `FastBertTokenizer` and `FlashBertTokenizer` are wrong as well, while the sets of cases that only `FlashBertTokenizer` or only `FastBertTokenizer(TF)` get wrong differ from each other.
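
A sketch of how such wrong-answer sets can be compared, assuming the indices of mismatching titles have already been collected per tokenizer (the indices below are purely illustrative):

```python
# wrong[name] = set of example indices where that tokenizer's input_ids
# differ from the transformers.BertTokenizer baseline (collected beforehand).
wrong = {
    "BertTokenizerFast(HF)": {3, 17, 42},        # illustrative indices
    "FastBertTokenizer(TF)": {3, 17, 42, 99},
    "FlashBertTokenizer":    {3, 17, 42, 7},
}

# Whenever HF-fast is wrong, the other two are wrong as well:
assert wrong["BertTokenizerFast(HF)"] <= wrong["FastBertTokenizer(TF)"]
assert wrong["BertTokenizerFast(HF)"] <= wrong["FlashBertTokenizer"]

# ...but the cases unique to each of the other two differ:
print(wrong["FlashBertTokenizer"] - wrong["FastBertTokenizer(TF)"])  # {7}
print(wrong["FastBertTokenizer(TF)"] - wrong["FlashBertTokenizer"])  # {99}
```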




## 6. Compatibility

FlashBertTokenizer can be used with any framework. CUDA version compatibility for each framework also matters for fast LLM inference.

 * [PyTorch](https://pytorch.org/) no longer supports installation using conda.
 * [ONNXRUNTIME](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#cuda-12x) is separated by CUDA version.
 * PyTorch is also dropping older CUDA 12.x releases in favor of the newer CUDA 12.8. However, the trend across frameworks is to keep supporting CUDA 11.8.
   * CUDA 12.x was made for the newest GPUs, Hopper and Blackwell, and on GPUs like Volta, CUDA 11.8 is faster than CUDA 12.x.



| DL Framework    | Version  | OS             | CPU  | CUDA 11.8 | CUDA 12.3 | CUDA 12.4 | CUDA 12.6 | CUDA 12.8 |
| --------------- | -------- | -------------- | ---- | --------- | --------- | --------- | --------- | --------- |
| PyTorch         | 2.6      | Linux, Windows | ⚪   | ⚪        | ❌        | ⚪        | ⚪        | ❌        |
| PyTorch         | 2.7      | Linux, Windows | ⚪   | ⚪        | ❌        | ❌        | ⚪        | ⚪        |
| ONNXRUNTIME(11) | 1.20.x   | Linux, Windows | ⚪   | ⚪        | ❌        | ❌        | ❌        | ❌        |
| ONNXRUNTIME(12) | 1.20.x   | Linux, Windows | ⚪   | ❌        | ⚪        | ⚪        | ⚪        | ⚪        |
| PaddlePaddle    | 3.0-beta | Linux, Windows | ⚪   | ⚪        | ❌        | ❌        | ❌        | ❌        |


## 7. GPU Tokenizer

You can run the WordPiece Tokenizer on GPUs via [RAPIDS (cuDF)](https://docs.rapids.ai/).
 * [Implementation](https://github.com/rapidsai/cudf/blob/0e99ec3ec15b8b0ebe68bd884c7d22d600e9259e/python/cudf/cudf/core/wordpiece_tokenize.py#L10)
 * [Example](https://github.com/rapidsai/cudf/blob/0e99ec3ec15b8b0ebe68bd884c7d22d600e9259e/python/cudf/cudf/tests/text/test_subword_tokenizer.py#L244)

As the [RAPIDS installation guide](https://docs.rapids.ai/install/) shows, it only supports Linux and its CUDA version requirements differ from other frameworks, so [Docker](https://hub.docker.com/r/rapidsai/base) is the best choice. GPU tokenization is faster than the CPU for batch processing but slower than the CPU for streaming processing.

## TODO

- [ ] [BidirectionalWordPieceTokenizer](https://github.com/snunlp/KR-BERT/blob/master/krbert_tensorflow/tokenization_ranked.py)
- [ ] BatchEncoder with Multithreading. 
- [ ] CUDA Version.
- [ ] Replace `std::list` with `boost::intrusive::list`.


## Implementation Problems

> [!WARNING]  
> The following data structures are not applicable or are slower.
>
> * `std::list<std::reference_wrapper<std::string>>`
> * `std::string_view`
> * `std::pmr::list<std::pmr::string>`
>
> Using robin_hood's fastest unordered_flat_map as a cache for the BasicTokenizer and WordpieceTokenizer actually makes them slower, despite a 95% cache-hit rate, because of the access overhead.



## Acknowledgement

FlashTokenizer is inspired by [FlashAttention](https://github.com/Dao-AILab/flash-attention), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [FastBertTokenizer](https://github.com/georg-jung/FastBertTokenizer) and [tokenizers-cpp](https://github.com/mlc-ai/tokenizers-cpp) projects.


## References

* https://medium.com/@techhara/which-bert-tokenizer-is-faster-b832aa978b46
* https://medium.com/@atharv6f_47401/wordpiece-tokenization-a-bpe-variant-73cc48865cbf
* https://www.restack.io/p/transformer-models-bert-answer-fast-berttokenizerfast-cat-ai
* https://medium.com/@anmolkohli/my-notes-on-bert-tokenizer-and-model-98dc22d0b64
* https://nocomplexity.com/documents/fossml/nlpframeworks.html
* https://github.com/martinus/robin-hood-hashing
            
