botok

Name: botok
Version: 0.8.12
Home page: https://github.com/Esukhia/botok
Summary: Tibetan Word Tokenizer
Upload time: 2023-05-17 11:36:37
Author: Esukhia development team
Requires Python: >=3.6
License: Apache2
Keywords: nlp, computational_linguistics, tibetan, tokenizer, token
Requirements: black, isort, pytest, coveralls, covdefaults
            <h1 align="center">
  <br>
  <a href="https://openpecha.org"><img src="https://avatars.githubusercontent.com/u/82142807?s=400&u=19e108a15566f3a1449bafb03b8dd706a72aebcd&v=4" alt="OpenPecha" width="150"></a>
  <br>
</h1>

<h3 align="center">Botok – Python Tibetan Tokenizer</h3>

<!-- Replace the title of the repository -->

<p align="center">
    <a><img src="https://img.shields.io/github/release/Esukhia/botok.svg" alt="GitHub release"></a> 
    <a href="https://botok.readthedocs.io/en/latest/?badge=latest"><img src="https://readthedocs.org/projects/botok/badge/?version=latest" alt="Documentation Status"></a> 
    <a href="https://travis-ci.org/Esukhia/botok"><img src="https://travis-ci.org/Esukhia/botok.svg?branch=master" alt="Build Status"></a> 
    <a href="https://coveralls.io/github/Esukhia/botok?branch=master"><img src="https://coveralls.io/repos/github/Esukhia/botok/badge.svg?branch=master" alt="Coverage Status"></a> 
    <a href="https://black.readthedocs.io/en/stable/"><img src="https://img.shields.io/badge/code%20style-black-000000.svg" alt="Code style: black"></a> 
</p>

<p align="center">
  <a href="#description">Description</a> •
  <a href="#install">Install</a> •
  <a href="#example">Example</a> •
  <a href="#commentedexample">Commented Example</a> •
  <a href="#docs">Docs</a> •
  <a href="#owners">Owners</a> •
  <a href="#Acknowledgements">Acknowledgements</a> •
  <a href="#Maintainance">Maintainance</a> •
  <a href="#License">License</a>
</p>
<hr>

## Description

Botok tokenizes Tibetan text into words, with optional attributes such as the lemma, POS tag and cleaned form of each token.

## Install
Requires Python 3.6 or higher.

    pip3 install botok

## Example

```python
from pathlib import Path

from botok import WordTokenizer
from botok.config import Config


def get_tokens(wt, text):
    # split_affixes=False keeps affixed particles attached to their host word
    return wt.tokenize(text, split_affixes=False)


if __name__ == "__main__":
    config = Config(dialect_name="general", base_path=Path.home())
    wt = WordTokenizer(config=config)
    text = "བཀྲ་ཤིས་བདེ་ལེགས་ཞུས་རྒྱུ་ཡིན་ སེམས་པ་སྐྱིད་པོ་འདུག།"
    tokens = get_tokens(wt, text)
    for token in tokens:
        print(token)
```
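Each printed token also exposes the attributes mentioned in the description. A minimal sketch of reading a few of them, assuming attribute names like `text`, `pos` and `lemma` (print a token to see the exact fields your version exposes):

```python
from pathlib import Path

from botok import WordTokenizer
from botok.config import Config

config = Config(dialect_name="general", base_path=Path.home())
wt = WordTokenizer(config=config)

for token in wt.tokenize("བཀྲ་ཤིས་བདེ་ལེགས།", split_affixes=False):
    # attribute names assumed; inspect print(token) to confirm the available fields
    print(token.text, token.pos, token.lemma)
```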

https://user-images.githubusercontent.com/24893704/148767959-31cc0a69-4c83-4841-8a1d-028d376e4677.mp4

## Commented Example

```python
>>> from botok import Text

>>> # input is a multi-line input string
>>> in_str = """ལེ གས། བཀྲ་ཤིས་མཐའི་ ༆ ཤི་བཀྲ་ཤིས་  tr 
... བདེ་་ལེ གས། བཀྲ་ཤིས་བདེ་ལེགས་༡༢༣ཀཀ། 
... མཐའི་རྒྱ་མཚོར་གནས་པའི་ཉས་ཆུ་འཐུང་།། །།མཁའ།"""


### STEP 1: instantiating Text

>>> # A. on a string
>>> t = Text(in_str)

>>> # B. on a file
... # note all following operations can be applied to files in this way.
>>> from pathlib import Path
>>> in_file = Path.cwd() / 'test.txt'

>>> # file content:
>>> in_file.read_text()
'བཀྲ་ཤིས་བདེ་ལེགས།།\n'

>>> t = Text(in_file)
>>> t.tokenize_chunks_plaintext

>>> # checking an output file has been written:
... # a BOM is added by default so that Notepad on Windows doesn't scramble the line breaks
>>> out_file = Path.cwd() / 'test_pybo.txt'
>>> out_file.read_text()
'\ufeffབཀྲ་ ཤིས་ བདེ་ ལེགས །།'

### STEP 2: properties perform actions on the input string:
### note: original spaces are replaced by underscores.

>>> # OUTPUT1: chunks are meaningful groups of chars from the input string.
... # see how punctuation, numerals, non-bo (non-Tibetan) content and syllables are all neatly grouped.
>>> t.tokenize_chunks_plaintext
'ལེ_གས །_ བཀྲ་ ཤིས་ མཐའི་ _༆_ ཤི་ བཀྲ་ ཤིས་__ tr_\n བདེ་་ ལེ_གས །_ བཀྲ་ ཤིས་ བདེ་ ལེགས་ ༡༢༣ ཀཀ །_\n མཐའི་ རྒྱ་ མཚོར་ གནས་ པའི་ ཉས་ ཆུ་ འཐུང་ །།_།། མཁའ །'

>>> # OUTPUT2: could as well be achieved by in_str.split(' ')
>>> t.tokenize_on_spaces
'ལེ གས། བཀྲ་ཤིས་མཐའི་ ༆ ཤི་བཀྲ་ཤིས་ tr བདེ་་ལེ གས། བཀྲ་ཤིས་བདེ་ལེགས་༡༢༣ཀཀ། མཐའི་རྒྱ་མཚོར་གནས་པའི་ཉས་ཆུ་འཐུང་།། །།མཁའ།'

>>> # OUTPUT3: segments into words.
... # see how བདེ་་ལེ_གས was still recognized as a single word, even with the space and the double tsek.
... # the affixed particles are separated from the hosting word: མཐ འི་ རྒྱ་མཚོ ར་ གནས་པ འི་ ཉ ས་
>>> t.tokenize_words_raw_text
Loading Trie... (2s.)
'ལེ_གས །_ བཀྲ་ཤིས་ མཐ འི་ _༆_ ཤི་ བཀྲ་ཤིས་_ tr_ བདེ་་ལེ_གས །_ བཀྲ་ཤིས་ བདེ་ལེགས་ ༡༢༣ ཀཀ །_ མཐ འི་ རྒྱ་མཚོ ར་ གནས་པ འི་ ཉ ས་ ཆུ་ འཐུང་ །།_།། མཁའ །'
>>> t.tokenize_words_raw_lines
'ལེ_གས །_ བཀྲ་ཤིས་ མཐ འི་ _༆_ ཤི་ བཀྲ་ཤིས་__ tr_\n བདེ་་ལེ_གས །_ བཀྲ་ཤིས་ བདེ་ལེགས་ ༡༢༣ ཀཀ །_\n མཐ འི་ རྒྱ་མཚོ ར་ གནས་པ འི་ ཉ ས་ ཆུ་ འཐུང་ །།_།། མཁའ །'

>>> # OUTPUT4: segments into words, then counts the occurrences of each word found
... # by default, it counts in_str's substrings in the output, which is why we have both བདེ་་ལེ གས	1 and བདེ་ལེགས་	1
... # this behaviour can easily be modified to count the words botok recognized instead (see advanced usage)
>>> print(t.list_word_types)
འི་	3
། 	2
བཀྲ་ཤིས་	2
མཐ	2
ལེ གས	1
 ༆ 	1
ཤི་	1
བཀྲ་ཤིས་  	1
tr \n	1
བདེ་་ལེ གས	1
བདེ་ལེགས་	1
༡༢༣	1
ཀཀ	1
། \n	1
རྒྱ་མཚོ	1
ར་	1
གནས་པ	1
ཉ	1
ས་	1
ཆུ་	1
འཐུང་	1
།། །།	1
མཁའ	1
།	1
```
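As noted in OUTPUT4, the counts above are taken over substrings of `in_str`. A minimal sketch of counting the words botok actually recognized instead, using the standard library and assuming `token.text` holds each token's surface form:

```python
from collections import Counter
from pathlib import Path

from botok import WordTokenizer
from botok.config import Config

config = Config(dialect_name="general", base_path=Path.home())
wt = WordTokenizer(config=config)

tokens = wt.tokenize("བཀྲ་ཤིས་བདེ་ལེགས་ཞུས་རྒྱུ་ཡིན་ སེམས་པ་སྐྱིད་པོ་འདུག།", split_affixes=False)
counts = Counter(token.text for token in tokens)  # token.text assumed to be the surface form
for word, freq in counts.most_common():
    print(word, freq)
```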

##### Custom dialect pack:

To use a custom dialect pack:

- Prepare your dialect pack with the same folder structure as the [general dialect pack](https://github.com/Esukhia/botok-data/tree/master/dialect_packs/general).
- Instantiate a `Config` object, passing it your dialect name and path.
- Instantiate your tokenizer object with that config object, as sketched below.
- Your tokenizer will then use your custom dialect pack and, on later runs, rebuild the custom trie from the pickled trie file.
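A minimal sketch of the steps above, assuming a hypothetical pack called `my_dialect`; both the dialect name and the path below are placeholders for your own setup:

```python
from pathlib import Path

from botok import WordTokenizer
from botok.config import Config

# Placeholders: "my_dialect" must follow the same folder structure as the
# general dialect pack, and base_path must point to where your packs live.
config = Config(dialect_name="my_dialect", base_path=Path("/path/to/dialect_packs"))
wt = WordTokenizer(config=config)

tokens = wt.tokenize("བཀྲ་ཤིས་བདེ་ལེགས།", split_affixes=False)
print([token.text for token in tokens])
```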

## Docs

No documentation is available yet.

<!-- This section must link to the docs which are in the root of the repository in /docs -->


## Owners

- [@drupchen](https://github.com/drupchen)
- [@eroux](https://github.com/eroux)
- [@ngawangtrinley](https://github.com/ngawangtrinley)
- [@10zinten](https://github.com/10zinten)
- [@kaldan007](https://github.com/kaldan007)

<!-- This section lists the owners of the repo -->


## Acknowledgements

**botok** is an open source library for Tibetan NLP.

We are always open to cooperation in introducing new features, tool integrations and testing solutions.

Many thanks to the companies and organizations who have supported botok's development, especially:

* [Khyentse Foundation](https://khyentsefoundation.org) for contributing USD 22,000 to kickstart the project
* The [Barom/Esukhia canon project](http://www.barom.org) for sponsoring training data curation
* [BDRC](https://tbrc.org) for contributing 2 staff for 6 months for data curation

## Maintenance

Build the source dist:

```bash
rm -rf dist/
python3 setup.py clean sdist
```

and upload it to PyPI with twine (version >= `1.11.0`):

```bash
twine upload dist/*
```

## License

The Python code is Copyright (C) 2019 Esukhia, provided under [Apache 2](LICENSE). 

Contributors:
 * [Drupchen](https://github.com/drupchen)
 * [Élie Roux](https://github.com/eroux)
 * [Ngawang Trinley](https://github.com/ngawangtrinley)
 * [Mikko Kotila](https://github.com/mikkokotila)
 * [Thubten Rinzin](https://github.com/thubtenrigzin)
 * [Tenzin](https://github.com/10zinten)
 * Joyce Mackzenzie for reworking the logo

            
