Name | sinlib |
Version | 0.1.5 |
home_page | None |
Summary | Sinhala NLP Toolkit |
upload_time | 2024-09-03 06:29:07 |
maintainer | None |
docs_url | None |
author | None |
requires_python | <=3.12,>=3.9.7 |
license | MIT License Copyright (c) [2024] [Ransaka Ravihara] Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
keywords | nlp, sinhala, python |
VCS | |
bugtrack_url | |
requirements | numpy, torch, tqdm |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# Sinlib
![Alt text](sinlib.png)
## Installation
Install from PyPI:
`pip install sinlib`
## Basic usage
01. Tokenizer
```python
from sinlib import Tokenizer
corpus = """මේ අතර, පෙබරවාරි මාසයේ පළමු දින 08 තුළ පමණක් විදෙස් සංචාරකයන් 60,122 දෙනෙකු මෙරටට පැමිණ තිබේ.
ඒ අනුව මේ වසරේ ගත වූ කාලය තුළ සංචාරකයන් 268,375 දෙනෙකු දිවයිනට පැමිණ ඇති බව සංචාරක සංවර්ධන අධිකාරිය සඳහන් කරයි.
ඉන් වැඩි ම සංචාරකයන් පිරිසක් ඉන්දියාවෙන් පැමිණ ඇති අතර, එම සංඛ්යාව 42,768කි.
ඊට අමතර ව රුසියාවෙන් සංචාරකයන් 39,914ක්, බ්රිතාන්යයෙන් 22,278ක් සහ ජර්මනියෙන් සංචාරකයන් 18,016 දෙනෙකු පැමිණ ඇති බව වාර්තා වේ."""
tokenizer = Tokenizer()
tokenizer.train([corpus])
# encode text into token ids
encoding = tokenizer("මේ අතර, පෙබරවාරි මාසයේ පළමු")

# map ids back to tokens
[tokenizer.token_id_to_token_map[id] for id in encoding]
# ['මේ', ' ', 'අ', 'ත', 'ර', ',', ' ', 'පෙ', 'බ', 'ර', 'වා', 'රි', ' ', 'මා', 'ස', 'යේ', ' ', 'ප', 'ළ', 'මු']
```
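For intuition, the grouping behaviour above (a base character absorbs the dependent vowel signs and virama that follow it) can be sketched in a few lines of plain Python. This is an illustrative toy, not sinlib's actual training algorithm; the sign range used is an assumption based on the Sinhala Unicode block:

```python
# Toy grapheme grouper: attach Sinhala dependent signs (U+0DCA-U+0DDF)
# and zero-width joiners to the preceding character.
SIGNS = {chr(c) for c in range(0x0DCA, 0x0DE0)} | {"\u200d"}

def group_sinhala(text: str) -> list[str]:
    tokens: list[str] = []
    for ch in text:
        if tokens and ch in SIGNS:
            tokens[-1] += ch  # combining sign joins the previous token
        else:
            tokens.append(ch)
    return tokens

print(group_sinhala("මේ අතර, පෙබරවාරි මාසයේ පළමු"))
# ['මේ', ' ', 'අ', 'ත', 'ර', ',', ' ', 'පෙ', 'බ', 'ර', 'වා', 'රි', ' ', 'මා', 'ස', 'යේ', ' ', 'ප', 'ළ', 'මු']
```

On this particular sentence the toy grouping happens to match the tokenizer output shown above, but the trained tokenizer also learns merges from the corpus, which a purely rule-based grouper cannot reproduce.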
02. Preprocessor
```python
sent = ['මෙය සිංහල වාක්යක්', 'මෙය සිංහල වාක්යක් සමග english character කීපයක්','This is complete english sentence']
print(sent)
# ['මෙය සිංහල වාක්\u200dයක්', 'මෙය සිංහල වාක්\u200dයක් සමග english character කීපයක්', 'This is complete english sentence']

from sinlib.preprocessing import get_sinhala_character_ratio

get_sinhala_character_ratio(sent)
# [0.9, 0.46875, 0.0]
```
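A rough version of this ratio can be computed directly from the Sinhala Unicode block (U+0D80–U+0DFF). The sketch below is for illustration only; sinlib's own counting rules evidently differ slightly (it reports 0.9 for the first sentence above), so treat these numbers as indicative:

```python
def sinhala_ratio(text: str) -> float:
    """Fraction of code points that fall in the Sinhala Unicode block."""
    if not text:
        return 0.0
    sinhala = sum(1 for ch in text if 0x0D80 <= ord(ch) <= 0x0DFF)
    return sinhala / len(text)

print(sinhala_ratio("This is complete english sentence"))  # -> 0.0
print(round(sinhala_ratio("මෙය සිංහල වාක්\u200dයක්"), 2))  # high, but spaces and ZWJ dilute it
```

The zero-width joiner (U+200D) sits outside the Sinhala block, which is one reason naive per-code-point ratios disagree with grapheme-aware ones.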
03. Sinhala Romanizer
```python
texts = ["hello, මේ මාසයේ ගත වූ දින 15ක කාලය තුළ කොළඹ නගරය ආශ්රිත ව", "මෑතකාලීන ව රට මුහුණ දුන් අභියෝගාත්මකම ආර්ථික කාරණාව ණය ප්රතිව්යුගතකරණය බව මුදල් රාජ්ය අමාත්ය ආචාර්ය රංජිත් සියඹ$$$ mahatha see more****"]
from sinlib import Romanizer
romanizer = Romanizer(char_mapper_fp=None, tokenizer_vocab_path=None)
romanizer(texts)
# ['hello, me masaye gatha wu dina 15ka kalaya thula kolaba nagaraya ashritha wa',
#  'methakaleena wa rata muhuna dun abhiyogathmakama arthika karanawa naya prathiwyugathakaranaya bawa mudal rajya amathya acharya ranjith siyaba$$$ mahatha see more****']
```
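Under the hood, a romanizer of this kind is essentially a character-to-Latin mapping plus a rule for the inherent vowel. The mini-map below is hypothetical (sinlib loads its real mapping from `char_mapper_fp`); it covers just enough letters to show the replace-inherent-'a' rule:

```python
# Hypothetical mini romanizer: consonants carry an inherent 'a',
# which a dependent vowel sign replaces.
CONSONANTS = {"ම": "ma", "ග": "ga", "ත": "tha", "ස": "sa", "ය": "ya"}
VOWEL_SIGNS = {"ා": "a", "ේ": "e"}

def romanize(text: str) -> str:
    out = ""
    for ch in text:
        if ch in VOWEL_SIGNS:
            out = out[:-1] + VOWEL_SIGNS[ch]  # swap the inherent 'a' for this vowel
        else:
            out += CONSONANTS.get(ch, ch)  # unmapped characters pass through
    return out

print(romanize("මේ මාසයේ ගත"))  # -> 'me masaye gatha'
```

The pass-through fallback mirrors what the library's example output shows: Latin text, digits, and symbols like `$$$` survive romanization unchanged.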
Raw data
{
"_id": null,
"home_page": null,
"name": "sinlib",
"maintainer": null,
"docs_url": null,
"requires_python": "<=3.12,>=3.9.7",
"maintainer_email": null,
"keywords": "NLP, Sinhala, python",
"author": null,
"author_email": "Ransaka <ransaka.ravihara@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/f7/87/f18dc605de83f728ea32934dc42d412f4631659cc4a9a520728027734b73/sinlib-0.1.5.tar.gz",
"platform": null,
"description": "# Sinlib\n\n![Alt text](sinlib.png)\n\n## Installation\n\nUsing pypi\n`pip install sinlib`\n\n## Basic usage \n\n01. Tokenizer\n\n```python\nfrom sinlib import Tokenizer\n\ncorpus = \"\"\"\u0db8\u0dda \u0d85\u0dad\u0dbb, \u0db4\u0dd9\u0db6\u0dbb\u0dc0\u0dcf\u0dbb\u0dd2 \u0db8\u0dcf\u0dc3\u0dba\u0dda \u0db4\u0dc5\u0db8\u0dd4 \u0daf\u0dd2\u0db1 08 \u0dad\u0dd4\u0dc5 \u0db4\u0db8\u0dab\u0d9a\u0dca \u0dc0\u0dd2\u0daf\u0dd9\u0dc3\u0dca \u0dc3\u0d82\u0da0\u0dcf\u0dbb\u0d9a\u0dba\u0db1\u0dca 60,122 \u0daf\u0dd9\u0db1\u0dd9\u0d9a\u0dd4 \u0db8\u0dd9\u0dbb\u0da7\u0da7 \u0db4\u0dd0\u0db8\u0dd2\u0dab \u0dad\u0dd2\u0db6\u0dda.\n\u0d92 \u0d85\u0db1\u0dd4\u0dc0 \u0db8\u0dda \u0dc0\u0dc3\u0dbb\u0dda \u0d9c\u0dad \u0dc0\u0dd6 \u0d9a\u0dcf\u0dbd\u0dba \u0dad\u0dd4\u0dc5 \u0dc3\u0d82\u0da0\u0dcf\u0dbb\u0d9a\u0dba\u0db1\u0dca 268\u200d,375 \u0daf\u0dd9\u0db1\u0dd9\u0d9a\u0dd4 \u0daf\u0dd2\u0dc0\u0dba\u0dd2\u0db1\u0da7 \u0db4\u0dd0\u0db8\u0dd2\u0dab \u0d87\u0dad\u0dd2 \u0db6\u0dc0 \u0dc3\u0d82\u0da0\u0dcf\u0dbb\u0d9a \u0dc3\u0d82\u0dc0\u0dbb\u0dca\u0db0\u0db1 \u0d85\u0db0\u0dd2\u0d9a\u0dcf\u0dbb\u0dd2\u0dba \u0dc3\u0db3\u0dc4\u0db1\u0dca \u0d9a\u0dbb\u0dba\u0dd2.\n\u0d89\u0db1\u0dca \u0dc0\u0dd0\u0da9\u0dd2 \u0db8 \u0dc3\u0d82\u0da0\u0dcf\u0dbb\u0d9a\u0dba\u0db1\u0dca \u0db4\u0dd2\u0dbb\u0dd2\u0dc3\u0d9a\u0dca \u0d89\u0db1\u0dca\u0daf\u0dd2\u0dba\u0dcf\u0dc0\u0dd9\u0db1\u0dca \u0db4\u0dd0\u0db8\u0dd2\u0dab \u0d87\u0dad\u0dd2 \u0d85\u0dad\u0dbb, \u0d91\u0db8 \u0dc3\u0d82\u0d9b\u0dca\u200d\u0dba\u0dcf\u0dc0 42,768\u0d9a\u0dd2.\n\u0d8a\u0da7 \u0d85\u0db8\u0dad\u0dbb \u0dc0 \u0dbb\u0dd4\u0dc3\u0dd2\u0dba\u0dcf\u0dc0\u0dd9\u0db1\u0dca \u0dc3\u0d82\u0da0\u0dcf\u0dbb\u0d9a\u0dba\u0db1\u0dca 39,914\u0d9a\u0dca, \u0db6\u0dca\u200d\u0dbb\u0dd2\u0dad\u0dcf\u0db1\u0dca\u200d\u0dba\u0dba\u0dd9\u0db1\u0dca 22,278\u0d9a\u0dca \u0dc3\u0dc4 \u0da2\u0dbb\u0dca\u0db8\u0db1\u0dd2\u0dba\u0dd9\u0db1\u0dca \u0dc3\u0d82\u0da0\u0dcf\u0dbb\u0d9a\u0dba\u0db1\u0dca 18,016 
\u0daf\u0dd9\u0db1\u0dd9\u0d9a\u0dd4 \u0db4\u0dd0\u0db8\u0dd2\u0dab \u0d87\u0dad\u0dd2 \u0db6\u0dc0 \u0dc0\u0dcf\u0dbb\u0dca\u0dad\u0dcf \u0dc0\u0dda.\"\"\"\n\ntokenizer = Tokenizer()\ntokenizer.train([corpus])\n\n#encode text into tokens\nencoding = tokenizer(\"\u0db8\u0dda \u0d85\u0dad\u0dbb, \u0db4\u0dd9\u0db6\u0dbb\u0dc0\u0dcf\u0dbb\u0dd2 \u0db8\u0dcf\u0dc3\u0dba\u0dda \u0db4\u0dc5\u0db8\u0dd4\")\n\n#list tokens\n[tokenizer.token_id_to_token_map[id] for id in encoding]\n['\u0db8\u0dda', ' ', '\u0d85', '\u0dad', '\u0dbb', ',', ' ', '\u0db4\u0dd9', '\u0db6', '\u0dbb', '\u0dc0\u0dcf', '\u0dbb\u0dd2', ' ', '\u0db8\u0dcf', '\u0dc3', '\u0dba\u0dda', ' ', '\u0db4', '\u0dc5', '\u0db8\u0dd4']\n```\n\n02. Preprocessor\n ```python\nsent = ['\u0db8\u0dd9\u0dba \u0dc3\u0dd2\u0d82\u0dc4\u0dbd \u0dc0\u0dcf\u0d9a\u0dca\u200d\u0dba\u0d9a\u0dca', '\u0db8\u0dd9\u0dba \u0dc3\u0dd2\u0d82\u0dc4\u0dbd \u0dc0\u0dcf\u0d9a\u0dca\u200d\u0dba\u0d9a\u0dca \u0dc3\u0db8\u0d9c english character \u0d9a\u0dd3\u0db4\u0dba\u0d9a\u0dca','This is complete english sentence']\nprint(sent)\n#['\u0db8\u0dd9\u0dba \u0dc3\u0dd2\u0d82\u0dc4\u0dbd \u0dc0\u0dcf\u0d9a\u0dca\\u200d\u0dba\u0d9a\u0dca', '\u0db8\u0dd9\u0dba \u0dc3\u0dd2\u0d82\u0dc4\u0dbd \u0dc0\u0dcf\u0d9a\u0dca\\u200d\u0dba\u0d9a\u0dca \u0dc3\u0db8\u0d9c english character \u0d9a\u0dd3\u0db4\u0dba\u0d9a\u0dca', 'This is #complete english sentence']\n\nfrom sinlib.preprocessing import get_sinhala_character_ratio\n\nget_sinhala_character_ratio(sent)\n#[0.9, 0.46875, 0.0]\n```\n\n03. 
Sinnhala Romanizer\n ```python\ntexts = [\"hello, \u0db8\u0dda \u0db8\u0dcf\u0dc3\u0dba\u0dda \u0d9c\u0dad \u0dc0\u0dd6 \u0daf\u0dd2\u0db1 15\u0d9a \u0d9a\u0dcf\u0dbd\u0dba \u0dad\u0dd4\u0dc5 \u0d9a\u0ddc\u0dc5\u0db9 \u0db1\u0d9c\u0dbb\u0dba \u0d86\u0dc1\u0dca\u200d\u0dbb\u0dd2\u0dad \u0dc0\", \"\u0db8\u0dd1\u0dad\u0d9a\u0dcf\u0dbd\u0dd3\u0db1 \u0dc0 \u0dbb\u0da7 \u0db8\u0dd4\u0dc4\u0dd4\u0dab \u0daf\u0dd4\u0db1\u0dca \u0d85\u0db7\u0dd2\u0dba\u0ddd\u0d9c\u0dcf\u0dad\u0dca\u0db8\u0d9a\u0db8 \u0d86\u0dbb\u0dca\u0dae\u0dd2\u0d9a \u0d9a\u0dcf\u0dbb\u0dab\u0dcf\u0dc0 \u0dab\u0dba \u0db4\u0dca\u200d\u0dbb\u0dad\u0dd2\u0dc0\u0dca\u200d\u0dba\u0dd4\u0d9c\u0dad\u0d9a\u0dbb\u0dab\u0dba \u0db6\u0dc0 \u0db8\u0dd4\u0daf\u0dbd\u0dca \u0dbb\u0dcf\u0da2\u0dca\u200d\u0dba \u0d85\u0db8\u0dcf\u0dad\u0dca\u200d\u0dba \u0d86\u0da0\u0dcf\u0dbb\u0dca\u0dba \u0dbb\u0d82\u0da2\u0dd2\u0dad\u0dca \u0dc3\u0dd2\u0dba\u0db9$$$ mahatha see more****\"]\n\nfrom sinlib import Romanizer\n\nromanizer = Romanizer(char_mapper_fp = None, tokenizer_vocab_path = None)\nromanizer(text)\n#['hello, me masaye gatha wu dina 15ka kalaya thula kolaba nagaraya ashritha wa',\n# 'methakaleena wa rata muhuna dun abhiyogathmakama arthika karanawa naya prathiwyugathakaranaya #bawa mudal rajya amathya acharya ranjith siyaba$$$ mahatha see more****']\n```\n",
"bugtrack_url": null,
"license": "MIT License Copyright (c) [2024] [Ransaka Ravihara] Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.",
"summary": "Sinhala NLP Toolkit",
"version": "0.1.5",
"project_urls": {
"Code": "https://github.com/Ransaka/sinlib",
"Docs": "https://github.com/Ransaka/sinlib"
},
"split_keywords": [
"nlp",
" sinhala",
" python"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "4fe8ed2edd89668ebeace6f87bf1c82f8bb7205271c289af1763e7172c8ce6f3",
"md5": "7d761778cd212c78379d66a8467b311f",
"sha256": "ca9be0b49705add84e7b1a9e74a55f7a4bd507d6a3c5d8073ebdbfda64811cb7"
},
"downloads": -1,
"filename": "sinlib-0.1.5-py3-none-any.whl",
"has_sig": false,
"md5_digest": "7d761778cd212c78379d66a8467b311f",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<=3.12,>=3.9.7",
"size": 4225211,
"upload_time": "2024-09-03T06:29:06",
"upload_time_iso_8601": "2024-09-03T06:29:06.052162Z",
"url": "https://files.pythonhosted.org/packages/4f/e8/ed2edd89668ebeace6f87bf1c82f8bb7205271c289af1763e7172c8ce6f3/sinlib-0.1.5-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "f787f18dc605de83f728ea32934dc42d412f4631659cc4a9a520728027734b73",
"md5": "7f9d9eda8034e46548e83c59031b9f49",
"sha256": "0c91cb31fbf70f036afcc874600b6ea7f60a95862985b1dcc01a60d3b6ef5c7c"
},
"downloads": -1,
"filename": "sinlib-0.1.5.tar.gz",
"has_sig": false,
"md5_digest": "7f9d9eda8034e46548e83c59031b9f49",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<=3.12,>=3.9.7",
"size": 4330738,
"upload_time": "2024-09-03T06:29:07",
"upload_time_iso_8601": "2024-09-03T06:29:07.650321Z",
"url": "https://files.pythonhosted.org/packages/f7/87/f18dc605de83f728ea32934dc42d412f4631659cc4a9a520728027734b73/sinlib-0.1.5.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-09-03 06:29:07",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Ransaka",
"github_project": "sinlib",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [
{
"name": "numpy",
"specs": []
},
{
"name": "torch",
"specs": []
},
{
"name": "tqdm",
"specs": []
}
],
"lcname": "sinlib"
}