uit-tokenizer

Name: uit-tokenizer
Version: 1.0
Home page: https://github.com/it-dainb/uitnlp.git
Summary: UITNLP: A Python NLP Library for Vietnamese
Upload time: 2023-08-05 15:38:49
Author: The UIT Natural Language Processing Group
Requires Python: >=3.6
License: Apache License 2.0
Keywords: natural-language-processing, nlp, natural-language-understanding, uit-nlp, vietnamese-word-segmentation
Requirements: none recorded
# UITNLP: A Python NLP Library for Vietnamese

## Installation

You can install this package from PyPI using [pip](http://www.pip-installer.org):

```
$ pip install uit-tokenizer
```

## Example

```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
from uit_tokenizer import load_word_segmenter

# Load the word segmenter with the 'base_sep_sfx' feature configuration.
word_segmenter = load_word_segmenter(feature_name='base_sep_sfx')

# Segment raw (not pre-tokenized) text in batches of up to 4 sentences.
word_segmenter.segment(texts=['Chào mừng bạn đến với Trường Đại học Công nghệ Thông tin, ĐHQG-HCM.'],
                       pre_tokenized=False, batch_size=4)
```
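
The README stops at the `segment` call and does not show what it returns. Assuming the method returns one segmented entry per input text (an assumption on our part, not something this README documents), the result can be inspected like this:

```python
# Assumption (not documented above): segment() returns an iterable
# with one segmented entry per input text.
results = word_segmenter.segment(texts=['Chào mừng bạn đến với Trường Đại học Công nghệ Thông tin, ĐHQG-HCM.'],
                                 pre_tokenized=False, batch_size=4)
for segmented in results:
    print(segmented)
```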

## Note
Currently, this package only wraps the Vietnamese word segmentation method published in [our paper](https://link.springer.com/chapter/10.1007/978-981-15-6168-9_33):

    @InProceedings{10.1007/978-981-15-6168-9_33,
      author    = "Nguyen, Duc-Vu and Van Thin, Dang and Van Nguyen, Kiet and Nguyen, Ngan Luu-Thuy",
      editor    = "Nguyen, Le-Minh and Phan, Xuan-Hieu and Hasida, K{\^o}iti and Tojo, Satoshi",
      title     = "Vietnamese Word Segmentation with SVM: Ambiguity Reduction and Suffix Capture",
      booktitle = "Computational Linguistics",
      year      = "2020",
      publisher = "Springer Singapore",
      address   = "Singapore",
      pages     = "400--413",
      abstract  = "In this paper, we approach Vietnamese word segmentation as a binary classification by using the Support Vector Machine classifier. We inherit features from prior works such as n-gram of syllables, n-gram of syllable types, and checking conjunction of adjacent syllables in the dictionary. We propose two novel ways to feature extraction, one to reduce the overlap ambiguity and the other to increase the ability to predict unknown words containing suffixes. Different from UETsegmenter and RDRsegmenter, two state-of-the-art Vietnamese word segmentation methods, we do not employ the longest matching algorithm as an initial processing step or any post-processing technique. According to experimental results on benchmark Vietnamese datasets, our proposed method obtained a better {\$}{\$}{\backslash}text {\{}F{\}}{\_}{\{}1{\}}{\backslash}text {\{}-score{\}}{\$}{\$}F1-scorethan the prior state-of-the-art methods UETsegmenter, and RDRsegmenter.",
      isbn      = "978-981-15-6168-9"
    }
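
As a rough illustration of the approach the abstract describes (and emphatically not the paper's implementation), the sketch below frames segmentation as binary classification over the gaps between adjacent syllables, using syllable n-grams, syllable-type n-grams, and a dictionary-conjunction check as features. The toy dictionary, feature templates, and training data are all hypothetical, and scikit-learn's `LinearSVC` stands in for the paper's SVM setup:

```python
# Minimal sketch: Vietnamese word segmentation as binary classification over
# syllable gaps. Label 1 = the two syllables belong to the same word,
# label 0 = word boundary. All data and features here are illustrative toys.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Hypothetical toy dictionary of known multi-syllable words.
DICT = {('đại', 'học'), ('công', 'nghệ'), ('thông', 'tin')}

def syllable_type(s):
    # Crude syllable-type feature: capitalized / numeric / lowercase.
    if s[0].isupper():
        return 'CAP'
    if s.isdigit():
        return 'NUM'
    return 'LOW'

def gap_features(syllables, i):
    # Features for the gap between syllables[i] and syllables[i+1]:
    # syllable n-grams, syllable-type n-grams, and a dictionary check
    # on the conjunction of the two adjacent syllables.
    left, right = syllables[i], syllables[i + 1]
    return {
        f'left={left.lower()}': 1,
        f'right={right.lower()}': 1,
        f'bigram={left.lower()}+{right.lower()}': 1,
        f'types={syllable_type(left)}+{syllable_type(right)}': 1,
        'in_dict': int((left.lower(), right.lower()) in DICT),
    }

# Hypothetical training data: sentences as syllable lists plus one 0/1 label per gap.
train = [
    (['Trường', 'Đại', 'học', 'Công', 'nghệ', 'Thông', 'tin'], [0, 1, 0, 1, 0, 1]),
]
X = [gap_features(s, i) for s, labels in train for i in range(len(labels))]
y = [label for _, labels in train for label in labels]

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X), y)

def segment(syllables):
    # Join syllables whose gap is classified as word-internal (label 1),
    # following the common convention of underscores inside words.
    feats = vec.transform([gap_features(syllables, i) for i in range(len(syllables) - 1)])
    joins = clf.predict(feats)
    words, cur = [], syllables[0]
    for join, s in zip(joins, syllables[1:]):
        if join == 1:
            cur += '_' + s
        else:
            words.append(cur)
            cur = s
    words.append(cur)
    return words

print(segment(['Trường', 'Đại', 'học', 'Công', 'nghệ', 'Thông', 'tin']))
```

Note how this framing matches the abstract's claim of needing no longest-matching initialization or post-processing: every gap is decided independently by the classifier, and the dictionary enters only as one feature among several.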

            
