underthesea

Name: underthesea
Version: 6.8.0 (PyPI)
Home page: https://github.com/undertheseanlp/underthesea
Summary: Vietnamese NLP Toolkit
Upload time: 2023-09-22 23:50:53
Author: Vu Anh
License: GNU General Public License v3
Keywords: underthesea
<p align="center">
  <br>
  <img src="https://raw.githubusercontent.com/undertheseanlp/underthesea/main/img/logo.png"/>
  <br/>
</p>

<p align="center">
  <a href="https://pypi.python.org/pypi/underthesea">
    <img src="https://img.shields.io/pypi/v/underthesea.svg">
  </a>
  <a href="https://pypi.python.org/pypi/underthesea">
    <img src="https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9%20%7C%203.10%20%7C%203.11-blue">
  </a>
  <a href="http://undertheseanlp.com/">
    <img src="https://img.shields.io/badge/demo-live-brightgreen">
  </a>
  <a href="https://underthesea.readthedocs.io/en/latest/">
    <img src="https://img.shields.io/badge/docs-live-brightgreen">
  </a>
  <a href="https://colab.research.google.com/drive/1gD8dSMSE_uNacW4qJ-NSnvRT85xo9ZY2">
    <img src="https://img.shields.io/badge/colab-ff9f01?logo=google-colab&logoColor=white">
  </a>
  <a href="https://www.facebook.com/undertheseanlp/">
    <img src="https://img.shields.io/badge/Facebook-1877F2?logo=facebook&logoColor=white">
  </a>
  <a href="https://www.youtube.com/channel/UC9Jv1Qg49uprg6SjkyAqs9A">
    <img src="https://img.shields.io/badge/YouTube-FF0000?logo=youtube&logoColor=white">
  </a>
</p>

<br/>

<p align="center">
  <a href="https://github.com/undertheseanlp/underthesea/blob/main/contribute/SPONSORS.md">
    <img src="https://img.shields.io/badge/sponsors-6-red?style=social&logo=GithubSponsors">
  </a>
</p>

<h3 align="center">
Open-source Vietnamese Natural Language Processing Toolkit
</h3>

`Underthesea` is:

🌊 **A Vietnamese NLP toolkit.** Underthesea is a suite of open-source Python modules, data sets, and tutorials supporting research and development in [Vietnamese Natural Language Processing](https://github.com/undertheseanlp/underthesea). It provides an extremely easy-to-use API for quickly applying pretrained NLP models to your Vietnamese text, such as word segmentation, part-of-speech (POS) tagging, named entity recognition (NER), text classification, and dependency parsing (a quick-start sketch appears at the end of the Installation section below).

🌊 **Open-source software.** Underthesea is published under the [GNU General Public License v3.0](https://github.com/undertheseanlp/underthesea/blob/master/LICENSE). Permissions of this strong copyleft license are conditioned on making available the complete source code of licensed works and modifications, which include larger works using a licensed work, under the same license.

🎁 [**Support Us!**](#-support-us) Every bit of support helps us achieve our goals. Thank you so much. 💝💝💝

🎉 **Hey there!** Have you heard about **LLMs**, the **prompt-based models**? Well, guess what? Starting from Underthesea version 6.7.0, you can now dive deep with this **super-cool feature** for [text classification](https://github.com/undertheseanlp/underthesea/issues/682)! Dive in and make a splash! 💦🚀

## Installation


To install underthesea, simply:

```bash
$ pip install underthesea
✨🍰✨
```

Satisfaction, guaranteed.
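
Once installed, the pretrained pipelines can be imported directly. A minimal quick-start sketch; the calls and outputs are copied from the tutorial sections below:

```python
>>> from underthesea import word_tokenize, pos_tag

>>> word_tokenize("Chàng trai 9X Quảng Trị khởi nghiệp từ nấm sò")
["Chàng trai", "9X", "Quảng Trị", "khởi nghiệp", "từ", "nấm", "sò"]

>>> pos_tag('Chợ thịt chó nổi tiếng ở Sài Gòn bị truy quét')
[('Chợ', 'N'), ('thịt', 'N'), ('chó', 'N'), ('nổi tiếng', 'A'), ('ở', 'E'), ('Sài Gòn', 'Np'), ('bị', 'V'), ('truy quét', 'V')]
```

Some features below (dependency parsing, the deep-learning NER model, prompt-based classification, text-to-speech) require the optional extras `underthesea[deep]`, `underthesea[prompt]`, or `underthesea[wow]`; each tutorial section lists the `pip install` command it needs.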

## Tutorials

<details>
<summary><b><a href="">Sentence Segmentation</a></b> - Breaking text into individual sentences
<code>📜</code>
</summary>

- 📜 Usage

    ```python
    >>> from underthesea import sent_tokenize
    >>> text = 'Taylor cho biết lúc đầu cô cảm thấy ngại với cô bạn thân Amanda nhưng rồi mọi thứ trôi qua nhanh chóng. Amanda cũng thoải mái với mối quan hệ này.'

    >>> sent_tokenize(text)
    [
      "Taylor cho biết lúc đầu cô cảm thấy ngại với cô bạn thân Amanda nhưng rồi mọi thứ trôi qua nhanh chóng.",
      "Amanda cũng thoải mái với mối quan hệ này."
    ]
    ```
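
    Sentence segmentation is often the first step of a longer pipeline; a small sketch composing it with `word_tokenize` from the Word Segmentation section below:

    ```python
    >>> from underthesea import sent_tokenize, word_tokenize

    >>> text = 'Taylor cho biết lúc đầu cô cảm thấy ngại với cô bạn thân Amanda nhưng rồi mọi thứ trôi qua nhanh chóng. Amanda cũng thoải mái với mối quan hệ này.'
    >>> [word_tokenize(sentence) for sentence in sent_tokenize(text)]  # one token list per sentence
    ```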
</details>

<details>
<summary><b><a href="">Text Normalization</a></b> - Standardizing textual data representation
<code>📜</code>
</summary>

- 📜 Usage

    ```python
    >>> from underthesea import text_normalize
    >>> text_normalize("Ðảm baỏ chất lựơng phòng thí nghịêm hoá học")
    "Đảm bảo chất lượng phòng thí nghiệm hóa học"
    ```
</details>

<details>
<summary><b><a href="">Word Segmentation</a></b> - Dividing text into individual words
<code>📜</code>
</summary>

- 📜 Usage

    ```python
    >>> from underthesea import word_tokenize
    >>> text = "Chàng trai 9X Quảng Trị khởi nghiệp từ nấm sò"
    
    >>> word_tokenize(text)
    ["Chàng trai", "9X", "Quảng Trị", "khởi nghiệp", "từ", "nấm", "sò"]
    
    >>> word_tokenize(text, format="text")
    "Chàng_trai 9X Quảng_Trị khởi_nghiệp từ nấm sò"
    
    >>> text = "Viện Nghiên Cứu chiến lược quốc gia về học máy"
    >>> fixed_words = ["Viện Nghiên Cứu", "học máy"]
    >>> word_tokenize(text, fixed_words=fixed_words)
    "Viện_Nghiên_Cứu chiến_lược quốc_gia về học_máy"
    ```
</details>

<details>
<summary><b><a href="">POS Tagging</a></b> - Labeling words with their part-of-speech
<code>📜</code>
</summary>

- 📜 Usage

    ```python
    >>> from underthesea import pos_tag
    >>> pos_tag('Chợ thịt chó nổi tiếng ở Sài Gòn bị truy quét')
    [('Chợ', 'N'),
     ('thịt', 'N'),
     ('chó', 'N'),
     ('nổi tiếng', 'A'),
     ('ở', 'E'),
     ('Sài Gòn', 'Np'),
     ('bị', 'V'),
     ('truy quét', 'V')]
    ```
</details>

<details><summary><b><a href="">Chunking</a></b> - Grouping words into meaningful phrases or units
<code>📜</code>
</summary>

- 📜 Usage

    ```python
    >>> from underthesea import chunk
    >>> text = 'Bác sĩ bây giờ có thể thản nhiên báo tin bệnh nhân bị ung thư?'
    >>> chunk(text)
    [('Bác sĩ', 'N', 'B-NP'),
     ('bây giờ', 'P', 'B-NP'),
     ('có thể', 'R', 'O'),
     ('thản nhiên', 'A', 'B-AP'),
     ('báo', 'V', 'B-VP'),
     ('tin', 'N', 'B-NP'),
     ('bệnh nhân', 'N', 'B-NP'),
     ('bị', 'V', 'B-VP'),
     ('ung thư', 'N', 'B-NP'),
     ('?', 'CH', 'O')]
    ```
</details>

<details>
<summary><b><a href="">Dependency Parsing</a></b> - Analyzing grammatical structure between words
<code>⚛️</code>
</summary>
<br/>

- ⚛️ Deep Learning Model
    
    ```bash
    $ pip install underthesea[deep]
    ```
    
    ```python
    >>> from underthesea import dependency_parse
    >>> text = 'Tối 29/11, Việt Nam thêm 2 ca mắc Covid-19'
    >>> dependency_parse(text)
    [('Tối', 5, 'obl:tmod'),
     ('29/11', 1, 'flat:date'),
     (',', 1, 'punct'),
     ('Việt Nam', 5, 'nsubj'),
     ('thêm', 0, 'root'),
     ('2', 7, 'nummod'),
     ('ca', 5, 'obj'),
     ('mắc', 7, 'nmod'),
     ('Covid-19', 8, 'nummod')]
    ```
</details>

<details>
<summary><b><a href="">Named Entity Recognition</a></b> -  Identifying named entities (e.g., names, locations)
<code>📜</code> <code>⚛️</code>
</summary>
<br/>

- 📜 Usage

    ```python
    >>> from underthesea import ner
    >>> text = 'Chưa tiết lộ lịch trình tới Việt Nam của Tổng thống Mỹ Donald Trump'
    >>> ner(text)
    [('Chưa', 'R', 'O', 'O'),
     ('tiết lộ', 'V', 'B-VP', 'O'),
     ('lịch trình', 'V', 'B-VP', 'O'),
     ('tới', 'E', 'B-PP', 'O'),
     ('Việt Nam', 'Np', 'B-NP', 'B-LOC'),
     ('của', 'E', 'B-PP', 'O'),
     ('Tổng thống', 'N', 'B-NP', 'O'),
     ('Mỹ', 'Np', 'B-NP', 'B-LOC'),
     ('Donald', 'Np', 'B-NP', 'B-PER'),
     ('Trump', 'Np', 'B-NP', 'I-PER')]
    ```
    
- ⚛️ Deep Learning Model

    ```bash
    $ pip install underthesea[deep]
    ```
    
    ```python
    >>> from underthesea import ner
    >>> text = "Bộ Công Thương xóa một tổng cục, giảm nhiều đầu mối"
    >>> ner(text, deep=True)
    [
      {'entity': 'B-ORG', 'word': 'Bộ'},
      {'entity': 'I-ORG', 'word': 'Công'},
      {'entity': 'I-ORG', 'word': 'Thương'}
    ]
    ```
</details>

<details>
<summary><b><a href="">Text Classification</a></b> - Categorizing text into predefined groups
<code>📜</code> <code>⚡</code>
</summary>

- 📜 Usage

    ```python
    >>> from underthesea import classify
    
    >>> classify('HLV đầu tiên ở Premier League bị sa thải sau 4 vòng đấu')
    ['The thao']
    
    >>> classify('Hội đồng tư vấn kinh doanh Asean vinh danh giải thưởng quốc tế')
    ['Kinh doanh']
    
    >>> classify('Lãi suất từ BIDV rất ưu đãi', domain='bank')
    ['INTEREST_RATE']
    ```

- ⚡ Prompt-based Model

    ```bash
    $ pip install underthesea[prompt]
    $ export OPENAI_API_KEY=YOUR_KEY
    ```
    
    ```python
    >>> from underthesea import classify
    >>> text = "HLV ngoại đòi gần tỷ mỗi tháng dẫn dắt tuyển Việt Nam"
    >>> classify(text, model='prompt')
    Thể thao
    ```
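
    If exporting the key in the shell is inconvenient, it can presumably also be set from Python before the call; a sketch, assuming the prompt-based backend reads the `OPENAI_API_KEY` environment variable implied by the `export` above:

    ```python
    >>> import os
    >>> os.environ["OPENAI_API_KEY"] = "YOUR_KEY"  # assumption: picked up by the prompt-based backend
    >>> from underthesea import classify
    >>> classify("HLV ngoại đòi gần tỷ mỗi tháng dẫn dắt tuyển Việt Nam", model='prompt')
    Thể thao
    ```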
</details>

<details>
<summary><b><a href="">Sentiment Analysis</a></b> - Determining text's emotional tone or sentiment
<code>📜</code>
</summary>

- 📜 Usage

    ```python
    >>> from underthesea import sentiment
    
    >>> sentiment('hàng kém chất lg,chăn đắp lên dính lông lá khắp người. thất vọng')
    'negative'
    >>> sentiment('Sản phẩm hơi nhỏ so với tưởng tượng nhưng chất lượng tốt, đóng gói cẩn thận.')
    'positive'
    
    >>> sentiment('Đky qua đường link ở bài viết này từ thứ 6 mà giờ chưa thấy ai lhe hết', domain='bank')
    ['CUSTOMER_SUPPORT#negative']
    >>> sentiment('Xem lại vẫn thấy xúc động và tự hào về BIDV của mình', domain='bank')
    ['TRADEMARK#positive']
    ```
</details>

<details>
<summary><b><a href="">Say 🗣️</a></b> - Converting written text into spoken audio
<code>⚛️</code>
</summary>

<br/>

Text to Speech API. Thanks to the awesome work from [NTT123/vietTTS](https://github.com/ntt123/vietTTS).

Install the extended dependencies and models:

```bash
$ pip install underthesea[wow]
$ underthesea download-model VIET_TTS_V0_4_1
```

Usage example in a script:

```python
>>> from underthesea.pipeline.say import say

>>> say("Cựu binh Mỹ trả nhật ký nhẹ lòng khi thấy cuộc sống hòa bình tại Việt Nam")
```

A new audio file named `sound.wav` will be generated.

Usage example on the command line:

```sh
$ underthesea say "Cựu binh Mỹ trả nhật ký nhẹ lòng khi thấy cuộc sống hòa bình tại Việt Nam"
```
</details>

<details>
<summary><b><a href="">Vietnamese NLP Resources</a></b></summary>

<br/>

List resources

```bash
$ underthesea list-data
| Name                      | Type        | License | Year | Directory                          |
|---------------------------+-------------+---------+------+------------------------------------|
| CP_Vietnamese_VLC_v2_2022 | Plaintext   | Open    | 2023 | datasets/CP_Vietnamese_VLC_v2_2022 |
| UIT_ABSA_RESTAURANT       | Sentiment   | Open    | 2021 | datasets/UIT_ABSA_RESTAURANT       |
| UIT_ABSA_HOTEL            | Sentiment   | Open    | 2021 | datasets/UIT_ABSA_HOTEL            |
| SE_Vietnamese-UBS         | Sentiment   | Open    | 2020 | datasets/SE_Vietnamese-UBS         |
| CP_Vietnamese-UNC         | Plaintext   | Open    | 2020 | datasets/CP_Vietnamese-UNC         |
| DI_Vietnamese-UVD         | Dictionary  | Open    | 2020 | datasets/DI_Vietnamese-UVD         |
| UTS2017-BANK              | Categorized | Open    | 2017 | datasets/UTS2017-BANK              |
| VNTQ_SMALL                | Plaintext   | Open    | 2012 | datasets/LTA                       |
| VNTQ_BIG                  | Plaintext   | Open    | 2012 | datasets/LTA                       |
| VNESES                    | Plaintext   | Open    | 2012 | datasets/LTA                       |
| VNTC                      | Categorized | Open    | 2007 | datasets/VNTC                      |

$ underthesea list-data --all
```

Download resources

```bash
$ underthesea download-data CP_Vietnamese_VLC_v2_2022
Resource CP_Vietnamese_VLC_v2_2022 is downloaded in ~/.underthesea/datasets/CP_Vietnamese_VLC_v2_2022 folder
```
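
Once downloaded, a resource is just a folder of files; a minimal sketch for inspecting it with the Python standard library, assuming the default `~/.underthesea/datasets` location shown above:

```python
>>> from pathlib import Path

>>> data_dir = Path.home() / ".underthesea" / "datasets" / "CP_Vietnamese_VLC_v2_2022"
>>> sorted(p.name for p in data_dir.iterdir())  # list the files shipped with the dataset
```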

</details>

### Upcoming Features

* Automatic Speech Recognition
* Machine Translation
* Chatbot (Chat & Speak)

## Contributing

Do you want to contribute to underthesea's development? Great! Please read more details in [CONTRIBUTING.rst](https://github.com/undertheseanlp/underthesea/blob/main/contribute/CONTRIBUTING.rst).

## 💝 Support Us

If you find this project helpful and would like to support our work, you can buy us a coffee ☕.

Your support is our biggest encouragement 🎁!

<img src="https://raw.githubusercontent.com/undertheseanlp/underthesea/main/img/support.png"/>

            
