that-nlp-library

- Name: that-nlp-library
- Version: 0.2.1
- Home page: https://github.com/anhquan0412/that-nlp-library
- Summary: Aim to be a convenient NLP library with the help from HuggingFace
- Upload time: 2024-01-02 09:03:53
- Author: Quan Tran
- Requires Python: >=3.9
- License: Apache Software License 2.0
- Keywords: nbdev, python, nlp, natural language processing, transformer, deep learning, envibert, roberta, gpt2, phobert
            # Welcome to that-nlp-library

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Install

``` sh
pip install that_nlp_library
```

It is advised that you manually install torch (with a CUDA version
compatible with your GPU, if you have one). Typically it’s

``` sh
pip3 install torch --index-url https://download.pytorch.org/whl/cu118
```

Visit the [PyTorch page](https://pytorch.org/) for more information.

# High-Level Overview

## Supervised Learning

For supervised learning, the main pipeline contains 2 parts:

### **Text Data Controller: [`TextDataController`](https://anhquan0412.github.io/that-nlp-library/text_main.html#textdatacontroller) (for text processing)**

Here is a list of processing steps that you can apply (in order). You
can also skip any step you don’t need.

![](images/text_processings.PNG)

Here is an example of the Text Controller for a classification task
(predicting `Division Name`), without any text preprocessing. The code
will also tokenize your text field.

``` python3
# import paths assume the nbdev module layout used in the docs
from that_nlp_library.text_main import TextDataController
from transformers import RobertaTokenizer

tdc = TextDataController.from_csv('sample_data/Womens_Clothing_Reviews.csv',
                                  main_text='Review Text',
                                  label_names='Division Name',
                                  sup_types='classification',
                                 )
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tdc.process_and_tokenize(tokenizer,max_length=100,shuffle_trn=True)
```

And here is an example where all processing steps are applied:

``` python3
from functools import partial
# import paths assume the nbdev module layout used in the docs
from that_nlp_library.text_main import TextDataController
from transformers import RobertaTokenizer
from underthesea import text_normalize
import nlpaug.augmenter.char as nac

# define the augmentation function
def nlp_aug(x,aug=None):
    results = aug.augment(x)
    if not isinstance(x,list): return results[0]
    return results
aug = nac.KeyboardAug(aug_char_max=3,aug_char_p=0.1,aug_word_p=0.07)
nearby_aug_func = partial(nlp_aug,aug=aug)

# initialize the TextDataController
dset = 'sample_data/Womens_Clothing_Reviews.csv'  # same sample CSV as above
tdc = TextDataController.from_csv(dset,
                                  main_text='Review Text',

                                  # metadatas
                                  metadatas='Title',

                                  # label
                                  label_names='Division Name',
                                  sup_types='classification',
                                  label_tfm_dict={'Division Name': lambda x: x if x!='Initmates' else 'Intimates'},

                                  # row filter
                                  filter_dict={'Review Text': lambda x: x is not None,
                                               'Division Name': lambda x: x is not None,
                                              },

                                  # text transformation
                                  content_transformation=[text_normalize,str.lower],

                                  # validation split
                                  val_ratio=0.2,
                                  stratify_cols=['Division Name'],

                                  # upsampling
                                  upsampling_list=[('Division Name',lambda x: x=='Intimates')],

                                  # text augmentation
                                  content_augmentations=nearby_aug_func
                                 )

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tdc.process_and_tokenize(tokenizer,max_length=100,shuffle_trn=True)
```

For an in-depth tutorial on Text Controller for Supervised Learning
([`TextDataController`](https://anhquan0412.github.io/that-nlp-library/text_main.html#textdatacontroller)),
please visit
[here](https://anhquan0412.github.io/that-nlp-library/text_main.html)

This library also has a **streamed version of the Text Controller**
([`TextDataControllerStreaming`](https://anhquan0412.github.io/that-nlp-library/text_main_streaming.html#textdatacontrollerstreaming)),
allowing you to work with data without having it entirely on your hard
drive. You can still perform all the processing steps of the
non-streamed version, except for **Train/Validation split** (which
means you have to define your validation set beforehand) and
**Upsampling**.
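
To illustrate what streaming looks like in practice, here is a plain
HuggingFace `datasets` sketch (this is generic, not the controller’s
own API; the exact `TextDataControllerStreaming` arguments are covered
in the streaming tutorial below):

``` python3
from datasets import load_dataset

# rows are read lazily instead of being loaded into memory all at once
stream = load_dataset('csv',
                      data_files='sample_data/Womens_Clothing_Reviews.csv',
                      split='train', streaming=True)
print(next(iter(stream)))  # fetches only the first row
```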

For more details on **streaming**, visit
[here](https://anhquan0412.github.io/that-nlp-library/text_main_streaming.html).

If you are curious about the time and space efficiency of the streamed
versus the non-streamed version, see the benchmark
[here](https://anhquan0412.github.io/that-nlp-library/text_main_benchmark.html)

### **Model and [`ModelController`](https://anhquan0412.github.io/that-nlp-library/model_main.html#modelcontroller)**

The library can perform the following:

- **Classification ([simple
  tutorial](https://anhquan0412.github.io/that-nlp-library/model_classification_tutorial.html))**

- **[Regression](https://anhquan0412.github.io/that-nlp-library/roberta_multihead_regression.html)**

- **[Multilabel
  classification](https://anhquan0412.github.io/that-nlp-library/roberta_multilabel.html)**

- **[Multiheads](https://anhquan0412.github.io//that-nlp-library/roberta_multihead.html)**,
  where each head can be either classification or regression

  - “Multihead” is when your model needs to predict multiple outputs at
    once; for example, given a sentence (e.g. a review on an e-commerce
    site), you have to predict the category the sentence is about, its
    sentiment, and maybe its rating (see the sketch after this list).

  - For the above example, this is a 3-head problem: classification
    (for the category), classification (for the sentiment), and
    regression (for the rating from 1 to 5)

- For 2-head classification where there’s a hierarchical relationship
  between the first output and the second (e.g. the first output is the
  level-1 clothing category, and the second is the level-2 clothing
  subcategory), you can utilize two approaches tailored to this
  use-case: training with [conditional
  probability](https://anhquan0412.github.io/that-nlp-library/roberta_conditional_prob.html),
  or with [deep hierarchical
  classification](https://anhquan0412.github.io/that-nlp-library/roberta_dhc.html)
  (a small numeric illustration of the former follows the sketch below)
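
To make the multihead idea concrete, here is a generic PyTorch sketch
of a 3-head model (two classification heads plus one regression head)
sharing a RoBERTa body. This illustrates the technique only; it is not
this library’s `roberta_multihead` implementation, and the class and
head names are made up:

``` python3
import torch.nn as nn
from transformers import RobertaModel

class MultiHeadRoberta(nn.Module):
    # shared RoBERTa body with one linear head per output
    def __init__(self, n_categories, n_sentiments):
        super().__init__()
        self.body = RobertaModel.from_pretrained('roberta-base')
        hidden = self.body.config.hidden_size
        self.category_head = nn.Linear(hidden, n_categories)   # classification
        self.sentiment_head = nn.Linear(hidden, n_sentiments)  # classification
        self.rating_head = nn.Linear(hidden, 1)                # regression

    def forward(self, input_ids, attention_mask=None):
        # use the first token's last hidden state as the shared representation
        pooled = self.body(input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return (self.category_head(pooled),
                self.sentiment_head(pooled),
                self.rating_head(pooled).squeeze(-1))
```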
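
And a tiny numeric illustration of the conditional-probability approach
for hierarchical heads (made-up numbers): the joint probability of a
level-2 label is the level-1 probability times the level-2 probability
conditioned on that level-1 label.

``` python3
p_level1 = {'Tops': 0.7, 'Bottoms': 0.3}               # P(level-1 category)
p_level2_given_tops = {'Blouses': 0.6, 'Knits': 0.4}   # P(level-2 | level-1 = 'Tops')

# joint probability of predicting ('Tops', 'Blouses')
p_joint = p_level1['Tops'] * p_level2_given_tops['Blouses']  # 0.7 * 0.6 = 0.42
```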

### Decoupling of Text Controller and Model Controller

In this library, you can use
[`TextDataController`](https://anhquan0412.github.io/that-nlp-library/text_main.html#textdatacontroller)
on its own to handle all the text processing and have the final
processed HuggingFace `DatasetDict` returned to you. Conversely, if you
already have your own processed `DatasetDict`, you can skip the text
controller and use only the
[`ModelController`](https://anhquan0412.github.io/that-nlp-library/model_main.html#modelcontroller)
to train on your data. There’s a quick tutorial on this decoupling
[here](https://anhquan0412.github.io/that-nlp-library/model_classification_tutorial.html#train-model-with-only-a-tokenized-datasetdict-no-textdatacontroller)
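
If you go the decoupled route, a tokenized `DatasetDict` can be built
with plain HuggingFace APIs. The sketch below is generic (the
subsequent `ModelController` call is omitted, since its exact signature
is documented in the tutorial above):

``` python3
from datasets import load_dataset
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

# load, drop empty rows, split, and tokenize
ddict = load_dataset('csv',
                     data_files='sample_data/Womens_Clothing_Reviews.csv',
                     split='train')
ddict = ddict.filter(lambda x: x['Review Text'] is not None)
ddict = ddict.train_test_split(test_size=0.2)
ddict = ddict.map(lambda b: tokenizer(b['Review Text'],
                                      truncation=True, max_length=100),
                  batched=True)
```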

## Language Modeling

For language modeling, the main pipeline also contains 2 parts:

### Text Data Controller for Language Model: [`TextDataLMController`](https://anhquan0412.github.io/that-nlp-library/text_main_lm.html#textdatalmcontroller)

Similar to `TextDataController`,
[`TextDataLMController`](https://anhquan0412.github.io/that-nlp-library/text_main_lm.html#textdatalmcontroller)
also provides a list of processing steps (except for **Label
Processing**, **Upsampling** and **Text Augmentation**). The controller
also allows tokenization either line-by-line or by token concatenation.
Visit the tutorial
[here](https://anhquan0412.github.io/that-nlp-library/text_main_lm.html)
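
To illustrate what tokenization “by token concatenation” means, here is
the standard HuggingFace-style `group_texts` recipe (a generic sketch,
not this library’s internal code): all tokenized texts are concatenated
and then cut into fixed-size blocks, so no tokens are wasted on padding.

``` python3
def group_texts(examples, block_size=128):
    # concatenate every tokenized field, then cut into fixed-size blocks
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_len = (len(concatenated['input_ids']) // block_size) * block_size
    return {k: [t[i:i + block_size] for i in range(0, total_len, block_size)]
            for k, t in concatenated.items()}

# typical usage on a tokenized HuggingFace dataset:
# lm_dset = tokenized_dset.map(group_texts, batched=True)
```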

There’s also a streamed version
([`TextDataLMControllerStreaming`](https://anhquan0412.github.io/that-nlp-library/text_main_lm_streaming.html#textdatalmcontrollerstreaming))

### Language Model Controller: [`ModelLMController`](https://anhquan0412.github.io/that-nlp-library/model_lm_main.html#modellmcontroller)

The library can train a [masked language
model](https://anhquan0412.github.io/that-nlp-library/model_lm_roberta_tutorial.html)
(BERT, RoBERTa, …) or a [causal language
model](https://anhquan0412.github.io/that-nlp-library/model_lm_gpt2_tutorial.html)
(GPT), either from scratch or from an existing pretrained language
model.
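
For reference, the masked-LM objective in plain `transformers` looks
like the sketch below (generic, not the controller’s API): the standard
collator randomly masks 15% of tokens in each batch.

``` python3
from transformers import (DataCollatorForLanguageModeling,
                          RobertaForMaskedLM, RobertaTokenizer)

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
# from_pretrained continues from a pretrained model; start from a fresh
# config instead to train from scratch
model = RobertaForMaskedLM.from_pretrained('roberta-base')

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)
```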

### Hidden States Extraction

The library also allows you to [extract the hidden
states](https://anhquan0412.github.io/that-nlp-library/hidden_states.html)
of your choice for further analysis.
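
With plain `transformers`, hidden-state extraction looks like the
generic sketch below (the library’s own helper may expose this
differently):

``` python3
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaModel.from_pretrained('roberta-base',
                                     output_hidden_states=True)

with torch.no_grad():
    out = model(**tokenizer('A sample review', return_tensors='pt'))

# out.hidden_states is a tuple: embedding output plus one tensor per layer
cls_embedding = out.hidden_states[-1][:, 0]  # last layer, first token
```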

# Documentation

Visit <https://anhquan0412.github.io/that-nlp-library/>

            
