textaugment

Name: textaugment
Version: 2.0.0
Home page: https://github.com/dsfsi/textaugment
Summary: A library for augmenting text for natural language processing applications.
Author: Joseph Sefara
License: MIT
Keywords: text augmentation, python, natural language processing, nlp
Upload time: 2023-11-16 20:49:06

# [TextAugment: Improving Short Text Classification through Global Augmentation Methods](https://arxiv.org/abs/1907.03752) 

[![licence](https://img.shields.io/github/license/dsfsi/textaugment.svg?maxAge=3600)](https://github.com/dsfsi/textaugment/blob/master/LICENCE) [![GitHub release](https://img.shields.io/github/release/dsfsi/textaugment.svg?maxAge=3600)](https://github.com/dsfsi/textaugment/releases) [![Wheel](https://img.shields.io/pypi/wheel/textaugment.svg?maxAge=3600)](https://pypi.python.org/pypi/textaugment) [![python](https://img.shields.io/pypi/pyversions/textaugment.svg?maxAge=3600)](https://pypi.org/project/textaugment/) [![TotalDownloads](https://pepy.tech/badge/textaugment)](https://pypi.org/project/textaugment/) [![Downloads](https://static.pepy.tech/badge/textaugment/month)](https://pypi.org/project/textaugment/) [![LNCS](https://img.shields.io/badge/LNCS-Book%20Chapter-B31B1B.svg)](https://link.springer.com/chapter/10.1007%2F978-3-030-57321-8_21) [![arxiv](https://img.shields.io/badge/cs.CL-arXiv%3A1907.03752-B31B1B.svg)](https://arxiv.org/abs/1907.03752)

## You have just found TextAugment.

TextAugment is a Python 3 library for augmenting text for natural language processing applications. TextAugment stands on the giant shoulders of [NLTK](https://www.nltk.org/), [Gensim v3.x](https://radimrehurek.com/gensim/), and [TextBlob](https://textblob.readthedocs.io/) and plays nicely with them.

# Table of Contents

- [Features](#Features)
- [Citation Paper](#citation-paper) 
	- [Requirements](#Requirements)
	- [Installation](#Installation)
	- [How to use](#How-to-use)
		- [Word2vec-based augmentation](#Word2vec-based-augmentation)
		- [WordNet-based augmentation](#WordNet-based-augmentation)
		- [RTT-based augmentation](#RTT-based-augmentation)
- [Easy data augmentation (EDA)](#eda-easy-data-augmentation-techniques-for-boosting-performance-on-text-classification-tasks)
- [An easier data augmentation (AEDA)](#aeda-an-easier-data-augmentation-technique-for-text-classification)
- [Mixup augmentation](#mixup-augmentation)
  - [Implementation](#Implementation)
- [Acknowledgements](#Acknowledgements)

## Features

- Generates synthetic data to improve model performance without manual effort
- Simple, lightweight, easy-to-use library
- Plugs into any machine learning framework (e.g. PyTorch, TensorFlow, scikit-learn)
- Supports textual data

## Citation Paper

**[Improving short text classification through global augmentation methods](https://link.springer.com/chapter/10.1007%2F978-3-030-57321-8_21)**.



![alt text](https://raw.githubusercontent.com/dsfsi/textaugment/master/augment.png "Augmentation methods")

### Requirements

* Python 3

The following packages are dependencies and will be installed automatically. To install them manually:

```shell
$ pip install numpy nltk gensim==3.8.3 textblob googletrans
```
The following code downloads the NLTK [WordNet](http://www.nltk.org/howto/wordnet.html) corpus.
```python
import nltk
nltk.download('wordnet')
```
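Once downloaded, the WordNet corpus exposes synonym sets (synsets), which is what synonym-based augmentation draws its replacement candidates from. A quick check:
```python
from nltk.corpus import wordnet

# Each synset groups words that share a meaning; its lemma names are
# the candidate synonyms an augmenter can substitute.
for synset in wordnet.synsets('good')[:3]:
    print(synset.name(), synset.lemma_names())
```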
The following code downloads the [NLTK punkt tokenizer](https://www.nltk.org/_modules/nltk/tokenize/punkt.html). This tokenizer divides text into a list of sentences using an unsupervised algorithm to build a model for abbreviations, collocations, and words that start sentences.
```python
nltk.download('punkt')
```
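Once downloaded, the tokenizer is used by `nltk.sent_tokenize` and `nltk.word_tokenize`; for example:
```python
import nltk

# punkt knows that the period after "Dr." does not end a sentence
nltk.sent_tokenize("Dr. Smith went to town. He bought bread.")
# ['Dr. Smith went to town.', 'He bought bread.']
```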
The following code downloads the default [NLTK part-of-speech tagger](https://www.nltk.org/_modules/nltk/tag.html) model. A part-of-speech tagger processes a sequence of words and attaches a part-of-speech tag to each word.
```python
nltk.download('averaged_perceptron_tagger')
```
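Once downloaded, the tagger is available through `nltk.pos_tag`; for example:
```python
import nltk

nltk.pos_tag(nltk.word_tokenize('In the afternoon, John is going to town'))
# [('In', 'IN'), ('the', 'DT'), ('afternoon', 'NN'), (',', ','), ('John', 'NNP'), ...]
```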
Use gensim to load a pre-trained word2vec model, such as [Google News from Google Drive](https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit).
```python
import gensim
model = gensim.models.KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary=True)
```
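Word2vec-based augmentation replaces words with their nearest neighbours in the embedding space, so you can preview the candidate replacements directly:
```python
# Nearest neighbours of 'good' in the embedding space; these are the
# kinds of words the augmenter will substitute (output varies by model).
model.most_similar('good', topn=5)
```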
You can also use gensim to load Facebook's FastText [English](https://fasttext.cc/docs/en/english-vectors.html) and [multilingual models](https://fasttext.cc/docs/en/crawl-vectors.html):
```python
import gensim
model = gensim.models.fasttext.load_facebook_model('./cc.en.300.bin.gz')
```

Or train one from scratch using your own data or one of the following public datasets; a minimal training sketch follows the list:

- [Text8 Wiki](http://mattmahoney.net/dc/enwik9.zip)

- [Dataset from "One Billion Word Language Modeling Benchmark"](http://www.statmt.org/lm-benchmark/1-billion-word-language-modeling-benchmark-r13output.tar.gz)
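
For instance, a minimal gensim 3.x training sketch, assuming the Text8 corpus has been downloaded and extracted to `./text8` (illustrative settings, not tuned):

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import Text8Corpus

sentences = Text8Corpus('./text8')  # streams the corpus from disk
model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)
model.wv.save_word2vec_format('./text8-vectors.bin', binary=True)
```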

### Installation

Install from pip [Recommended]
```sh
$ pip install textaugment
```
or install the latest version directly from GitHub
```sh
$ pip install git+git@github.com:dsfsi/textaugment.git
```

Install from source
```sh
$ git clone git@github.com:dsfsi/textaugment.git
$ cd textaugment
$ python setup.py install
```

### How to use

There are four types of augmentation which can be used:

- word2vec 

```python
from textaugment import Word2vec
```
- fasttext 

```python
from textaugment import Fasttext
```

- wordnet 
```python
from textaugment import Wordnet
```
- translate (This will require internet access)
```python
from textaugment import Translate
```
#### Fasttext/Word2vec-based augmentation

[See this notebook for an example](https://github.com/dsfsi/textaugment/blob/master/examples/word2vec_example.ipynb)

**Basic example**

```python
>>> from textaugment import Word2vec, Fasttext
>>> t = Word2vec(model='path/to/gensim/model')  # or pass a loaded gensim model
>>> t.augment('The stories are good')
The films are good
>>> t = Fasttext(model='path/to/gensim/model')  # or pass a loaded gensim model
>>> t.augment('The stories are good')
The films are good
```
**Advanced example**

```python
>>> runs = 1 # Number of augmentation runs. Default is 1.
>>> v = False # Verbose mode replaces all the words; if enabled, runs has no effect. Used in this paper (https://www.cs.cmu.edu/~diyiy/docs/emnlp_wang_2015.pdf)
>>> p = 0.5 # Probability of success of an individual trial (0.1 < p < 1.0); default is 0.5. Used by the geometric distribution to select words from a sentence.

>>> word = Word2vec(model='path/to/gensim/model', runs=5, v=False, p=0.5)  # or pass a loaded gensim model
>>> word.augment('The stories are good', top_n=10)
The movies are excellent
>>> fast = Fasttext(model='path/to/gensim/model', runs=5, v=False, p=0.5)  # or pass a loaded gensim model
>>> fast.augment('The stories are good', top_n=10)
The movies are excellent
```
#### WordNet-based augmentation
**Basic example**
```python
>>> import nltk
>>> nltk.download('punkt')
>>> nltk.download('wordnet')
>>> from textaugment import Wordnet
>>> t = Wordnet()
>>> t.augment('In the afternoon, John is going to town')
In the afternoon, John is walking to town
```
**Advanced example**

```python
>>> v = True # Enable verb augmentation. Default is True.
>>> n = False # Enable noun augmentation. Default is False.
>>> runs = 1 # Number of times to augment a sentence. Default is 1.
>>> p = 0.5 # Probability of success of an individual trial (0.1 < p < 1.0); default is 0.5. Used by the geometric distribution to select words from a sentence.

>>> t = Wordnet(v=False, n=True, p=0.5)
>>> t.augment('In the afternoon, John is going to town', top_n=10)
In the afternoon, Joseph is going to town.
```
#### RTT-based augmentation
**Example**
```python
>>> src = "en" # source language of the sentence
>>> to = "fr" # target language
>>> from textaugment import Translate
>>> t = Translate(src="en", to="fr")
>>> t.augment('In the afternoon, John is going to town')
In the afternoon John goes to town
```
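Under the hood, round-trip translation (RTT) translates a sentence into a pivot language and back, keeping whatever paraphrase survives the round trip. A minimal sketch of the idea using the `googletrans` dependency (illustrative only, not the library's internal code; the `googletrans` API varies between versions and requires internet access):
```python
from googletrans import Translator

def round_trip(text, src='en', pivot='fr'):
    # Translate to the pivot language and back to the source language.
    translator = Translator()
    forward = translator.translate(text, src=src, dest=pivot).text
    return translator.translate(forward, src=pivot, dest=src).text

round_trip('In the afternoon, John is going to town')
```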
# EDA: Easy data augmentation techniques for boosting performance on text classification tasks

This is the implementation of [EDA](https://www.aclweb.org/anthology/D19-1670.pdf) by Jason Wei and Kai Zou.

[See this notebook for an example](https://github.com/dsfsi/textaugment/blob/master/examples/eda_example.ipynb)

#### Synonym Replacement
Randomly choose *n* words from the sentence that are not stop words. Replace each of these words with
one of its synonyms chosen at random. 

**Basic example**
```python
>>> from textaugment import EDA
>>> t = EDA()
>>> t.synonym_replacement("John is going to town", top_n=10)
John is give out to town
```
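
For intuition, a minimal illustrative sketch of the synonym-replacement step (not the library's exact implementation; stop-word filtering is omitted for brevity):
```python
import random
from nltk.corpus import wordnet

def synonym_replacement_sketch(sentence, n=1):
    words = sentence.split()
    # Only words that have at least one WordNet synset can be replaced.
    candidates = [i for i, w in enumerate(words) if wordnet.synsets(w)]
    for i in random.sample(candidates, min(n, len(candidates))):
        synonyms = {lemma.name().replace('_', ' ')
                    for synset in wordnet.synsets(words[i])
                    for lemma in synset.lemmas()}
        synonyms.discard(words[i])
        if synonyms:
            words[i] = random.choice(sorted(synonyms))
    return ' '.join(words)
```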

#### Random Deletion
Randomly remove each word in the sentence with probability *p*.

**Basic example**
```python
>>> from textaugment import EDA
>>> t = EDA()
>>> t.random_deletion("John is going to town", p=0.2)
is going to town
```
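
The mechanism is a per-word coin flip; an illustrative sketch (not the library's exact implementation):
```python
import random

def random_deletion_sketch(sentence, p=0.2):
    # Keep each word with probability 1 - p; never return an empty sentence.
    words = sentence.split()
    kept = [w for w in words if random.random() > p]
    return ' '.join(kept) if kept else random.choice(words)
```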

#### Random Swap
Randomly choose two words in the sentence and swap their positions. Do this *n* times.

**Basic example**
```python
>>> from textaugment import EDA
>>> t = EDA()
>>> t.random_swap("John is going to town")
John town going to is
```

#### Random Insertion 
Find a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this *n* times.

**Basic example**
```python
>>> from textaugment import EDA
>>> t = EDA()
>>> t.random_insertion("John is going to town")
John is going to make up town
```

# AEDA: An easier data augmentation technique for text classification

This is the implementation of [AEDA](https://aclanthology.org/2021.findings-emnlp.234.pdf) by Karimi et al., a variant of EDA based on the random insertion of punctuation marks.

## Implementation
[See this notebook for an example](https://github.com/dsfsi/textaugment/blob/master/examples/eda_example.ipynb)

#### Random Insertion of Punctuation Marks

**Basic example**
```python
>>> from textaugment import AEDA
>>> t = AEDA()
>>> t.punct_insertion("John is going to town")
! John is going to town
```
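
The mechanism is easy to sketch (illustrative only, not the library's exact implementation): randomly chosen punctuation marks are inserted at random positions, with the number of insertions proportional to sentence length as in the paper.
```python
import random

PUNCTUATION = ['.', ';', '?', ':', '!', ',']

def aeda_sketch(sentence, punc_ratio=0.3):
    words = sentence.split()
    n_insertions = random.randint(1, max(1, int(punc_ratio * len(words))))
    for _ in range(n_insertions):
        words.insert(random.randint(0, len(words)), random.choice(PUNCTUATION))
    return ' '.join(words)
```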

# Mixup augmentation

This is the implementation of mixup augmentation by [Hongyi Zhang, Moustapha Cisse, Yann Dauphin, David Lopez-Paz](https://openreview.net/forum?id=r1Ddp1-Rb) adapted to NLP. 

Used in [Augmenting Data with Mixup for Sentence Classification: An Empirical Study](https://arxiv.org/abs/1905.08941). 

Mixup is a generic and straightforward data augmentation principle. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularises the neural network to favour simple linear behaviour in-between training examples. 
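
On text, mixup is usually applied to embeddings or encoder outputs rather than raw tokens. An illustrative NumPy sketch of the core operation (not the library's exact implementation):

```python
import numpy as np

def mixup_sketch(x1, y1, x2, y2, alpha=0.2):
    # lam ~ Beta(alpha, alpha) controls how strongly the pair is blended.
    lam = np.random.beta(alpha, alpha)
    x_mix = lam * x1 + (1 - lam) * x2  # e.g. sentence embeddings
    y_mix = lam * y1 + (1 - lam) * y2  # one-hot or soft labels
    return x_mix, y_mix
```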

## Implementation

[See this notebook for an example](https://github.com/dsfsi/textaugment/blob/master/examples/mixup_example_using_IMDB_sentiment.ipynb)

## Built with ❤ on
* [Python](http://python.org/)

## Authors
* [Joseph Sefara](https://za.linkedin.com/in/josephsefara) (http://www.speechtech.co.za)
* [Vukosi Marivate](http://www.vima.co.za)

## Acknowledgements
Please cite this [paper](https://link.springer.com/chapter/10.1007%2F978-3-030-57321-8_21) when using this library ([arXiv version](https://arxiv.org/abs/1907.03752)).

```bibtex
@inproceedings{marivate2020improving,
  title={Improving short text classification through global augmentation methods},
  author={Marivate, Vukosi and Sefara, Tshephisho},
  booktitle={International Cross-Domain Conference for Machine Learning and Knowledge Extraction},
  pages={385--399},
  year={2020},
  organization={Springer}
}
```

## Licence
MIT licensed. See the bundled [LICENCE](https://github.com/dsfsi/textaugment/blob/master/LICENCE) file for more details.

            
