clean-text

-   Name: clean-text
-   Version: 0.6.0
-   Summary: Functions to preprocess and normalize text.
-   Author: Johannes Filter
-   Requires Python: >=3.7
-   License: Apache-2.0
-   Keywords: natural-language-processing, text-cleaning, text-preprocessing, text-normalization, user-generated-content
-   Uploaded: 2022-02-02 21:45:52
# `clean-text` [![Build Status](https://img.shields.io/github/workflow/status/jfilter/clean-text/Test)](https://github.com/jfilter/clean-text/actions/workflows/test.yml) [![PyPI](https://img.shields.io/pypi/v/clean-text.svg)](https://pypi.org/project/clean-text/) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/clean-text.svg)](https://pypi.org/project/clean-text/) [![PyPI - Downloads](https://img.shields.io/pypi/dm/clean-text)](https://pypistats.org/packages/clean-text)

User-generated content on the Web and in social media is often dirty. Preprocess your scraped data with `clean-text` to create a normalized text representation. For instance, turn this corrupted input:

```txt
A bunch of \\u2018new\\u2019 references, including [Moana](https://en.wikipedia.org/wiki/Moana_%282016_film%29).


»Yóù àré     rïght <3!«
```

into this clean output:

```txt
A bunch of 'new' references, including [moana](<URL>).

"you are right <3!"
```

`clean-text` uses [ftfy](https://github.com/LuminosoInsight/python-ftfy), [unidecode](https://github.com/takluyver/Unidecode), and numerous hand-crafted rules, e.g., regular expressions.

## Installation

To install `clean-text` together with the GPL-licensed [unidecode](https://github.com/takluyver/Unidecode) package:

```bash
pip install clean-text[gpl]
```

If you want to avoid the GPL-licensed dependency:

```bash
pip install clean-text
```

NB: The package is named `clean-text` on PyPI (not `cleantext`), but the module you import is called `cleantext`.

If [unidecode](https://github.com/takluyver/Unidecode) is not available, `clean-text` will resort to Python's [unicodedata.normalize](https://docs.python.org/3.7/library/unicodedata.html#unicodedata.normalize) for [transliteration](https://en.wikipedia.org/wiki/Transliteration).
Transliteration to the closest ASCII symbols relies on manual mappings, e.g., `ê` to `e`.
`unidecode`'s mappings are superior, but `unicodedata`'s are sufficient.
However, you may want to disable this feature altogether depending on your data and use case.

To be clear: there are **inconsistencies** between text processed with and without `unidecode`.
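When `unidecode` is unavailable, the fallback path is roughly equivalent to this stdlib-only sketch (`ascii_fallback` is an illustrative name, not part of the library's API):

```python
import unicodedata

def ascii_fallback(text: str) -> str:
    # NFKD decomposes accented characters into base letter + combining mark;
    # encoding to ASCII with errors="ignore" then drops the combining marks.
    return (
        unicodedata.normalize("NFKD", text)
        .encode("ascii", "ignore")
        .decode("ascii")
    )

print(ascii_fallback("Yóù àré rïght"))  # -> You are right
```

Note that this silently drops characters with no decomposition (e.g., `ß` or CJK text), which is one reason `unidecode`'s hand-curated tables give better results.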

## Usage

```python
from cleantext import clean

clean("some input",
    fix_unicode=True,               # fix various unicode errors
    to_ascii=True,                  # transliterate to closest ASCII representation
    lower=True,                     # lowercase text
    no_line_breaks=False,           # fully strip line breaks as opposed to only normalizing them
    no_urls=False,                  # replace all URLs with a special token
    no_emails=False,                # replace all email addresses with a special token
    no_phone_numbers=False,         # replace all phone numbers with a special token
    no_numbers=False,               # replace all numbers with a special token
    no_digits=False,                # replace all digits with a special token
    no_currency_symbols=False,      # replace all currency symbols with a special token
    no_punct=False,                 # remove punctuation
    replace_with_punct="",          # instead of removing punctuation you may replace it
    replace_with_url="<URL>",
    replace_with_email="<EMAIL>",
    replace_with_phone_number="<PHONE>",
    replace_with_number="<NUMBER>",
    replace_with_digit="0",
    replace_with_currency_symbol="<CUR>",
    lang="en"                       # set to 'de' for German special handling
)
```

Carefully choose the arguments that fit your task. The default parameters are listed above.

You may also only use specific functions for cleaning. For this, take a look at the [source code](https://github.com/jfilter/clean-text/blob/main/cleantext/clean.py).
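For instance, a single URL-replacement step can be sketched with a plain regular expression. The pattern below is illustrative only; the library's actual rules are more thorough:

```python
import re

# Simplified URL pattern for illustration; clean-text's real rules
# handle many more cases (bare domains, trailing punctuation, etc.).
URL_RE = re.compile(r"https?://\S+")

def replace_urls(text: str, token: str = "<URL>") -> str:
    # Substitute every URL match with the placeholder token.
    return URL_RE.sub(token, text)

print(replace_urls("Read https://en.wikipedia.org/wiki/Moana for context"))
# -> Read <URL> for context
```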

### Supported languages

So far, only English and German are fully supported.
It should work for the majority of Western languages.
If you need some special handling for your language, feel free to contribute. 🙃

### Using `clean-text` with `scikit-learn`

There is also a **scikit-learn**-compatible API for use in your pipelines.
All of the parameters above work here as well.

```bash
pip install clean-text[gpl,sklearn]  # with the GPL-licensed unidecode
pip install clean-text[sklearn]      # without it
```

```python
from cleantext.sklearn import CleanTransformer

cleaner = CleanTransformer(no_punct=False, lower=False)

cleaner.transform(['Happily clean your text!', 'Another Input'])
```
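`CleanTransformer` follows the standard scikit-learn transformer contract: `fit` returns `self` and `transform` maps the input. The stateless sketch below illustrates that contract without depending on scikit-learn (`SimpleTextTransformer` is a hypothetical name, not part of `clean-text`):

```python
class SimpleTextTransformer:
    """Minimal sketch of the fit/transform contract CleanTransformer follows."""

    def __init__(self, func):
        self.func = func  # e.g. cleantext.clean with your chosen arguments

    def fit(self, X, y=None):
        return self  # stateless transformer: nothing to learn from the data

    def transform(self, X):
        # Apply the cleaning function element-wise to the input sequence.
        return [self.func(x) for x in X]

t = SimpleTextTransformer(str.lower)
print(t.fit(["A"]).transform(["Happily Clean Your Text!"]))
# -> ['happily clean your text!']
```

Because `fit` is a no-op, such a transformer can sit at the front of a `sklearn.pipeline.Pipeline` ahead of a vectorizer.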

## Development

[Use poetry.](https://python-poetry.org/)

## Contributing

If you have a **question**, found a **bug** or want to propose a new **feature**, have a look at the [issues page](https://github.com/jfilter/clean-text/issues).

**Pull requests** are especially welcomed when they fix bugs or improve the code quality.

If you don't like the output of `clean-text`, consider adding a [test](https://github.com/jfilter/clean-text/tree/main/tests) with your specific input and desired output.

## Related Work

### Generic text cleaning packages

-   https://github.com/pudo/normality
-   https://github.com/davidmogar/cucco
-   https://github.com/lyeoni/prenlp
-   https://github.com/s/preprocessor
-   https://github.com/artefactory/NLPretext
-   https://github.com/cbaziotis/ekphrasis

### Full-blown NLP libraries with some text cleaning

-   https://github.com/chartbeat-labs/textacy
-   https://github.com/jbesomi/texthero

### Remove or replace strings

-   https://github.com/vi3k6i5/flashtext
-   https://github.com/ddelange/retrie

### Detect dates

-   https://github.com/scrapinghub/dateparser

### Clean massive Common Crawl data

-   https://github.com/facebookresearch/cc_net

## Acknowledgements

Built upon the work by [Burton DeWilde](https://github.com/bdewilde) for [Textacy](https://github.com/chartbeat-labs/textacy).

## License

Apache-2.0
