linguaf

Name: linguaf
Version: 0.1.2
Home page: https://github.com/Perevalov/LinguaF
Summary: Python package for calculating famous measures in computational linguistics
Upload time: 2024-11-05 12:27:12
Maintainer: None
Docs URL: None
Author: Aleksandr Perevalov
Requires Python: >=3.6
License: MIT
Keywords: language features, computational linguistics, quantitative text analysis
Requirements: natasha, nltk, pymorphy3, Pyphen, pytest, setuptools, spacy, stopwordsiso
# LinguaF

![Version](https://img.shields.io/pypi/v/linguaf?logo=pypi)
![Downloads](https://img.shields.io/pypi/dm/linguaf)
![Repo size](https://img.shields.io/github/repo-size/perevalov/linguaf)

**LinguaF gives researchers and developers easy access to methods of quantitative language analysis, such as readability, complexity, diversity, and other descriptive statistics.**

## Usage

```python
documents = [
    "Pain and suffering are always inevitable for a large intelligence and a deep heart. The really great men must, I think, have great sadness on earth.",
    "To go wrong in one's own way is better than to go right in someone else's.",
    "The darker the night, the brighter the stars, The deeper the grief, the closer is God!"
]
```

### Descriptive Statistics

The following descriptive statistics are supported (`descriptive_statistics.py` module):

* Number of characters `char_count`
* Number of letters `letter_count`
* Number of punctuation characters `punctuation_count`
* Number of digits `digit_count`
* Number of syllables `syllable_count`
* Number of sentences `sentence_count`
* Number of n-syllable words `number_of_n_syllable_words`
* Number of n-syllable words for all found syllables `number_of_n_syllable_words_all`
* Average syllables per word `avg_syllable_per_word`
* Average word length `avg_word_length`
* Average sentence length `avg_sentence_length`
* Average words per sentence `avg_words_per_sentence`

Additional methods:
* Get lexical items (nouns, adjectives, verbs, adverbs) `get_lexical_items`
* Get n-grams `get_ngrams`
* Get sentences `get_sentences`
* Get words `get_words`
* Tokenize `tokenize`
* Remove punctuation `remove_punctuation`
* Remove digits `remove_digits`

Example:

```python
from linguaf import descriptive_statistics as ds


ds.avg_words_per_sentence(documents)
# Output: 15
```
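At its core, this statistic is just the total word count divided by the total sentence count. A minimal plain-Python sketch of that arithmetic, using naive regex splitting rather than the library's tokenizers (so counts on real text may differ slightly):

```python
import re

def naive_avg_words_per_sentence(documents):
    """Average words per sentence, using naive regex tokenization."""
    sentences, words = 0, 0
    for doc in documents:
        # Split on sentence-ending punctuation; drop empty fragments.
        parts = [s for s in re.split(r"[.!?]+", doc) if s.strip()]
        sentences += len(parts)
        # Count alphabetic tokens (apostrophes kept for contractions).
        words += len(re.findall(r"[A-Za-z']+", doc))
    return words / sentences

naive_avg_words_per_sentence(["The cat sat. The dog ran."])
# 6 words / 2 sentences -> 3.0
```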

### Syntactical Complexity

The following syntactical complexity metrics are supported (`syntactical_complexity.py` module): 
* Mean Dependency Distance (MDD) `mean_dependency_distance`

Example:

```python
from linguaf import syntactical_complexity as sc


sc.mean_dependency_distance(documents)
# Output: 2.375
```
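Mean Dependency Distance averages the absolute positional distance between each dependent word and its head over all dependency links in a sentence. The library derives the links with a parser (spaCy is among its requirements); the sketch below shows only the averaging step, with hypothetical hand-annotated links:

```python
def mdd_from_links(links):
    """Mean absolute head-dependent distance over dependency links.

    links: list of (head_position, dependent_position) pairs,
    given as 1-based word positions in the sentence.
    """
    distances = [abs(head - dep) for head, dep in links]
    return sum(distances) / len(distances)

# Hypothetical parse of "The stars shine": the->stars, stars->shine
mdd_from_links([(2, 1), (3, 2)])  # (1 + 1) / 2 = 1.0
```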

### Lexical Diversity

The following lexical diversity metrics are supported (`lexical_diversity.py` module): 
* Lexical Density (LD) `lexical_density`
* Type Token Ratio (TTR) `type_token_ratio`
* Herdan's Constant or Log Type Token Ratio (LogTTR) `log_type_token_ratio`
* Summer's Index `summer_index`
* Root Type Token Ratio (RootTTR) `root_type_token_ratio`

Example:

```python
from linguaf import lexical_diversity as ld


ld.log_type_token_ratio(documents)
# Output: 0.9403574963462502
```
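LogTTR (Herdan's C) is the logarithm of the number of distinct words (types) divided by the logarithm of the total number of words (tokens). A self-contained sketch of that ratio with naive lower-cased tokenization (the library's own tokenization may yield different counts):

```python
import math
import re

def naive_log_ttr(documents):
    """Herdan's C: log(#types) / log(#tokens) over naive tokens."""
    tokens = []
    for doc in documents:
        tokens += re.findall(r"[a-z']+", doc.lower())
    types = set(tokens)
    return math.log(len(types)) / math.log(len(tokens))

naive_log_ttr(["the cat and the dog"])
# 4 types, 5 tokens -> log(4)/log(5), roughly 0.861
```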

### Readability

The following readability metrics are supported (`readability.py` module): 
* Flesch Reading Ease (FRE) `flesch_reading_ease`
* Flesch-Kincaid Grade (FKG) `flesch_kincaid_grade`
* Automated Readability Index (ARI) `automated_readability_index`
* Simple Automated Readability Index (sARI) `automated_readability_index_simple`
* Coleman's Readability Score `coleman_readability`
* Easy Listening Score `easy_listening`


Example:

```python
from linguaf import readability as r


r.flesch_kincaid_grade(documents)
# Output: 4.813333333333336
```
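The Flesch-Kincaid Grade combines average sentence length and average syllables per word using the standard published coefficients. The library computes the word, sentence, and syllable counts itself; this sketch applies the formula to aggregate counts you supply:

```python
def fk_grade_from_counts(words, sentences, syllables):
    """Standard Flesch-Kincaid grade-level formula from aggregate counts:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

fk_grade_from_counts(words=100, sentences=5, syllables=120)
# 0.39*20 + 11.8*1.2 - 15.59 = 6.37
```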

## Install

### Via PIP

```bash
pip install linguaf
```

### Latest version from GitHub

```bash
git clone https://github.com/Perevalov/LinguaF.git
cd LinguaF
pip install .
```

## Language Support

At the moment, the library supports the following languages:
* English πŸ‡¬πŸ‡§ (`en`): full support
* Russian πŸ‡·πŸ‡Ί (`ru`): full support
* German πŸ‡©πŸ‡ͺ (`de`)
* French πŸ‡«πŸ‡· (`fr`)
* Spanish πŸ‡ͺπŸ‡Έ (`es`)
* Chinese πŸ‡¨πŸ‡³ (`zh`)
* Lithuanian πŸ‡±πŸ‡Ή (`lt`)
* Belarusian πŸ‡§πŸ‡Ύ (`be`)
* Ukrainian πŸ‡ΊπŸ‡¦ (`uk`)
* Armenian πŸ‡¦πŸ‡² (`hy`)

**Important:** not every method is implemented for every language. Calling a method that does not support the input language raises a `ValueError`.

## Citation

TBD



            
