===============
LexicalRichness
===============
| |pypi| |conda-forge| |latest-release| |python-ver|
| |ci-status| |rtfd| |maintained|
| |PRs| |codefactor| |isort|
| |license| |mybinder| |zenodo|
`LexicalRichness <https://github.com/lsys/lexicalrichness>`__ is a small Python module to compute textual lexical richness (aka lexical diversity) measures.

Lexical richness refers to the range and variety of vocabulary deployed in a text by a speaker/writer `(McCarthy and Jarvis 2007) <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1028.8657&rep=rep1&type=pdf>`_. Lexical richness is used interchangeably with lexical diversity, lexical variation, lexical density, and vocabulary richness, and is measured by a wide variety of indices. Uses include (but are not limited to) measuring writing quality, vocabulary knowledge `(Šišková 2012) <https://www.researchgate.net/publication/305999633_Lexical_Richness_in_EFL_Students'_Narratives>`_, speaker competence, and socioeconomic status `(McCarthy and Jarvis 2007) <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1028.8657&rep=rep1&type=pdf>`_.

See the `notebook <https://nbviewer.org/github/LSYS/LexicalRichness/blob/master/docs/example.ipynb>`_ for examples.
.. TOC

.. contents:: **Table of Contents**
   :depth: 1
   :local:
1. Installation
---------------
**Install using pip**

.. code-block:: bash

    pip install lexicalrichness

If you encounter,

.. code-block:: python

    ModuleNotFoundError: No module named 'textblob'

install textblob:

.. code-block:: bash

    pip install textblob
*Note*: This error should only occur in versions :code:`<= v0.1.3`. It was fixed in
`v0.1.4 <https://github.com/LSYS/LexicalRichness/releases/tag/0.1.4>`__ by `David Lesieur <https://github.com/davidlesieur>`__ and `Christophe Bedetti <https://github.com/cbedetti>`__.
**Install from Conda-Forge**
*LexicalRichness* is now also available on conda-forge. If you are using the `Anaconda <https://www.anaconda.com/distribution/#download-section>`__ or `Miniconda <https://docs.conda.io/en/latest/miniconda.html>`__ distribution, you can create a conda environment and install the package from conda:

.. code-block:: bash

    conda create -n lex
    conda activate lex
    conda install -c conda-forge lexicalrichness
*Note*: If you get the error :code:`CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'` when running :code:`conda activate lex` in *Bash*, either:

* run :code:`conda activate bash` in the *Anaconda Prompt* and then retry :code:`conda activate lex` in *Bash*,
* or try :code:`source activate lex` in *Bash*.
**Install manually using Git and GitHub**

.. code-block:: bash

    git clone https://github.com/LSYS/LexicalRichness.git
    cd LexicalRichness
    pip install .
**Run from the cloud**

Try the package on the cloud (without setting anything up on your local machine) by clicking the icon here:

|mybinder|
2. Quickstart
-------------
.. code-block:: python

    >>> from lexicalrichness import LexicalRichness

    # text example
    >>> text = """Measure of textual lexical diversity, computed as the mean length of sequential words in
        a text that maintains a minimum threshold TTR score.

        Iterates over words until TTR scores falls below a threshold, then increase factor
        counter by 1 and start over. McCarthy and Jarvis (2010, pg. 385) recommends a factor
        threshold in the range of [0.660, 0.750].
        (McCarthy 2005, McCarthy and Jarvis 2010)"""

    # instantiate new text object (use the tokenizer=blobber argument to use the textblob tokenizer)
    >>> lex = LexicalRichness(text)

    # Return word count.
    >>> lex.words
    57

    # Return (unique) word count.
    >>> lex.terms
    39

    # Return type-token ratio (TTR) of text.
    >>> lex.ttr
    0.6842105263157895

    # Return root type-token ratio (RTTR) of text.
    >>> lex.rttr
    5.165676192553671

    # Return corrected type-token ratio (CTTR) of text.
    >>> lex.cttr
    3.6526846651686067

    # Return mean segmental type-token ratio (MSTTR).
    >>> lex.msttr(segment_window=25)
    0.88

    # Return moving average type-token ratio (MATTR).
    >>> lex.mattr(window_size=25)
    0.8351515151515151

    # Return Measure of Textual Lexical Diversity (MTLD).
    >>> lex.mtld(threshold=0.72)
    46.79226361031519

    # Return hypergeometric distribution diversity (HD-D) measure.
    >>> lex.hdd(draws=42)
    0.7468703323966486

    # Return voc-D measure.
    >>> lex.vocd(ntokens=50, within_sample=100, iterations=3)
    46.27679899103406

    # Return Herdan's lexical diversity measure.
    >>> lex.Herdan
    0.9061378160786574

    # Return Summer's lexical diversity measure.
    >>> lex.Summer
    0.9294460323356605

    # Return Dugast's lexical diversity measure.
    >>> lex.Dugast
    43.074336212149774

    # Return Maas's lexical diversity measure.
    >>> lex.Maas
    0.023215679867353005

    # Return Yule's K.
    >>> lex.yulek
    153.8935056940597

    # Return Yule's I.
    >>> lex.yulei
    22.36764705882353

    # Return Herdan's Vm.
    >>> lex.herdanvm
    0.08539428890448784

    # Return Simpson's D.
    >>> lex.simpsond
    0.015664160401002505
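Incidentally, the sample text above describes the MTLD procedure itself. For intuition, here is a standalone sketch of a single-direction MTLD pass (a simplification: the package also averages a backward pass, so this will not match :code:`lex.mtld()` exactly):

```python
def mtld_one_direction(tokens, threshold=0.72):
    """One directional MTLD pass (sketch): count 'factors', i.e. stretches of
    text whose running TTR stays at or above the threshold; the leftover
    stretch at the end contributes a partial factor."""
    factors = 0.0
    types, count = set(), 0
    for token in tokens:
        types.add(token)
        count += 1
        if len(types) / count < threshold:  # TTR fell below threshold
            factors += 1
            types, count = set(), 0         # reset and start a new factor
    if count:                               # partial credit for the remainder
        factors += (1 - len(types) / count) / (1 - threshold)
    return len(tokens) / factors if factors else float("inf")

print(mtld_one_direction("a b a b a b a b".split()))  # 4.0
```

Higher values indicate that longer stretches of text sustain a high TTR, i.e. greater lexical diversity.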
3. Use LexicalRichness in your own pipeline
-------------------------------------------
:code:`LexicalRichness` comes packaged with minimal preprocessing + tokenization for a quick start.

But as an intermediate user, you likely have your own preferred :code:`nlp_pipeline`:

.. code-block:: python

    # Your preferred preprocessing + tokenization pipeline
    def nlp_pipeline(text):
        ...
        return list_of_tokens
Use :code:`LexicalRichness` with your own :code:`nlp_pipeline`:

.. code-block:: python

    # Initiate new LexicalRichness object with your preprocessing pipeline as input
    lex = LexicalRichness(text, preprocessor=None, tokenizer=nlp_pipeline)

    # Compute lexical richness
    mtld = lex.mtld()
Or use :code:`LexicalRichness` at the end of your pipeline and pass in the :code:`list_of_tokens` with :code:`preprocessor=None` and :code:`tokenizer=None`:

.. code-block:: python

    # Preprocess the text
    list_of_tokens = nlp_pipeline(text)

    # Initiate new LexicalRichness object with your list of tokens as input
    lex = LexicalRichness(list_of_tokens, preprocessor=None, tokenizer=None)

    # Compute lexical richness
    mtld = lex.mtld()
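For instance, a minimal hand-rolled :code:`nlp_pipeline` might look like this (the lowercasing and punctuation-stripping steps here are illustrative choices, not the package's defaults):

```python
import string

def nlp_pipeline(text):
    """Toy preprocessing + tokenization: lowercase, strip punctuation,
    split on whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return text.split()

print(nlp_pipeline("The cat, the dog, and the bird."))
# ['the', 'cat', 'the', 'dog', 'and', 'the', 'bird']
```

The resulting list can then be passed in via :code:`LexicalRichness(list_of_tokens, preprocessor=None, tokenizer=None)` as shown above.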
4. Using with Pandas
--------------------
Here's a minimal example using :code:`lexicalrichness` with a *Pandas* dataframe containing a column of text:

.. code-block:: python

    from lexicalrichness import LexicalRichness

    def mtld(text):
        lex = LexicalRichness(text)
        return lex.mtld()

    # df is a dataframe with a 'text' column
    df['mtld'] = df['text'].apply(mtld)
5. Attributes
-------------
+------------------+-----------------------------------------------------------------+
| ``wordlist``     | list of words                                                   |
+------------------+-----------------------------------------------------------------+
| ``words``        | number of words (w)                                             |
+------------------+-----------------------------------------------------------------+
| ``terms``        | number of unique terms (t)                                      |
+------------------+-----------------------------------------------------------------+
| ``preprocessor`` | preprocessor used                                               |
+------------------+-----------------------------------------------------------------+
| ``tokenizer``    | tokenizer used                                                  |
+------------------+-----------------------------------------------------------------+
| ``ttr``          | type-token ratio computed as t / w (Chotlos 1944, Templin 1957) |
+------------------+-----------------------------------------------------------------+
| ``rttr``         | root TTR computed as t / sqrt(w) (Guiraud 1954, 1960)           |
+------------------+-----------------------------------------------------------------+
| ``cttr``         | corrected TTR computed as t / sqrt(2w) (Carroll 1964)           |
+------------------+-----------------------------------------------------------------+
| ``Herdan``       | log(t) / log(w) (Herdan 1960, 1964)                             |
+------------------+-----------------------------------------------------------------+
| ``Summer``       | log(log(t)) / log(log(w)) (Summer 1966)                         |
+------------------+-----------------------------------------------------------------+
| ``Dugast``       | (log(w) ** 2) / (log(w) - log(t)) (Dugast 1978)                 |
+------------------+-----------------------------------------------------------------+
| ``Maas``         | (log(w) - log(t)) / (log(w) ** 2) (Maas 1972)                   |
+------------------+-----------------------------------------------------------------+
| ``yulek``        | Yule's K (Yule 1944, Tweedie and Baayen 1998)                   |
+------------------+-----------------------------------------------------------------+
| ``yulei``        | Yule's I (Yule 1944, Tweedie and Baayen 1998)                   |
+------------------+-----------------------------------------------------------------+
| ``herdanvm``     | Herdan's Vm (Herdan 1955, Tweedie and Baayen 1998)              |
+------------------+-----------------------------------------------------------------+
| ``simpsond``     | Simpson's D (Simpson 1949, Tweedie and Baayen 1998)             |
+------------------+-----------------------------------------------------------------+
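The first block of measures above is a closed-form function of just the word count (w) and unique-term count (t), so they are easy to verify by hand. A standalone sketch (independent of the package):

```python
import math

def closed_form_measures(tokens):
    """Sketch of the closed-form measures, driven only by the
    word count (w) and unique-term count (t)."""
    w = len(tokens)
    t = len(set(tokens))
    return {
        "ttr": t / w,                         # type-token ratio
        "rttr": t / math.sqrt(w),             # root TTR
        "cttr": t / math.sqrt(2 * w),         # corrected TTR
        "Herdan": math.log(t) / math.log(w),  # Herdan's C
        "Maas": (math.log(w) - math.log(t)) / math.log(w) ** 2,
    }

measures = closed_form_measures("the cat sat on the mat".split())
print(round(measures["ttr"], 4))  # 0.8333  (t=5 terms, w=6 words)
```

Note that TTR falls mechanically as texts get longer, which is exactly what the corrections (RTTR, CTTR, Herdan, Maas) and the windowed methods below try to compensate for.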
6. Methods
----------
+------------------+--------------------------------------------------------------------------------+
| ``msttr``        | Mean segmental TTR (Johnson 1944)                                              |
+------------------+--------------------------------------------------------------------------------+
| ``mattr``        | Moving average TTR (Covington 2007, Covington and McFall 2010)                 |
+------------------+--------------------------------------------------------------------------------+
| ``mtld``         | Measure of Textual Lexical Diversity (McCarthy 2005, McCarthy and Jarvis 2010) |
+------------------+--------------------------------------------------------------------------------+
| ``hdd``          | HD-D (McCarthy and Jarvis 2007)                                                |
+------------------+--------------------------------------------------------------------------------+
| ``vocd``         | voc-D (McKee, Malvern, and Richards 2010)                                      |
+------------------+--------------------------------------------------------------------------------+
| ``vocd_fig``     | Utility to plot empirical voc-D curve                                          |
+------------------+--------------------------------------------------------------------------------+
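To make the windowed methods concrete, here is a standalone sketch of MATTR (an illustration, not the package's implementation): slide a fixed-size window across the tokens and average the per-window TTRs.

```python
def mattr(tokens, window_size):
    """Moving average TTR (sketch): average the TTR of every contiguous
    window of `window_size` tokens."""
    if window_size >= len(tokens):
        return len(set(tokens)) / len(tokens)   # degenerate case: plain TTR
    ttrs = [
        len(set(tokens[i:i + window_size])) / window_size
        for i in range(len(tokens) - window_size + 1)
    ]
    return sum(ttrs) / len(ttrs)

print(mattr("a b a c a b d a".split(), window_size=4))  # 0.8
```

Unlike plain TTR, the window makes the score comparable across texts of different lengths.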
**Plot the empirical voc-D curve**

.. code-block:: python

    lex.vocd_fig(
        ntokens=50,         # Maximum number for the token/word size in the random samplings
        within_sample=100,  # Number of samples
        seed=42,            # Seed for reproducibility
    )
.. image:: https://raw.githubusercontent.com/LSYS/LexicalRichness/master/docs/images/vocd.png
:width: 450
**Accessing method docstrings**

.. code-block:: python

    >>> import inspect

    # docstring for hdd (HD-D)
    >>> print(inspect.getdoc(LexicalRichness.hdd))

    Hypergeometric distribution diversity (HD-D) score.

    For each term (t) in the text, compute the probability (p) of getting at least one appearance
    of t with a random draw of size n < N (text size). The contribution of t to the final HD-D
    score is p * (1/n). The final HD-D score thus sums over p * (1/n) with p computed for
    each term t. Described in McCarthy and Jarvis 2007, pp. 465-466.
    (McCarthy and Jarvis 2007)

    Parameters
    ----------
    draws: int
        Number of random draws in the hypergeometric distribution (default=42).

    Returns
    -------
    float
Alternatively, just do

.. code-block:: python

    >>> print(lex.hdd.__doc__)

    Hypergeometric distribution diversity (HD-D) score.

    For each term (t) in the text, compute the probability (p) of getting at least one appearance
    of t with a random draw of size n < N (text size). The contribution of t to the final HD-D
    score is p * (1/n). The final HD-D score thus sums over p * (1/n) with p computed for
    each term t. Described in McCarthy and Jarvis 2007, pp. 465-466.
    (McCarthy and Jarvis 2007)

    Parameters
    ----------
    draws: int
        Number of random draws in the hypergeometric distribution (default=42).

    Returns
    -------
    float
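The docstring's description translates almost line-for-line into code. A standalone sketch of HD-D using the hypergeometric "at least one appearance" probability (an illustration, not the package's exact implementation):

```python
from collections import Counter
from math import comb

def hdd(tokens, draws=42):
    """Sketch of HD-D: for each term, the probability of seeing it at least
    once in a random sample of `draws` tokens, weighted by 1/draws and
    summed over terms."""
    n = len(tokens)
    score = 0.0
    for freq in Counter(tokens).values():
        # P(term absent from the sample), via the hypergeometric distribution
        p_none = comb(n - freq, draws) / comb(n, draws)
        score += (1.0 - p_none) / draws
    return score

# A maximally diverse text (every token unique) scores 1.0
print(round(hdd([str(i) for i in range(50)], draws=42), 4))  # 1.0
```

The score can be read as the expected TTR of a random sample of `draws` tokens, which is why it is bounded by 1.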
7. Formulation & Algorithmic Details
------------------------------------
For details under the hood, please see `this section <https://lexicalrichness.readthedocs.io/en/latest/#details-of-lexical-richness-measures>`_ in the docs (or `see here <https://www.lucasshen.com/software/lexicalrichness/doc#details-of-lexical-richness-measures>`_).
8. Example use cases
--------------------
* `[1] <https://doi.org/10.1007/s10579-021-09562-4>`_ **SENTiVENT** used the metrics that LexicalRichness provides to estimate the classification difficulty of annotated categories in their corpus (Jacobs & Hoste 2020). The metrics show which categories will be more difficult for modeling approaches that rely on linguistic inputs because greater lexical diversity means greater data scarcity and more need for generalization. (h/t Gilles Jacobs)

  Jacobs, Gilles, and Véronique Hoste. "SENTiVENT: enabling supervised information extraction of company-specific events in economic and financial news." Language Resources and Evaluation (2021): 1-33.
* | `[2] <https://www.lucasshen.com/research/media.pdf>`_ **Measuring political media using text data.** This chapter of my thesis investigates whether political media bias manifests in coverage accuracy. As covariates, I use characteristics of the text data (political speech and news article transcripts). One of the ways speeches can be characterized is via lexical richness.
* `[3] <https://github.com/notnews/unreadable_news>`_ **Unreadable News: How Readable is American News?** This study characterizes modern news by readability and lexical richness. Focusing on the NYT, it finds increasing readability and lexical richness, suggesting that competition from alternative sources pushes the NYT to be accessible while maintaining its key demographic of college-educated Americans.
* `[4] <https://github.com/g-hurst/Comparing-Properties-of-German-and-English-Books>`_ **German is more complicated than English.** This study analyses a small sample of English books and compares them with their German translations. Within the sample, the German translations tend to be shorter in length but contain more unique terms than their English counterparts. LexicalRichness was used to generate the statistics modeled in the study.
9. Contributing
---------------
**Author**

`Lucas Shen <https://www.lucasshen.com/>`__

**Contributors**

.. image:: https://contrib.rocks/image?repo=lsys/lexicalrichness
   :target: https://github.com/lsys/lexicalrichness/graphs/contributors
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
See here for `how to contribute <./docs/CONTRIBUTING.rst>`__ to this project.
See here for `Contributor Code of
Conduct <http://contributor-covenant.org/version/1/0/0/>`__.
If you'd like to contribute via a Pull Request (PR), feel free to open an issue on the `Issue Tracker
<https://github.com/LSYS/LexicalRichness/issues>`__ to discuss the potential contribution via a PR.
10. Citing
----------
If you have used this codebase and wish to cite it, here is the citation metadata.
Codebase:

.. code-block:: bib

    @misc{lex,
        author = {Shen, Lucas},
        doi = {10.5281/zenodo.6607007},
        license = {MIT license},
        title = {{LexicalRichness: A small module to compute textual lexical richness}},
        url = {https://github.com/LSYS/lexicalrichness},
        year = {2022}
    }
Documentation on formulations and algorithms:

.. code-block:: bib

    @misc{accuracybias,
        title = {Measuring Political Media Slant Using Text Data},
        author = {Shen, Lucas},
        url = {https://www.lucasshen.com/research/media.pdf},
        year = {2021}
    }
The package is released under the `MIT
License <https://opensource.org/licenses/MIT>`__.
.. macros -------------------------------------------------------------------------------------------------------
.. badges
.. |pypi| image:: https://badge.fury.io/py/lexicalrichness.svg
:target: https://pypi.org/project/lexicalrichness/
.. |conda-forge| image:: https://img.shields.io/conda/vn/conda-forge/lexicalrichness
:target: https://anaconda.org/conda-forge/lexicalrichness
.. |latest-release| image:: https://img.shields.io/github/v/release/lsys/lexicalrichness
:target: https://github.com/LSYS/LexicalRichness/releases
.. |ci-status| image:: https://github.com/LSYS/LexicalRichness/actions/workflows/build.yml/badge.svg?branch=master
:target: https://github.com/LSYS/LexicalRichness/actions/workflows/build.yml
.. |python-ver| image:: https://img.shields.io/pypi/pyversions/lexicalrichness
:target: https://img.shields.io/pypi/pyversions/lexicalrichness
.. |codefactor| image:: https://www.codefactor.io/repository/github/lsys/lexicalrichness/badge
:target: https://www.codefactor.io/repository/github/lsys/lexicalrichness
.. |lgtm| image:: https://img.shields.io/lgtm/grade/python/g/LSYS/LexicalRichness.svg?logo=lgtm&logoWidth=18
:target: https://lgtm.com/projects/g/LSYS/LexicalRichness/context:python
.. |maintained| image:: https://img.shields.io/badge/Maintained%3F-yes-green.svg
:target: https://GitHub.com/Naereen/StrapDown.js/graphs/commit-
.. |PRs| image:: https://img.shields.io/badge/PRs-welcome-brightgreen.svg
:target: http://makeapullrequest.com
.. |license| image:: https://img.shields.io/github/license/LSYS/LexicalRichness?color=blue&label=License
:target: https://github.com/LSYS/LexicalRichness/blob/master/LICENSE
.. |mybinder| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/LSYS/lexicaldiversity-example/main?labpath=example.ipynb
.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.6607007.svg
:target: https://doi.org/10.5281/zenodo.6607007
.. |rtfd| image:: https://readthedocs.org/projects/lexicalrichness/badge/?version=latest
:target: https://lexicalrichness.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. |isort| image:: https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336
:target: https://pycqa.github.io/isort
:alt: Imports: isort
Raw data
{
"_id": null,
"home_page": "https://github.com/LSYS/lexicalrichness",
"name": "lexicalrichness",
"maintainer": "",
"docs_url": null,
"requires_python": "",
"maintainer_email": "",
"keywords": "lexical diversity,lexical richness,vocabulary diversity,lexical density,lexical,nlp,data science,natural language processing,information retrieval,data mining,natural langauge,lexical analysis,api,lexical analyzer,linguistic analysis,statistics",
"author": "Lucas Shen YS",
"author_email": "lucas@lucasshen.com",
"download_url": "https://files.pythonhosted.org/packages/d6/4a/f67555e6cce1f3c44291e429cb5377c6117bbe7c0fc6fa77a15674f292da/lexicalrichness-0.5.1.tar.gz",
"platform": null,
"description": "===============\nLexicalRichness\n===============\n|\t|pypi| |conda-forge| |latest-release| |python-ver| \n|\t|ci-status| |rtfd| |maintained|\n|\t|PRs| |codefactor| |isort|\n|\t|license| |mybinder| |zenodo|\n\n`LexicalRichness <https://github.com/lsys/lexicalrichness>`__ is a small Python module to compute textual lexical richness (aka lexical diversity) measures.\n\nLexical richness refers to the range and variety of vocabulary deployed in a text by a speaker/writer `(McCarthy and Jarvis 2007) <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1028.8657&rep=rep1&type=pdf>`_ . Lexical richness is used interchangeably with lexical diversity, lexical variation, lexical density, and vocabulary richness and is measured by a wide variety of indices. Uses include (but not limited to) measuring writing quality, vocabulary knowledge `(\u0160i\u0161kov\u00e1 2012) <https://www.researchgate.net/publication/305999633_Lexical_Richness_in_EFL_Students'_Narratives>`_ , speaker competence, and socioeconomic status `(McCarthy and Jarvis 2007) <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1028.8657&rep=rep1&type=pdf>`_. \nSee the `notebook <https://nbviewer.org/github/LSYS/LexicalRichness/blob/master/docs/example.ipynb>`_ for examples.\n\n.. TOC\n.. contents:: **Table of Contents**\n :depth: 1\n :local:\n\t\n1. Installation\n---------------\n**Install using PIP**\n\n.. code-block:: bash\n\n\tpip install lexicalrichness\n\nIf you encounter, \n\n.. code-block:: python\n\n\tModuleNotFoundError: No module named 'textblob'\n\ninstall textblob:\n\n.. code-block:: bash\n\n\tpip install textblob\n\n*Note*: This error should only exist for :code:`versions <= v0.1.3`. Fixed in \n`v0.1.4 <https://github.com/LSYS/LexicalRichness/releases/tag/0.1.4>`__ by `David Lesieur <https://github.com/davidlesieur>`__ and `Christophe Bedetti <https://github.com/cbedetti>`__.\n\n\n**Install from Conda-Forge**\n\n*LexicalRichness* is now also available on conda-forge. 
If you have are using the `Anaconda <https://www.anaconda.com/distribution/#download-section>`__ or `Miniconda <https://docs.conda.io/en/latest/miniconda.html>`__ distribution, you can create a conda environment and install the package from conda.\n\n.. code-block:: bash\n\n\tconda create -n lex\n\tconda activate lex \n\tconda install -c conda-forge lexicalrichness\n\n*Note*: If you get the error :code:`CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'` with :code:`conda activate lex` in *Bash* either try\n\n\t* :code:`conda activate bash` in the *Anaconda Prompt* and then retry :code:`conda activate lex` in *Bash*\n\t* or just try :code:`source activate lex` in *Bash*\n\n**Install manually using Git and GitHub**\n\n.. code-block:: bash\n\n\tgit clone https://github.com/LSYS/LexicalRichness.git\n\tcd LexicalRichness\n\tpip install .\n\n**Run from the cloud**\n\nTry the package on the cloud (without setting anything up on your local machine) by clicking the icon here: \n\n|mybinder|\n\n\n\n2. Quickstart\n-------------\n\n.. code-block:: python\n\n\t>>> from lexicalrichness import LexicalRichness\n\n\t# text example\n\t>>> text = \"\"\"Measure of textual lexical diversity, computed as the mean length of sequential words in\n \t\ta text that maintains a minimum threshold TTR score.\n\n \t\tIterates over words until TTR scores falls below a threshold, then increase factor\n \t\tcounter by 1 and start over. McCarthy and Jarvis (2010, pg. 
385) recommends a factor\n \t\tthreshold in the range of [0.660, 0.750].\n \t\t(McCarthy 2005, McCarthy and Jarvis 2010)\"\"\"\n\n\t# instantiate new text object (use the tokenizer=blobber argument to use the textblob tokenizer)\n\t>>> lex = LexicalRichness(text)\n\n\t# Return word count.\n\t>>> lex.words\n\t57\n\n\t# Return (unique) word count.\n\t>>> lex.terms\n\t39\n\n\t# Return type-token ratio (TTR) of text.\n\t>>> lex.ttr\n\t0.6842105263157895\n\n\t# Return root type-token ratio (RTTR) of text.\n\t>>> lex.rttr\n\t5.165676192553671\n\n\t# Return corrected type-token ratio (CTTR) of text.\n\t>>> lex.cttr\n\t3.6526846651686067\n\n\t# Return mean segmental type-token ratio (MSTTR).\n\t>>> lex.msttr(segment_window=25)\n\t0.88\n\n\t# Return moving average type-token ratio (MATTR).\n\t>>> lex.mattr(window_size=25)\n\t0.8351515151515151\n\n\t# Return Measure of Textual Lexical Diversity (MTLD).\n\t>>> lex.mtld(threshold=0.72)\n\t46.79226361031519\n\n\t# Return hypergeometric distribution diversity (HD-D) measure.\n\t>>> lex.hdd(draws=42)\n\t0.7468703323966486\n\t\n\t# Return voc-D measure.\n\t>>> lex.vocd(ntokens=50, within_sample=100, iterations=3)\n\t46.27679899103406\n\n\t# Return Herdan's lexical diversity measure.\n\t>>> lex.Herdan\n\t0.9061378160786574\n\n\t# Return Summer's lexical diversity measure.\n\t>>> lex.Summer\n\t0.9294460323356605\n\n\t# Return Dugast's lexical diversity measure.\n\t>>> lex.Dugast\n\t43.074336212149774\n\n\t# Return Maas's lexical diversity measure.\n\t>>> lex.Maas\n\t0.023215679867353005\n\n\t# Return Yule's K.\n\t>>> lex.yulek\n\t153.8935056940597\n\n\t# Return Yule's I.\n\t>>> lex.yulei\n\t22.36764705882353\n\t\n\t# Return Herdan's Vm.\n\t>>> lex.herdanvm\n\t0.08539428890448784\n\n\t# Return Simpson's D.\n\t>>> lex.simpsond\n\t0.015664160401002505\n\n\t\n3. 
Use LexicalRichness in your own pipeline\n-------------------------------------------\n:code:`LexicalRichness` comes packaged with minimal preprocessing + tokenization for a quick start. \n\nBut for intermediate users, you likely have your preferred :code:`nlp_pipeline`:\n\n.. code-block:: python\n\n\t# Your preferred preprocessing + tokenization pipeline\n\tdef nlp_pipeline(text):\n\t ...\n\t return list_of_tokens\n\nUse :code:`LexicalRichness` with your own :code:`nlp_pipeline`:\n\n.. code-block:: python\n\n\t# Initiate new LexicalRichness object with your preprocessing pipeline as input\n\tlex = LexicalRichness(text, preprocessor=None, tokenizer=nlp_pipeline)\n\n\t# Compute lexical richness\n\tmtld = lex.mtld()\n\t\nOr use :code:`LexicalRichness` at the end of your pipeline and input the :code:`list_of_tokens` with :code:`preprocessor=None` and :code:`tokenizer=None`:\n\t\n.. code-block:: python\n\n\t# Preprocess the text\n\tlist_of_tokens = nlp_pipeline(text)\n\t\n\t# Initiate new LexicalRichness object with your list of tokens as input\n\tlex = LexicalRichness(list_of_tokens, preprocessor=None, tokenizer=None)\n\n\t# Compute lexical richness\n\tmtld = lex.mtld()\t\n\t\n4. Using with Pandas\n--------------------\nHere's a minimal example using `lexicalrichness` with a `Pandas` `dataframe` with a column containing text:\n\n.. code-block:: python\n\n\tdef mtld(text):\n\t lex = LexicalRichness(text)\n\t return lex.mtld()\n\t\t\n\tdf['mtld'] = df['text'].apply(mtld)\n\n\n5. 
Attributes\n-------------\n\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``wordlist`` | list of words \t\t |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``words`` \t\t | number of words (w) \t\t\t\t \t\t\t |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``terms``\t\t | number of unique terms (t)\t\t\t |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``preprocessor`` | preprocessor used\t\t |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``tokenizer`` | tokenizer used\t\t |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``ttr``\t\t | type-token ratio computed as t / w (Chotlos 1944, Templin 1957) \t |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``rttr``\t | root TTR computed as t / sqrt(w) (Guiraud 1954, 1960) |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``cttr``\t | corrected TTR computed as t / sqrt(2w) (Carrol 1964)\t\t |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``Herdan`` \t | log(t) / log(w) (Herdan 1960, 1964) |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``Summer`` \t | log(log(t)) / log(log(w)) (Summer 1966) |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``Dugast`` \t | (log(w) ** 2) / (log(w) - log(t) (Dugast 1978)\t\t\t\t 
|\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``Maas`` \t | (log(w) - log(t)) / (log(w) ** 2) (Maas 1972) |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``yulek``\t | Yule's K (Yule 1944, Tweedie and Baayen 1998) |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``yulei``\t | Yule's I (Yule 1944, Tweedie and Baayen 1998) |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``herdanvm``\t | Herdan's Vm (Herdan 1955, Tweedie and Baayen 1998) |\n+-------------------------+-----------------------------------------------------------------------------------+\n| ``simpsond``\t | Simpson's D (Simpson 1949, Tweedie and Baayen 1998) |\n+-------------------------+-----------------------------------------------------------------------------------+\n\n6. 
6. Methods
----------

+-------------------------+-----------------------------------------------------------------------------------+
| ``msttr``               | Mean segmental TTR (Johnson 1944)                                                 |
+-------------------------+-----------------------------------------------------------------------------------+
| ``mattr``               | Moving average TTR (Covington 2007, Covington and McFall 2010)                    |
+-------------------------+-----------------------------------------------------------------------------------+
| ``mtld``                | Measure of Textual Lexical Diversity (McCarthy 2005, McCarthy and Jarvis 2010)    |
+-------------------------+-----------------------------------------------------------------------------------+
| ``hdd``                 | HD-D (McCarthy and Jarvis 2007)                                                   |
+-------------------------+-----------------------------------------------------------------------------------+
| ``vocd``                | voc-D (McKee, Malvern, and Richards 2000)                                         |
+-------------------------+-----------------------------------------------------------------------------------+
| ``vocd_fig``            | Utility to plot the empirical voc-D curve                                         |
+-------------------------+-----------------------------------------------------------------------------------+

**Plot the empirical voc-D curve**

.. code-block:: python

    lex.vocd_fig(
        ntokens=50,         # maximum token/word sample size in the random samplings
        within_sample=100,  # number of samples
        seed=42,            # seed for reproducibility
    )

.. image:: https://raw.githubusercontent.com/LSYS/LexicalRichness/master/docs/images/vocd.png
    :width: 450


**Accessing method docstrings**

.. code-block:: python

    >>> import inspect

    >>> # docstring for hdd (HD-D)
    >>> print(inspect.getdoc(LexicalRichness.hdd))

    Hypergeometric distribution diversity (HD-D) score.

    For each term (t) in the text, compute the probability (p) of getting at least one appearance
    of t with a random draw of size n < N (text size). The contribution of t to the final HD-D
    score is p * (1/n). The final HD-D score thus sums over p * (1/n) with p computed for
    each term t. Described in McCarthy and Jarvis (2007), pp. 465-466.

    Parameters
    ----------
    draws: int
        Number of random draws in the hypergeometric distribution (default=42).

    Returns
    -------
    float

Alternatively, just do

.. code-block:: python

    >>> print(lex.hdd.__doc__)

    Hypergeometric distribution diversity (HD-D) score.

        For each term (t) in the text, compute the probability (p) of getting at least one appearance
        of t with a random draw of size n < N (text size). The contribution of t to the final HD-D
        score is p * (1/n). The final HD-D score thus sums over p * (1/n) with p computed for
        each term t. Described in McCarthy and Jarvis (2007), pp. 465-466.

        Parameters
        ----------
        draws: int
            Number of random draws in the hypergeometric distribution (default=42).

        Returns
        -------
        float

(Note the preserved indentation: unlike ``inspect.getdoc``, ``__doc__`` is not dedented.)


7. Formulation & Algorithmic Details
------------------------------------
For details under the hood, please see `this section <https://lexicalrichness.readthedocs.io/en/latest/#details-of-lexical-richness-measures>`_ in the docs (or `see here <https://www.lucasshen.com/software/lexicalrichness/doc#details-of-lexical-richness-measures>`_).


8. Example use cases
--------------------
* `[1] <https://doi.org/10.1007/s10579-021-09562-4>`_ **SENTiVENT** used the metrics that LexicalRichness provides to estimate the classification difficulty of annotated categories in its corpus (Jacobs & Hoste 2020). The metrics show which categories will be more difficult for modeling approaches that rely on linguistic inputs, because greater lexical diversity means greater data scarcity and more need for generalization. (h/t Gilles Jacobs)

	Jacobs, Gilles, and Véronique Hoste. "SENTiVENT: enabling supervised information extraction of company-specific events in economic and financial news." Language Resources and Evaluation (2021): 1-33.

* `[2] <https://www.lucasshen.com/research/media.pdf>`_ **Measuring political media using text data.** This chapter of my thesis investigates whether political media bias manifests in coverage accuracy. As covariates, I use characteristics of the text data (political speech and news article transcripts). One of the ways speeches can be characterized is via lexical richness.

* `[3] <https://github.com/notnews/unreadable_news>`_ **Unreadable News: How Readable is American News?** This study characterizes modern news by readability and lexical richness. Focusing on the NYT, it finds increasing readability and lexical richness, suggesting that the NYT responds to competition from alternative sources by remaining accessible while maintaining its key demographic of college-educated Americans.

* `[4] <https://github.com/g-hurst/Comparing-Properties-of-German-and-English-Books>`_ **German is more complicated than English.** This study analyses a small sample of English books and compares them to their German translations. Within the sample, the German translations tend to be shorter in length but contain more unique terms than their English counterparts. LexicalRichness was used to generate the statistics modeled in the study.


9. Contributing
---------------
**Author**

`Lucas Shen <https://www.lucasshen.com/>`__

**Contributors**

.. image:: https://contrib.rocks/image?repo=lsys/lexicalrichness
   :target: https://github.com/lsys/lexicalrichness/graphs/contributors

Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
See here for `how to contribute <./docs/CONTRIBUTING.rst>`__ to this project.
See here for the `Contributor Code of Conduct <http://contributor-covenant.org/version/1/0/0/>`__.

If you'd like to contribute via a Pull Request (PR), feel free to open an issue on the `Issue Tracker <https://github.com/LSYS/LexicalRichness/issues>`__ to discuss the potential contribution.

10. Citing
----------
If you have used this codebase and wish to cite it, here is the citation metadata.

Codebase:

.. code-block:: bib

	@misc{lex,
		author = {Shen, Lucas},
		doi = {10.5281/zenodo.6607007},
		license = {MIT license},
		title = {{LexicalRichness: A small module to compute textual lexical richness}},
		url = {https://github.com/LSYS/lexicalrichness},
		year = {2022}
	}

Documentation on formulations and algorithms:

.. code-block:: bib

	@misc{accuracybias,
		title = {Measuring Political Media Slant Using Text Data},
		author = {Shen, Lucas},
		url = {https://www.lucasshen.com/research/media.pdf},
		year = {2021}
	}

The package is released under the `MIT License <https://opensource.org/licenses/MIT>`__.

.. macros -------------------------------------------------------------------------------------------------------
.. badges
.. |pypi| image:: https://badge.fury.io/py/lexicalrichness.svg
	:target: https://pypi.org/project/lexicalrichness/
.. |conda-forge| image:: https://img.shields.io/conda/vn/conda-forge/lexicalrichness
	:target: https://anaconda.org/conda-forge/lexicalrichness
.. |latest-release| image:: https://img.shields.io/github/v/release/lsys/lexicalrichness
	:target: https://github.com/LSYS/LexicalRichness/releases
.. |ci-status| image:: https://github.com/LSYS/LexicalRichness/actions/workflows/build.yml/badge.svg?branch=master
	:target: https://github.com/LSYS/LexicalRichness/actions/workflows/build.yml
.. |python-ver| image:: https://img.shields.io/pypi/pyversions/lexicalrichness
	:target: https://img.shields.io/pypi/pyversions/lexicalrichness
.. |codefactor| image:: https://www.codefactor.io/repository/github/lsys/lexicalrichness/badge
	:target: https://www.codefactor.io/repository/github/lsys/lexicalrichness
.. |maintained| image:: https://img.shields.io/badge/Maintained%3F-yes-green.svg
	:target: https://GitHub.com/Naereen/StrapDown.js/graphs/commit-
.. |PRs| image:: https://img.shields.io/badge/PRs-welcome-brightgreen.svg
	:target: http://makeapullrequest.com
.. |license| image:: https://img.shields.io/github/license/LSYS/LexicalRichness?color=blue&label=License
	:target: https://github.com/LSYS/LexicalRichness/blob/master/LICENSE
.. |mybinder| image:: https://mybinder.org/badge_logo.svg
	:target: https://mybinder.org/v2/gh/LSYS/lexicaldiversity-example/main?labpath=example.ipynb
.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.6607007.svg
	:target: https://doi.org/10.5281/zenodo.6607007
.. |rtfd| image:: https://readthedocs.org/projects/lexicalrichness/badge/?version=latest
	:target: https://lexicalrichness.readthedocs.io/en/latest/?badge=latest
	:alt: Documentation Status
.. |isort| image:: https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336
	:target: https://pycqa.github.io/isort
	:alt: Imports: isort
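For intuition about what ``hdd`` computes, the docstring's description above can be sketched with the standard library alone. This is an illustrative toy (the helper name ``hdd_sketch`` and the naive whitespace tokenization are this sketch's own, and the text must contain at least ``draws`` tokens), not the library's implementation; use ``LexicalRichness.hdd`` in practice.

.. code-block:: python

    from collections import Counter
    from math import comb

    def hdd_sketch(text, draws=42):
        """Toy HD-D: for each term, add p * (1/draws), where p is the
        hypergeometric probability that the term appears at least once
        in a random sample of `draws` tokens (drawn without replacement)."""
        tokens = text.lower().split()  # naive whitespace tokenization
        N = len(tokens)
        score = 0.0
        for freq in Counter(tokens).values():
            # P(term absent from the sample); comb(n, k) is 0 when k > n,
            # i.e. when every sample of size `draws` must contain the term
            p_none = comb(N - freq, draws) / comb(N, draws)
            score += (1 - p_none) * (1 / draws)
        return score

As a sanity check: for a text whose tokens are all distinct, each term appears in the sample with probability ``draws/N``, so the contributions sum to exactly 1, the measure's maximum.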
"bugtrack_url": null,
"license": "MIT license",
"summary": "A small module to compute textual lexical richness (aka lexical diversity).",
"version": "0.5.1",
"project_urls": {
"Download": "https://github.com/LSYS/LexicalRichness/archive/refs/tags/v0.5.1.tar.gz",
"Homepage": "https://github.com/LSYS/lexicalrichness"
},
"split_keywords": [
"lexical diversity",
"lexical richness",
"vocabulary diversity",
"lexical density",
"lexical",
"nlp",
"data science",
"natural language processing",
"information retrieval",
"data mining",
"natural langauge",
"lexical analysis",
"api",
"lexical analyzer",
"linguistic analysis",
"statistics"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "d64af67555e6cce1f3c44291e429cb5377c6117bbe7c0fc6fa77a15674f292da",
"md5": "94732797f878c938e57f340a9a094af4",
"sha256": "e38ba6753d4c48c58cb9bd7c12d548e1565e4d4a657976fb2e02dc196f44d97f"
},
"downloads": -1,
"filename": "lexicalrichness-0.5.1.tar.gz",
"has_sig": false,
"md5_digest": "94732797f878c938e57f340a9a094af4",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 97765,
"upload_time": "2023-08-27T05:26:20",
"upload_time_iso_8601": "2023-08-27T05:26:20.835656Z",
"url": "https://files.pythonhosted.org/packages/d6/4a/f67555e6cce1f3c44291e429cb5377c6117bbe7c0fc6fa77a15674f292da/lexicalrichness-0.5.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-08-27 05:26:20",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "LSYS",
"github_project": "lexicalrichness",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "lexicalrichness"
}