tmtoolkit: Text mining and topic modeling toolkit
=================================================
|pypi| |pypi_downloads| |rtd| |runtests| |coverage| |zenodo|
*tmtoolkit* is a set of tools for text mining and topic modeling with Python, developed especially for use in the
social sciences, linguistics, journalism, and related disciplines. It aims for easy installation, extensive documentation,
and a clear programming interface while offering good performance on large datasets by means of vectorized
operations (via NumPy) and parallel computation (using Python's *multiprocessing* module and the
`loky <https://loky.readthedocs.io/>`_ package). tmtoolkit's text mining capabilities are built around
`SpaCy <https://spacy.io/>`_, which offers `many language models <https://spacy.io/models>`_.
The documentation for tmtoolkit is available at `tmtoolkit.readthedocs.org <https://tmtoolkit.readthedocs.org>`_ and
the GitHub code repository is at
`github.com/internaut/tmtoolkit <https://github.com/internaut/tmtoolkit>`_.
Requirements and installation
-----------------------------
**tmtoolkit works with Python 3.8 or newer (tested up to Python 3.11).**
.. note:: Two dependencies don't work with Python 3.11 so far: *lda* and *wordcloud*. If you want to
   do topic modeling via LDA and/or want to use word cloud visualizations, you must use Python 3.8 to 3.10 or
   wait until lda and wordcloud receive updates that make them work under Python 3.11.
The tmtoolkit package is highly modular and tries to install as few dependencies as possible. For requirements and
installation procedures, please have a look at the
`installation section in the documentation <https://tmtoolkit.readthedocs.io/en/latest/install.html>`_. In short,
the recommended way of installing tmtoolkit is to create and activate a
`Python Virtual Environment ("venv") <https://docs.python.org/3/tutorial/venv.html>`_ and then install tmtoolkit with
a recommended set of dependencies and a list of language models via the following:
.. code-block:: text

    pip install -U "tmtoolkit[recommended]"
    # add or remove language codes in the list to install the models that you need;
    # don't use spaces in the list of languages
    python -m tmtoolkit setup en,de
Again, you should have a look at the detailed
`installation instructions <https://tmtoolkit.readthedocs.io/en/latest/install.html>`_ in order to install additional
packages that enable more features such as topic modeling.
Features
--------
Text preprocessing
^^^^^^^^^^^^^^^^^^
The tmtoolkit package offers several text preprocessing and text mining methods, including:
- `tokenization, sentence segmentation, part-of-speech (POS) tagging, named-entity recognition (NER) <https://tmtoolkit.readthedocs.io/en/latest/text_corpora.html#Configuring-the-NLP-pipeline,-parallel-processing-and-more-via-Corpus-parameters>`_ (via SpaCy)
- `lemmatization and token normalization <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Lemmatization-and-token-normalization>`_
- extensive `pattern matching capabilities <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Common-parameters-for-pattern-matching-functions>`_
(exact matching, regular expressions or "glob" patterns) to be used in many
methods of the package, e.g. for filtering on token or document level, or for
`keywords-in-context (KWIC) <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Keywords-in-context-(KWIC)-and-general-filtering-methods>`_
- adding and managing
`custom document and token attributes <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Working-with-document-and-token-attributes>`_
- accessing text corpora along with their
`document and token attributes as dataframes <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Accessing-tokens-and-token-attributes>`_
- calculating and `visualizing corpus summary statistics <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Visualizing-corpus-summary-statistics>`_
- identifying and joining `collocations <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Identifying-and-joining-token-collocations>`_
- calculating `token cooccurrences <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Token-cooccurrence-matrices>`_
- `splitting and sampling corpora <https://tmtoolkit.readthedocs.io/en/latest/text_corpora.html#Corpus-functions-for-document-management>`_
- generating `n-grams <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Generating-n-grams>`_ and using
`N-gram models <https://tmtoolkit.readthedocs.io/en/latest/api.html#module-tmtoolkit.ngrammodels>`_
- generating `sparse document-term matrices <https://tmtoolkit.readthedocs.io/en/latest/preprocessing.html#Generating-a-sparse-document-term-matrix-(DTM)>`_
Wherever possible and useful, these methods can operate in parallel to speed up computations with large datasets.
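As a rough, library-agnostic illustration of one of these steps, n-gram generation over a token sequence can be sketched in plain Python (this is a conceptual sketch, not the tmtoolkit API):

```python
from typing import List, Tuple

def ngrams(tokens: List[str], n: int) -> List[Tuple[str, ...]]:
    """Generate all contiguous n-grams from a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

toks = ["the", "quick", "brown", "fox"]
print(ngrams(toks, 2))
# [('the', 'quick'), ('quick', 'brown'), ('brown', 'fox')]
```

tmtoolkit's own n-gram functions additionally handle whole corpora, join options, and parallelization; see the linked documentation.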
Topic modeling
^^^^^^^^^^^^^^
* `model computation in parallel <https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html#Computing-topic-models-in-parallel>`_ for different corpora
  and/or parameter sets
* support for `lda <http://pythonhosted.org/lda/>`_,
`scikit-learn <http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html>`_
and `gensim <https://radimrehurek.com/gensim/>`_ topic modeling backends
* `evaluation of topic models <https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html#Evaluation-of-topic-models>`_ (e.g. in order to find an optimal number
  of topics for a given dataset) using several implemented metrics:

  * model coherence (`Mimno et al. 2011 <https://dl.acm.org/citation.cfm?id=2145462>`_) or with
    `metrics implemented in Gensim <https://radimrehurek.com/gensim/models/coherencemodel.html>`_
* KL divergence method (`Arun et al. 2010 <http://doi.org/10.1007/978-3-642-13657-3_43>`_)
* probability of held-out documents (`Wallach et al. 2009 <https://doi.org/10.1145/1553374.1553515>`_)
* pair-wise cosine distance method (`Cao Juan et al. 2009 <http://doi.org/10.1016/j.neucom.2008.06.011>`_)
* harmonic mean method (`Griffiths, Steyvers 2004 <http://doi.org/10.1073/pnas.0307752101>`_)
* the loglikelihood or perplexity methods natively implemented in lda, sklearn or gensim
* `plotting of evaluation results <https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html#Evaluation-of-topic-models>`_
* `common statistics for topic models <https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html#Common-statistics-and-tools-for-topic-models>`_ such as
word saliency and distinctiveness (`Chuang et al. 2012 <https://dl.acm.org/citation.cfm?id=2254572>`_), topic-word
relevance (`Sievert and Shirley 2014 <https://www.aclweb.org/anthology/W14-3110>`_)
* `finding / filtering topics with pattern matching <https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html#Filtering-topics>`_
* `export estimated document-topic and topic-word distributions to Excel
<https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html#Displaying-and-exporting-topic-modeling-results>`_
* `visualize topic-word distributions and document-topic distributions <https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html#Visualizing-topic-models>`_
as word clouds or heatmaps
* model coherence (`Mimno et al. 2011 <https://dl.acm.org/citation.cfm?id=2145462>`_) for individual topics
* integrate `PyLDAVis <https://pyldavis.readthedocs.io/en/latest/>`_ to visualize results
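To give an idea of what one of these evaluation metrics measures, the pair-wise cosine distance method (Cao Juan et al. 2009) rewards topic-word distributions that are well separated from each other. A minimal pure-Python sketch of the underlying quantity (mean pairwise cosine similarity between topics; the actual tmtoolkit implementation differs in details and operates on full model outputs):

```python
import math
from itertools import combinations

def cosine_sim(p, q):
    """Cosine similarity between two word-probability vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q)

def mean_pairwise_cosine(topic_word):
    """Mean cosine similarity over all topic pairs; lower means better separated topics."""
    sims = [cosine_sim(p, q) for p, q in combinations(topic_word, 2)]
    return sum(sims) / len(sims)

# two well-separated topics vs. two very similar topics (toy distributions)
separated = [[0.90, 0.05, 0.05], [0.05, 0.05, 0.90]]
similar = [[0.50, 0.30, 0.20], [0.45, 0.35, 0.20]]
print(mean_pairwise_cosine(separated) < mean_pairwise_cosine(similar))  # True
```

When comparing models with different numbers of topics, the model with the lowest mean pairwise similarity is preferred under this metric.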
Other features
^^^^^^^^^^^^^^
- loading and cleaning of raw text from
`text files, tabular files (CSV or Excel), ZIP files or folders <https://tmtoolkit.readthedocs.io/en/latest/text_corpora.html#Loading-text-data>`_
- `splitting and joining documents <https://tmtoolkit.readthedocs.io/en/latest/text_corpora.html#Corpus-functions-for-document-management>`_
- `common statistics and transformations for document-term matrices <https://tmtoolkit.readthedocs.io/en/latest/bow.html>`_ like word cooccurrence and *tf-idf*
- `interoperability with R <https://tmtoolkit.readthedocs.io/en/latest/rinterop.html>`_
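As a conceptual sketch of one such transformation, *tf-idf* weighting of a document-term matrix can be written in plain Python as follows (tmtoolkit's own implementation works on sparse matrices and supports smoothing variants; this toy version uses the unsmoothed formula):

```python
import math

def tfidf(dtm):
    """Compute tf-idf weights for a dense document-term count matrix (list of rows)."""
    n_docs = len(dtm)
    n_terms = len(dtm[0])
    # document frequency: number of documents containing each term
    df = [sum(1 for doc in dtm if doc[j] > 0) for j in range(n_terms)]
    out = []
    for doc in dtm:
        total = sum(doc)
        out.append([(cnt / total) * math.log(n_docs / df[j]) if df[j] else 0.0
                    for j, cnt in enumerate(doc)])
    return out

dtm = [[3, 0, 1],
       [0, 2, 1]]
weights = tfidf(dtm)
# the third term appears in every document, so its idf (and hence its weight) is zero
print(weights[0][2], weights[1][2])  # 0.0 0.0
```

Terms that occur in every document receive zero weight, while terms concentrated in few documents are up-weighted.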
Limits
------
* only languages for which `SpaCy language models <https://spacy.io/models>`_ are available are supported
* all data must reside in memory, i.e. no streaming of large data from the hard disk (which for example
`Gensim <https://radimrehurek.com/gensim/>`_ supports)
Contribute
----------
If you'd like to contribute, please read the `developer documentation <https://tmtoolkit.readthedocs.io/en/latest/development.html>`_ first.
License
-------
Code licensed under `Apache License 2.0 <https://www.apache.org/licenses/LICENSE-2.0>`_.
See `LICENSE <https://github.com/internaut/tmtoolkit/blob/master/LICENSE>`_ file.
.. |pypi| image:: https://badge.fury.io/py/tmtoolkit.svg
:target: https://badge.fury.io/py/tmtoolkit
:alt: PyPI Version
.. |pypi_downloads| image:: https://img.shields.io/pypi/dm/tmtoolkit
:target: https://pypi.org/project/tmtoolkit/
:alt: Downloads from PyPI
.. |runtests| image:: https://github.com/internaut/tmtoolkit/actions/workflows/runtests.yml/badge.svg
:target: https://github.com/internaut/tmtoolkit/actions/workflows/runtests.yml
:alt: GitHub Actions CI Build Status
.. |coverage| image:: https://raw.githubusercontent.com/internaut/tmtoolkit/master/coverage.svg?sanitize=true
:target: https://github.com/internaut/tmtoolkit/tree/master/tests
:alt: Coverage status
.. |rtd| image:: https://readthedocs.org/projects/tmtoolkit/badge/?version=latest
:target: https://tmtoolkit.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. |zenodo| image:: https://zenodo.org/badge/109812180.svg
:target: https://zenodo.org/badge/latestdoi/109812180
:alt: Citable Zenodo DOI