======================================================
Simplemma: a simple multilingual lemmatizer for Python
======================================================
.. image:: https://img.shields.io/pypi/v/simplemma.svg
   :target: https://pypi.python.org/pypi/simplemma
   :alt: Python package

.. image:: https://img.shields.io/pypi/l/simplemma.svg
   :target: https://pypi.python.org/pypi/simplemma
   :alt: License

.. image:: https://img.shields.io/pypi/pyversions/simplemma.svg
   :target: https://pypi.python.org/pypi/simplemma
   :alt: Python versions

.. image:: https://img.shields.io/codecov/c/github/adbar/simplemma.svg
   :target: https://codecov.io/gh/adbar/simplemma
   :alt: Code Coverage

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
   :target: https://github.com/psf/black
   :alt: Code style: black

.. image:: https://img.shields.io/badge/DOI-10.5281%2Fzenodo.4673264-brightgreen
   :target: https://doi.org/10.5281/zenodo.4673264
   :alt: Reference DOI: 10.5281/zenodo.4673264
Purpose
-------
`Lemmatization <https://en.wikipedia.org/wiki/Lemmatisation>`_ is the process of grouping together the inflected forms of a word so they can be analysed as a single item, identified by the word's lemma, or dictionary form. Unlike stemming, lemmatization outputs word units that are still valid linguistic forms.
In modern natural language processing (NLP), this task is often indirectly tackled by more complex systems encompassing a whole processing pipeline. However, there is no straightforward way to address lemmatization on its own in Python, although the task can be crucial in fields such as information retrieval and NLP.
*Simplemma* provides a simple, multilingual approach to the search for base forms or lemmata. It may not be as powerful as full-fledged solutions, but it is generic, easy to install, and straightforward to use. In particular, it does not need morphosyntactic information and can process a raw series of tokens, or even a text, with its built-in tokenizer. By design it should be reasonably fast and work in a large majority of cases, without being perfect.
With its comparatively small footprint it is especially useful when speed and simplicity matter, in low-resource contexts, for educational purposes, or as a baseline system for lemmatization and morphological analysis.
Currently, 49 languages are partly or fully supported (see table below).
Installation
------------
The current library is written in pure Python with no dependencies:
``pip install simplemma``
- ``pip3`` where applicable
- ``pip install -U simplemma`` for updates
Usage
-----
Word-by-word
~~~~~~~~~~~~
Simplemma is used by selecting a language of interest and then applying the corresponding data to a list of words.
.. code-block:: python

    >>> import simplemma
    # get a word
    >>> myword = 'masks'
    # decide which language to use and apply it to a word form
    >>> simplemma.lemmatize(myword, lang='en')
    'mask'
    # grab a list of tokens
    >>> mytokens = ['Hier', 'sind', 'Vaccines']
    >>> for token in mytokens:
    ...     simplemma.lemmatize(token, lang='de')
    'hier'
    'sein'
    'Vaccines'
    # list comprehensions can be faster
    >>> [simplemma.lemmatize(t, lang='de') for t in mytokens]
    ['hier', 'sein', 'Vaccines']
Chaining several languages can improve coverage; they are tried in sequence:
.. code-block:: python

    >>> from simplemma import lemmatize
    >>> lemmatize('Vaccines', lang=('de', 'en'))
    'vaccine'
    >>> lemmatize('spaghettis', lang='it')
    'spaghettis'
    >>> lemmatize('spaghettis', lang=('it', 'fr'))
    'spaghetti'
    >>> lemmatize('spaghetti', lang=('it', 'fr'))
    'spaghetto'
For certain languages, a greedier decomposition is activated by default because it proves beneficial, mostly owing to a certain capacity to address affixes in an unsupervised way. It can also be triggered manually by setting the ``greedy`` parameter to ``True``.
This option also triggers a stronger reduction through a further iteration of the search algorithm, e.g. "angekündigten" → "angekündigt" (standard) → "ankündigen" (greedy). In some cases it may be closer to stemming than to lemmatization.
.. code-block:: python

    # same example as before, comes to this result in one step
    >>> simplemma.lemmatize('spaghettis', lang=('it', 'fr'), greedy=True)
    'spaghetto'
    # German case described above
    >>> simplemma.lemmatize('angekündigten', lang='de', greedy=True)
    'ankündigen' # 2 steps: reduction to infinitive verb
    >>> simplemma.lemmatize('angekündigten', lang='de', greedy=False)
    'angekündigt' # 1 step: reduction to past participle
The additional function ``is_known()`` checks if a given word is present in the language data:
.. code-block:: python

    >>> from simplemma import is_known
    >>> is_known('spaghetti', lang='it')
    True
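Since ``is_known()`` only performs a lookup in the language data, it can be combined with the lemmatizer to spot tokens the data does not cover. An illustrative sketch (the second token is deliberately made up):

.. code-block:: python

    >>> from simplemma import is_known
    >>> [t for t in ['spaghetti', 'fhqwhgads'] if not is_known(t, lang='it')]
    ['fhqwhgads']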
Tokenization
~~~~~~~~~~~~
A simple tokenization function is included for convenience:
.. code-block:: python

    >>> from simplemma import simple_tokenizer
    >>> simple_tokenizer('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.')
    ['Lorem', 'ipsum', 'dolor', 'sit', 'amet', ',', 'consectetur', 'adipiscing', 'elit', ',', 'sed', 'do', 'eiusmod', 'tempor', 'incididunt', 'ut', 'labore', 'et', 'dolore', 'magna', 'aliqua', '.']
    # use iterator instead
    >>> simple_tokenizer('Lorem ipsum dolor sit amet', iterate=True)
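With ``iterate=True`` the function returns a generator instead of a list; it can be consumed lazily or materialized, for example:

.. code-block:: python

    >>> tokens = simple_tokenizer('Lorem ipsum dolor sit amet', iterate=True)
    >>> list(tokens)
    ['Lorem', 'ipsum', 'dolor', 'sit', 'amet']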
The functions ``text_lemmatizer()`` and ``lemma_iterator()`` chain tokenization and lemmatization. They can take ``greedy`` (affecting lemmatization) and ``silent`` (affecting errors and logging) as arguments:
.. code-block:: python

    >>> from simplemma import text_lemmatizer
    >>> sentence = 'Sou o intervalo entre o que desejo ser e os outros me fizeram.'
    >>> text_lemmatizer(sentence, lang='pt')
    # caveat: desejo is also a noun, should be desejar here
    ['ser', 'o', 'intervalo', 'entre', 'o', 'que', 'desejo', 'ser', 'e', 'o', 'outro', 'me', 'fazer', '.']
    # same principle, returns a generator and not a list
    >>> from simplemma import lemma_iterator
    >>> lemma_iterator(sentence, lang='pt')
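The generator variant can be consumed on demand, which avoids holding all lemmata in memory for long texts:

.. code-block:: python

    >>> for lemma in lemma_iterator(sentence, lang='pt'):
    ...     print(lemma)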
Caveats
~~~~~~~
.. code-block:: python

    # don't expect too much though
    # this diminutive form isn't in the model data
    >>> simplemma.lemmatize('spaghettini', lang='it')
    'spaghettini' # should read 'spaghettino'
    # the algorithm cannot choose between valid alternatives yet
    >>> simplemma.lemmatize('son', lang='es')
    'son' # valid common noun, but what about the verb form?
As the focus lies on overall coverage, some short frequent words (typically pronouns and conjunctions) may need post-processing; this generally concerns a few dozen tokens per language.
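A minimal post-processing sketch, assuming a small hand-made override table; the ``OVERRIDES`` mapping and the helper function below are hypothetical, not part of simplemma's API:

.. code-block:: python

    import simplemma

    # hypothetical per-language overrides for short frequent words
    OVERRIDES = {'es': {'son': 'ser'}}

    def lemmatize_with_overrides(token, lang):
        # fall back to simplemma whenever no override is defined
        override = OVERRIDES.get(lang, {}).get(token.lower())
        return override if override is not None else simplemma.lemmatize(token, lang=lang)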
The current absence of morphosyntactic information is both an advantage in terms of simplicity and an impassable frontier regarding lemmatization accuracy, e.g. for the disambiguation of past participles and adjectives derived from verbs in Germanic and Romance languages. In such cases, ``simplemma`` often leaves the input word unchanged.
The greedy algorithm seldom produces invalid forms. It is designed to work best in the low-frequency range, notably for compound words and neologisms. Aggressive decomposition is only useful as a general approach in the case of morphologically-rich languages, where it can also act as a linguistically motivated stemmer.
Bug reports over the `issues page <https://github.com/adbar/simplemma/issues>`_ are welcome.
Language detection
~~~~~~~~~~~~~~~~~~
Language detection works by providing a text and a tuple ``lang`` consisting of the languages of interest. Scores between 0 and 1 are returned.
The ``lang_detector()`` function returns a list of language codes along with their scores, adding "unk" for unknown or out-of-vocabulary tokens. The proportion of words in the target language can also be computed with ``in_target_language()``, which returns a single ratio.
.. code-block:: python

    # import necessary functions
    >>> from simplemma.langdetect import in_target_language, lang_detector
    # language detection
    >>> lang_detector('"Moderní studie narazily na několik tajemství." Extracted from Wikipedia.', lang=("cs", "sk"))
    [('cs', 0.625), ('unk', 0.375), ('sk', 0.125)]
    # proportion of known words
    >>> in_target_language("opera post physica posita (τὰ μετὰ τὰ φυσικά)", lang="la")
    0.5
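The returned list appears to be sorted by score, so the most likely language can be read off the first entry:

.. code-block:: python

    >>> results = lang_detector('"Moderní studie narazily na několik tajemství." Extracted from Wikipedia.', lang=("cs", "sk"))
    >>> results[0][0]
    'cs'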
Supported languages
-------------------
The following languages are available using their `BCP 47 language tag <https://en.wikipedia.org/wiki/IETF_language_tag>`_, which is usually the `ISO 639-1 code <https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes>`_; if no such code exists, an `ISO 639-3 code <https://en.wikipedia.org/wiki/List_of_ISO_639-3_codes>`_ is used instead:
======= ==================== =========== ===== ========================================================================
Available languages (2022-01-20)
-----------------------------------------------------------------------------------------------------------------------
Code    Language             Forms (10³) Acc.  Comments
======= ==================== =========== ===== ========================================================================
``ast`` Asturian             124
``bg``  Bulgarian            204
``ca``  Catalan              579
``cs``  Czech                187         0.89  on UD CS-PDT
``cy``  Welsh                360
``da``  Danish               554         0.92  on UD DA-DDT, alternative: `lemmy <https://github.com/sorenlind/lemmy>`_
``de``  German               675         0.95  on UD DE-GSD, see also `German-NLP list <https://github.com/adbar/German-NLP#Lemmatization>`_
``el``  Greek                181         0.88  on UD EL-GDT
``en``  English              131         0.94  on UD EN-GUM, alternative: `LemmInflect <https://github.com/bjascob/LemmInflect>`_
``enm`` Middle English       38
``es``  Spanish              665         0.95  on UD ES-GSD
``et``  Estonian             119               low coverage
``fa``  Persian              12                experimental
``fi``  Finnish              3,199             see `this benchmark <https://github.com/aajanki/finnish-pos-accuracy>`_
``fr``  French               217         0.94  on UD FR-GSD
``ga``  Irish                372
``gd``  Scottish Gaelic      48
``gl``  Galician             384
``gv``  Manx                 62
``hbs`` Serbo-Croatian       656               Croatian and Serbian lists to be added later
``hi``  Hindi                58                experimental
``hu``  Hungarian            458
``hy``  Armenian             246
``id``  Indonesian           17          0.91  on UD ID-CSUI
``is``  Icelandic            174
``it``  Italian              333         0.93  on UD IT-ISDT
``ka``  Georgian             65
``la``  Latin                843
``lb``  Luxembourgish        305
``lt``  Lithuanian           247
``lv``  Latvian              164
``mk``  Macedonian           56
``ms``  Malay                14
``nb``  Norwegian (Bokmål)   617
``nl``  Dutch                250         0.92  on UD NL-Alpino
``nn``  Norwegian (Nynorsk)  56
``pl``  Polish               3,211       0.91  on UD PL-PDB
``pt``  Portuguese           924         0.92  on UD PT-GSD
``ro``  Romanian             311
``ru``  Russian              595               alternative: `pymorphy2 <https://github.com/kmike/pymorphy2/>`_
``se``  Northern Sámi        113
``sk``  Slovak               818         0.92  on UD SK-SNK
``sl``  Slovene              136
``sq``  Albanian             35
``sv``  Swedish              658               alternative: `lemmy <https://github.com/sorenlind/lemmy>`_
``sw``  Swahili              10                experimental
``tl``  Tagalog              32                experimental
``tr``  Turkish              1,232       0.89  on UD TR-Boun
``uk``  Ukrainian            370               alternative: `pymorphy2 <https://github.com/kmike/pymorphy2/>`_
======= ==================== =========== ===== ========================================================================
A *low coverage* mention means that one would probably be better off with a language-specific library, but *simplemma* will still work to a limited extent. Open-source alternatives for Python are referenced where possible.

An *experimental* mention indicates that the language remains untested or that there could be issues with the underlying data or the lemmatization process.
The scores are calculated on `Universal Dependencies <https://universaldependencies.org/>`_ treebanks, on single word tokens (including some contractions but not merged prepositions); they describe to what extent simplemma can accurately map tokens to their lemma form. They can be reproduced by concatenating all available UD files and using the script ``udscore.py`` in the ``tests/`` folder.
This library is particularly relevant for the lemmatization of less frequent words, a case whose performance is only incidentally captured by the benchmark above. In some languages, a fixed number of words such as pronouns can be further mapped by hand to enhance performance.
Speed
-----
Orders of magnitude are given for reference only; they were measured on an old laptop and represent a lower bound:
- Tokenization: > 1 million tokens/sec
- Lemmatization: > 250,000 words/sec
Installing the most recent Python version can improve speed.
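To get a rough figure on your own machine, a timing sketch along these lines can be used (the word list is arbitrary; absolute numbers vary with hardware and Python version):

.. code-block:: python

    import time
    import simplemma

    # repeat a handful of English word forms to build a sizeable workload;
    # note: the first call also loads the language data
    words = ['masks', 'tested', 'worked', 'against', 'viruses'] * 50_000
    start = time.perf_counter()
    for word in words:
        simplemma.lemmatize(word, lang='en')
    print(f'{len(words) / (time.perf_counter() - start):,.0f} words/sec')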
Optional pre-compilation with `mypyc <https://github.com/mypyc/mypyc>`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. ``pip3 install mypy``
2. clone or download the source code from the repository
3. ``python3 setup.py --use-mypyc bdist_wheel``
4. ``pip3 install dist/*.whl`` (where ``*`` is the compiled wheel)
Roadmap
-------
- [-] Add further lemmatization lists
- [ ] Grammatical categories as option
- [ ] Function as a meta-package?
- [ ] Integrate optional, more complex models?
Credits and licenses
--------------------
The software is under MIT license; for the linguistic information databases, see the ``licenses`` folder.
The surface lookups (non-greedy mode) use lemmatization lists derived from various sources, ordered by relative importance:
- `Lemmatization lists <https://github.com/michmech/lemmatization-lists>`_ by Michal Měchura (Open Database License)
- Wiktionary entries packaged by the `Kaikki project <https://kaikki.org/>`_
- `FreeLing project <https://github.com/TALP-UPC/FreeLing>`_
- `spaCy lookups data <https://github.com/explosion/spacy-lookups-data>`_
- `Unimorph Project <https://unimorph.github.io/>`_
- `Wikinflection corpus <https://github.com/lenakmeth/Wikinflection-Corpus>`_ by Eleni Metheniti (CC BY 4.0 License)
Contributions
-------------
See this `list of contributors <https://github.com/adbar/simplemma/graphs/contributors>`_ to the repository.
Feel free to contribute, notably by `filing issues <https://github.com/adbar/simplemma/issues/>`_ for feedback, bug reports, or links to further lemmatization lists, rules and tests.
Contributions by pull request should follow these conventions: code style with `black <https://github.com/psf/black>`_, type hinting with `mypy <https://github.com/python/mypy>`_, and tests included with `pytest <https://pytest.org>`_.
Other solutions
---------------
See lists: `German-NLP <https://github.com/adbar/German-NLP>`_ and `other awesome-NLP lists <https://github.com/adbar/German-NLP#More-lists>`_.
For a more complex and universal approach in Python see `universal-lemmatizer <https://github.com/jmnybl/universal-lemmatizer/>`_.
References
----------
To cite this software:
.. image:: https://img.shields.io/badge/DOI-10.5281%2Fzenodo.4673264-brightgreen
   :target: https://doi.org/10.5281/zenodo.4673264
   :alt: Reference DOI: 10.5281/zenodo.4673264
Barbaresi A. (*year*). Simplemma: a simple multilingual lemmatizer for Python [Computer software] (Version *version number*). Berlin, Germany: Berlin-Brandenburg Academy of Sciences. Available from https://github.com/adbar/simplemma DOI: 10.5281/zenodo.4673264
This work draws from lexical analysis algorithms used in:
- Barbaresi, A., & Hein, K. (2017). `Data-driven identification of German phrasal compounds <https://hal.archives-ouvertes.fr/hal-01575651/document>`_. In International Conference on Text, Speech, and Dialogue, Springer, pp. 192-200.
- Barbaresi, A. (2016). `An unsupervised morphological criterion for discriminating similar languages <https://aclanthology.org/W16-4827/>`_. In 3rd Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2016), Association for Computational Linguistics, pp. 212-220.
- Barbaresi, A. (2016). `Bootstrapped OCR error detection for a less-resourced language variant <https://hal.archives-ouvertes.fr/hal-01371689/document>`_. In 13th Conference on Natural Language Processing (KONVENS 2016), pp. 21-26.