Readability
===========
An implementation of traditional readability measures based on simple surface
characteristics. These measures are basically linear regressions based on the
number of words, syllables, and sentences.
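
For example, the Flesch Reading Ease score is a fixed linear combination of
the average sentence length and the average number of syllables per word
(higher scores indicate easier text). A minimal sketch of that formula,
shown only to illustrate the general shape of these measures::

    def flesch_reading_ease(words, syllables, sentences):
        """Flesch Reading Ease; higher scores mean easier text."""
        return (206.835
                - 1.015 * (words / sentences)
                - 84.6 * (syllables / words))
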
The functionality is modeled after the UNIX ``style(1)`` command. Compared to the
implementation as part of `GNU diction <http://www.moria.de/~michael/diction/>`_,
this version supports UTF-8 encoded text, but expects sentence-segmented and
tokenized text. Syllabification and word type recognition are based on
simple heuristics and provide only a rough measure. The supported languages
are English, German, and Dutch. Adding support for a new language involves the
addition of heuristics for the aforementioned syllabification and word type
recognition; see ``langdata.py``.
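
To give an idea of the kind of heuristic involved (this is an illustration,
not the actual code in ``langdata.py``), English syllable counts are commonly
approximated by counting runs of consecutive vowels::

    import re

    def count_syllables(word):
        """Rough syllable count: number of vowel groups, at least 1.

        Over- and undercounts for e.g. silent 'e' and diphthongs."""
        return max(1, len(re.findall(r'[aeiouy]+', word.lower())))
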
NB: all readability formulas were developed for English, so the scales of
their outcomes are only meaningful for English texts. The Dale-Chall measure
uses the original word list for English; for Dutch and German, it falls back
on lists of frequent words that were not specifically selected for
recognizability by schoolchildren.
Installation
------------
::
$ pip install https://github.com/andreasvc/readability/tarball/master
Usage
-----
From Python::
>>> import readability
>>> text = ('This is an example sentence .\n'
...         'Note that tokens are separated by spaces and sentences by newlines .\n')
>>> results = readability.getmeasures(text, lang='en')
>>> print(results['readability grades']['FleschReadingEase'])
55.95250000000002
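
The result is a nested mapping from category names (as in the command line
output shown below) to individual measures, so all values can be iterated;
a sketch assuming the dictionary structure implied by the example above::

    >>> for category, measures in results.items():
    ...     for name, value in measures.items():
    ...         print('%s / %s: %s' % (category, name, value))
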
Command line usage::
$ readability --help
Simple readability measures.
Usage: readability [--lang=<x>] [FILE]
or: readability [--lang=<x>] --csv FILES...
By default, input is read from standard input.
Text should be encoded with UTF-8,
one sentence per line, tokens space-separated.
Options:
-L, --lang=<x> Set language (available: de, nl, en).
--csv Produce a table in comma separated value format on
standard output given one or more filenames.
--tokenizer=<x> Specify a tokenizer including options that will be given
each text on stdin and should return tokenized output on
stdout. Not applicable when reading from stdin.
For proper results, the text should be tokenized.
- For English, I recommend "tokenizer",
cf. http://moin.delph-in.net/WeSearch/DocumentParsing
- For Dutch, I recommend the tokenizer that is part of the Alpino parser:
http://www.let.rug.nl/vannoord/alp/Alpino/.
- ``ucto`` is a general multilingual tokenizer: http://ilk.uvt.nl/ucto
Example using ``ucto``::
$ ucto -L en -n -s '' "CONRAD, Joseph - Lord Jim.txt" | readability
[...]
readability grades:
Kincaid: 5.44
ARI: 6.39
Coleman-Liau: 6.91
FleschReadingEase: 85.17
GunningFogIndex: 9.86
LIX: 31.98
SMOGIndex: 9.39
RIX: 2.56
DaleChallIndex: 8.02
sentence info:
characters_per_word: 4.17
syll_per_word: 1.24
words_per_sentence: 16.35
sentences_per_paragraph: 11.5
type_token_ratio: 0.09
characters: 551335
syllables: 164205
words: 132211
wordtypes: 12071
sentences: 8087
paragraphs: 703
long_words: 20670
complex_words: 10990
complex_words_dc: 29908
word usage:
tobeverb: 3907
auxverb: 1630
conjunction: 4398
pronoun: 18092
preposition: 19290
nominalization: 1167
sentence beginnings:
pronoun: 2578
interrogative: 217
article: 629
subordination: 120
conjunction: 236
preposition: 397
The option ``--csv`` collects readability measures for a number of texts in
a table. To tokenize documents on the fly when using this option, use
the ``--tokenizer`` option. Example with the "tokenizer" tool::
$ readability --csv --tokenizer='tokenizer -L en-u8 -P -S -E "" -N' */*.txt >readabilitymeasures.csv
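
The resulting file is an ordinary CSV table, presumably with one row per
input file, so it can be inspected with any CSV reader; a minimal sketch
using pandas (assuming it is installed)::

    import pandas

    # Load the per-document table written by ``readability --csv``.
    df = pandas.read_csv('readabilitymeasures.csv')
    print(df.columns)  # inspect which measures were collected
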
References
----------
The following readability metrics are included:
1. http://en.wikipedia.org/wiki/Automated_Readability_Index
2. http://en.wikipedia.org/wiki/SMOG
3. http://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_Grade_Level#Flesch.E2.80.93Kincaid_Grade_Level
4. http://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readability_test#Flesch_Reading_Ease
5. http://en.wikipedia.org/wiki/Coleman-Liau_Index
6. http://en.wikipedia.org/wiki/Gunning-Fog_Index
7. https://en.wikipedia.org/wiki/Dale%E2%80%93Chall_readability_formula
For better readability measures, consider the following:
- Collins-Thompson & Callan (2004). A language modeling approach to predicting reading difficulty.
In Proc. of HLT/NAACL, pp. 193-200. http://aclweb.org/anthology/N04-1025.pdf
- Schwarm & Ostendorf (2005). Reading level assessment using SVM and statistical language models.
Proc. of ACL, pp. 523-530. http://www.aclweb.org/anthology/P05-1065.pdf
- The Lexile framework for reading. http://www.lexile.com
- Coh-Metrix. http://cohmetrix.memphis.edu/
- Stylene: http://www.clips.ua.ac.be/category/projects/stylene
- T-Scan: http://languagelink.let.uu.nl/tscan
Acknowledgments
---------------
The code is based on https://github.com/mmautner/readability,
which in turn was based on
https://github.com/nltk/nltk_contrib/tree/master/nltk_contrib/readability.