# corpus_similarity
Measure the similarity between two corpora (text datasets). The measures work best when each corpus is at least 10k words. This package supports 74 languages.
    from corpus_similarity import Similarity
    cs = Similarity(language = "eng")

    result = cs.calculate(corpus1, corpus2)
All preprocessing and training are handled by the package; only the language needs to be specified. A list of supported languages is provided below.
# Input
The **Similarity.calculate** method requires two input corpora. Each corpus can be a list of strings or a filename (.txt and .gz files are supported).
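For example, a minimal sketch of both input forms (the corpus contents and file names below are placeholders):

    from corpus_similarity import Similarity

    cs = Similarity(language = "eng")

    # Corpora as lists of strings
    corpus1 = ["A first document in the first corpus.", "Another document."]
    corpus2 = ["A first document in the second corpus.", "A further document."]
    result = cs.calculate(corpus1, corpus2)

    # Corpora as filenames (.txt or .gz); these paths are placeholders
    result = cs.calculate("corpus1.txt", "corpus2.txt.gz")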
# Output
The output is a scalar measure of how similar the two corpora are. The values fall between 0 (very different) and 1 (very similar). The values are consistent within a language but not across languages: for example, comparable corpus pairs in Swedish receive higher similarity values than in Estonian.
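Because scores are only comparable within a language, two English results can be compared directly; for instance (the corpus variables below are placeholders):

    # Both pairs are English, so the two scores can be compared directly
    score_same_register = cs.calculate(news_corpus_a, news_corpus_b)
    score_cross_register = cs.calculate(news_corpus_a, tweet_corpus)

    # A higher value means the two corpora are more alike
    print(score_same_register > score_cross_register)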
# Installation
    pip install corpus_similarity

    pip install git+https://github.com/jonathandunn/corpus_similarity.git
# How It Works
The corpus similarity measure is a simple character n-gram comparison, with the best performance obtained by using Spearman's rho as the measure of correlation. The original idea for this kind of corpus comparison comes from Adam Kilgarriff (https://kilgarriff.co.uk/Publications/2001-K-CompCorpIJCL.pdf).
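The package's own implementation is more involved, but the core idea can be sketched as follows: build character n-gram frequency profiles for each corpus and correlate them with Spearman's rho. The n-gram size and vocabulary cutoff below are illustrative only, and the raw rho is not scaled to the package's 0-1 range:

    from collections import Counter
    from scipy.stats import spearmanr

    def char_ngrams(text, n=4):
        # Character n-gram frequency profile for one corpus (a single string)
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def ngram_similarity(corpus_a, corpus_b, n=4, top_k=5000):
        freq_a = char_ngrams(corpus_a, n)
        freq_b = char_ngrams(corpus_b, n)
        # Compare only the most frequent n-grams across both corpora
        vocab = [gram for gram, _ in (freq_a + freq_b).most_common(top_k)]
        counts_a = [freq_a.get(gram, 0) for gram in vocab]
        counts_b = [freq_b.get(gram, 0) for gram in vocab]
        # Spearman's rho on the two frequency vectors; higher = more similar
        rho, _ = spearmanr(counts_a, counts_b)
        return rho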
Recent work in *Lingua* has evaluated the measures used in this package extensively in a multilingual setting (https://arxiv.org/abs/2206.04332). These measures have since been used to model the relationship between registers in a multilingual setting (https://arxiv.org/abs/2209.09813) and to validate geo-referenced corpus collections (https://arxiv.org/abs/2104.01294). Other work has modelled the relationship between corpus similarity (upstream) and embedding similarity (downstream) (https://arxiv.org/abs/2206.04330). These papers provide further details on the theory and evaluation behind this package.
# Languages
amh, Amharic
ara, Arabic
aze, Azerbaijani
ben, Bengali
bul, Bulgarian
cat, Catalan
ceb, Cebuano
ces, Czech
cha, Chamorro
dan, Danish
deu, German
ell, Greek
eng, English
est, Estonian
eus, Basque
fas, Farsi
fij, Fijian
fin, Finnish
fra, French
gle, Gaelic
glg, Galician
guj, Gujarati
hat, Haitian
haw, Hawaiian
heb, Hebrew
hin, Hindi
hmo, Hiri Motu
hun, Hungarian
ilo, Ilocano
ind, Indonesian
isl, Icelandic
ita, Italian
jav, Javanese
jpn, Japanese
kan, Kannada
kat, Georgian
kor, Korean
lav, Latvian
lit, Lithuanian
mal, Malayalam
mar, Marathi
mkd, Macedonian
mlg, Malagasy
mon, Mongolian
mri, te reo Maori
msa, Malay
nld, Dutch
nor, Norwegian
pan, Punjabi
pol, Polish
por, Portuguese
ron, Romanian
rus, Russian
sin, Sinhala
slk, Slovak
slv, Slovenian
smo, Samoan
som, Somali
spa, Spanish
sqi, Albanian
swe, Swedish
tah, Tahitian
tam, Tamil
tel, Telugu
tgl, Tagalog
tha, Thai
ton, Tongan
tur, Turkish
tvl, Tuvaluan
ukr, Ukrainian
urd, Urdu
uzb, Uzbek
vie, Vietnamese
zho, Chinese