Distance

Name: Distance
Version: 0.1.3
Summary: Utilities for comparing sequences
Home page: https://github.com/doukremt/distance
Author: Michaël Meyer
Upload time: 2013-11-21 00:14:34
License: UNKNOWN
distance - Utilities for comparing sequences
============================================

This package provides helpers for computing similarities between arbitrary sequences. Included metrics are Levenshtein, Hamming, Jaccard, and Sorensen distance, plus some bonuses. All distance computations are implemented in pure Python, and most of them are also implemented in C.


Installation
------------

If you don't want or need to use the C extension, just unpack the archive and run, as root:

	# python setup.py install

For the C extension to work, you need the Python development headers and a C compiler (typically Microsoft Visual C++ 2010 on Windows, and GCC on Mac and Linux). On a Debian-like system, you can get all of these with:

	# apt-get install gcc pythonX.X-dev

where X.X is your Python version number.

Then you should type:

	# python setup.py install --with-c

Note the use of the `--with-c` switch.
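
To check that everything works, a minimal smoke test (by definition, identical strings should be at distance 0):

	>>> import distance
	>>> distance.levenshtein("abc", "abc")
	0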


Usage
-----
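
All the examples below assume that the package has been imported first:

	>>> import distance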

A common use case for this module is to compare single words for similarity:

	>>> distance.levenshtein("lenvestein", "levenshtein")
	3
	>>> distance.hamming("hamming", "hamning")
	1

If there is not a one-to-one mapping between sounds and glyphs in your language, or if you want to compare not glyphs but syllables or phonemes, you can pass in tuples of strings:

	>>> t1 = ("de", "ci", "si", "ve")
	>>> t2 = ("de", "ri", "si", "ve")
	>>> distance.levenshtein(t1, t2)
	1

Comparing lists of strings can also be useful for computing similarities between sentences, paragraphs, etc.:

	>>> sent1 = ['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']
	>>> sent2 = ['the', 'lazy', 'fox', 'jumps', 'over', 'the', 'crazy', 'dog']
	>>> distance.levenshtein(sent1, sent2)
	3

Hamming and Levenshtein distances can be normalized, so that the results of several distance measures can be meaningfully compared. Two strategies are available for Levenshtein: the edit distance is divided either by the length of the shortest alignment between the sequences, or by the length of the longest one. Example uses:

	>>> distance.hamming("fat", "cat", normalized=True)
	0.3333333333333333
	>>> distance.nlevenshtein("abc", "acd", method=1)  # shortest alignment
	0.6666666666666666
	>>> distance.nlevenshtein("abc", "acd", method=2)  # longest alignment
	0.5

`jaccard` and `sorensen` return a normalized value by default:

	>>> distance.sorensen("decide", "resize")
	0.5555555555555556
	>>> distance.jaccard("decide", "resize")
	0.7142857142857143
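
Both metrics treat each sequence as a set of its elements. As a sanity check, the values above can be reproduced by hand from the standard set-based definitions (a sketch, not necessarily the package's exact internals, though the numbers agree):

	>>> s1, s2 = set("decide"), set("resize")
	>>> 1 - 2.0 * len(s1 & s2) / (len(s1) + len(s2))  # Sorensen
	0.5555555555555556
	>>> 1 - float(len(s1 & s2)) / len(s1 | s2)  # Jaccard
	0.7142857142857143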

As for the bonuses, there is a `fast_comp` function, which computes the distance between two strings up to a maximum value of 2 (inclusive). If the distance between the strings is higher than that, -1 is returned. This function is of limited use, but on the other hand it is quite a bit faster than `levenshtein`. There is also a `lcsubstrings` function which can be used to find the longest common substrings in two sequences.
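
A short illustration of the `fast_comp` cutoff (these values follow from the `ifast_comp` example below: "fooba" is at distance 2 from "foo", while "foobar" is beyond the threshold):

	>>> distance.fast_comp("foo", "fooba")
	2
	>>> distance.fast_comp("foo", "foobar")
	-1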

Finally, two convenience iterators, `ilevenshtein` and `ifast_comp`, are provided. They are intended for filtering, out of a long list of sequences, the ones that are close to a reference one. They both yield a series of `(distance, sequence)` tuples. Example:

	>>> tokens = ["fo", "bar", "foob", "foo", "fooba", "foobar"]
	>>> sorted(distance.ifast_comp("foo", tokens))
	[(0, 'foo'), (1, 'fo'), (1, 'foob'), (2, 'fooba')]
	>>> sorted(distance.ilevenshtein("foo", tokens, max_dist=1))
	[(0, 'foo'), (1, 'fo'), (1, 'foob')]

`ifast_comp` is particularly efficient, and can handle 1 million tokens without a problem.
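
As a sketch of a typical pattern, the closest candidate in a vocabulary can be picked by taking the minimum over the yielded `(distance, sequence)` tuples (the word list here is purely illustrative):

	>>> words = ["levenshtein", "hamming", "jaccard", "sorensen"]
	>>> min(distance.ifast_comp("hamning", words))
	(1, 'hamming')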

For more information, see the documentation of each function (`help(funcname)`).

Have fun!


Changelog
---------

20/11/13:
* Switched back to using the to-be-deprecated Python Unicode API. The good news is that this makes
the C extension compatible with Python 2.7+, and that distance computations on unicode strings are
now much faster.
* Added a C version of `lcsubstrings`.
* Added a new method for computing normalized Levenshtein distance.
* Added some tests.

12/11/13:
Expanded `fast_comp` (formerly `quick_levenshtein`) so that it can handle transpositions.
Fixed swapped variables in the C `levenshtein` which sometimes produced strange results.

10/11/13:
Added `quick_levenshtein` and `iquick_levenshtein`.

05/11/13:
Added Sorensen and Jaccard metrics, fixed memory issue in Levenshtein.
            
