:Package: wordsegment
:Version: 1.3.1
:Summary: English word segmentation.
:Author: Grant Jenks
:License: Apache 2.0
:Home page: http://www.grantjenks.com/docs/wordsegment/
:Uploaded: 2018-07-07 03:51:29

Python Word Segmentation
========================

`WordSegment`_ is an Apache2-licensed module for English word
segmentation, written in pure Python and based on a trillion-word corpus.

It is based on code from the chapter "`Natural Language Corpus Data`_" by Peter
Norvig in the book "`Beautiful Data`_" (Segaran and Hammerbacher, 2009).

Data files are derived from the `Google Web Trillion Word Corpus`_, as
described by Thorsten Brants and Alex Franz, and `distributed`_ by the
Linguistic Data Consortium. This module contains only a subset of that
data. The unigram data includes only the most common 333,000 words, and the
bigram data includes only the most common 250,000 phrases. Every word and
phrase is lowercased with punctuation removed.

.. _`WordSegment`: http://www.grantjenks.com/docs/wordsegment/
.. _`Natural Language Corpus Data`: http://norvig.com/ngrams/
.. _`Beautiful Data`: http://oreilly.com/catalog/9780596157111/
.. _`Google Web Trillion Word Corpus`: http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
.. _`distributed`: https://catalog.ldc.upenn.edu/LDC2006T13

Features
--------

- Pure-Python
- Fully documented
- 100% Test Coverage
- Includes unigram and bigram data
- Command line interface for batch processing
- Easy to hack (e.g. different scoring, new data, different language)
- Developed on Python 2.7
- Tested on CPython 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6 and PyPy, PyPy3
- Tested on Windows, Mac OS X, and Linux
- Tested using Travis CI and AppVeyor CI

.. image:: https://api.travis-ci.org/grantjenks/python-wordsegment.svg
    :target: http://www.grantjenks.com/docs/wordsegment/

.. image:: https://ci.appveyor.com/api/projects/status/github/grantjenks/python-wordsegment?branch=master&svg=true
    :target: http://www.grantjenks.com/docs/wordsegment/

Quickstart
----------

Installing `WordSegment`_ is simple with
`pip <http://www.pip-installer.org/>`_::

    $ pip install wordsegment

You can access documentation in the interpreter with Python's built-in help
function::

    >>> import wordsegment
    >>> help(wordsegment)

Tutorial
--------

In your own Python programs, you'll mostly want to use `segment` to divide a
phrase into a list of its parts::

    >>> from wordsegment import load, segment
    >>> load()
    >>> segment('thisisatest')
    ['this', 'is', 'a', 'test']

The `load` function reads and parses the unigram and bigram data from
disk. Loading the data only needs to be done once.
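
A minimal sketch of that pattern: call `load` once at startup, then reuse the
parsed data for every call. The second phrase here is only an illustration. ::

    from wordsegment import load, segment

    load()  # parse the bundled unigram and bigram data once per process

    for phrase in ['thisisatest', 'wheninthecourseofhumanevents']:
        print(' '.join(segment(phrase)))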

`WordSegment`_ also provides a command-line interface for batch
processing. This interface accepts two arguments: in-file and out-file. Lines
from in-file are iteratively segmented, joined by a space, and written to
out-file. Input and output default to stdin and stdout, respectively. ::

    $ echo thisisatest | python -m wordsegment
    this is a test
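
The file arguments work the same way. For example, with the hypothetical file
names ``infile.txt`` and ``outfile.txt``::

    $ python -m wordsegment infile.txt outfile.txt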

If you want to run `WordSegment`_ as a kind of server process, use Python's
``-u`` option for unbuffered output. You can also set ``PYTHONUNBUFFERED=1`` in
the environment. The example below uses Python 2 string semantics; a Python 3
variant follows it. ::

    >>> import subprocess as sp
    >>> wordsegment = sp.Popen(
    ...     ['python', '-um', 'wordsegment'],
    ...     stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.STDOUT)
    >>> wordsegment.stdin.write('thisisatest\n')
    >>> wordsegment.stdout.readline()
    'this is a test\n'
    >>> wordsegment.stdin.write('workswithotherlanguages\n')
    >>> wordsegment.stdout.readline()
    'works with other languages\n'
    >>> wordsegment.stdin.close()
    >>> wordsegment.wait()  # Process exit code.
    0
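
On Python 3 the pipes carry ``bytes`` unless you opt into text mode, so the
same pattern needs small adjustments. A minimal sketch, assuming text-mode
pipes via ``universal_newlines=True``::

    import subprocess as sp

    # Text mode lets us write str on Python 3; bufsize=1 requests line
    # buffering on the parent's side of the pipe.
    proc = sp.Popen(
        ['python', '-um', 'wordsegment'],
        stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.STDOUT,
        universal_newlines=True, bufsize=1)

    proc.stdin.write('thisisatest\n')
    proc.stdin.flush()  # ensure the line reaches the child process
    print(proc.stdout.readline(), end='')  # prints 'this is a test'
    proc.stdin.close()
    proc.wait()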

The maximum segmented word length is 24 characters. Neither the unigram nor
bigram data contain words exceeding that length. The corpus also excludes
punctuation, and all letters have been lowercased. Before segmenting text,
`clean` is called to transform the input to a canonical form::

    >>> from wordsegment import clean
    >>> clean('She said, "Python rocks!"')
    'shesaidpythonrocks'
    >>> segment('She said, "Python rocks!"')
    ['she', 'said', 'python', 'rocks']

Sometimes it's interesting to explore the unigram and bigram counts
themselves. These are stored in Python dictionaries that map word to count. ::

    >>> import wordsegment as ws
    >>> ws.load()
    >>> ws.UNIGRAMS['the']
    23135851162.0
    >>> ws.UNIGRAMS['gray']
    21424658.0
    >>> ws.UNIGRAMS['grey']
    18276942.0

Above we see that the spelling `gray` is more common than the spelling `grey`.
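
The comparison can be made explicit by taking the ratio of the two counts shown
above::

    >>> round(ws.UNIGRAMS['gray'] / ws.UNIGRAMS['grey'], 2)
    1.17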

Bigrams are joined by a space::

    >>> import heapq
    >>> from pprint import pprint
    >>> from operator import itemgetter
    >>> pprint(heapq.nlargest(10, ws.BIGRAMS.items(), itemgetter(1)))
    [('of the', 2766332391.0),
     ('in the', 1628795324.0),
     ('to the', 1139248999.0),
     ('on the', 800328815.0),
     ('for the', 692874802.0),
     ('and the', 629726893.0),
     ('to be', 505148997.0),
     ('is a', 476718990.0),
     ('with the', 461331348.0),
     ('from the', 428303219.0)]

Some bigrams begin with `<s>`, a marker that denotes the start of a sentence::

    >>> ws.BIGRAMS['<s> where']
    15419048.0
    >>> ws.BIGRAMS['<s> what']
    11779290.0
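
A quick way to explore the sentence-start counts is to filter the bigram keys.
A sketch, continuing the ``import wordsegment as ws`` session above; the
results depend on the full bundled data, so output is not shown here::

    import heapq
    from operator import itemgetter

    # Collect bigrams that begin at a sentence boundary and rank them.
    starts = {k: v for k, v in ws.BIGRAMS.items() if k.startswith('<s> ')}
    top10 = heapq.nlargest(10, starts.items(), key=itemgetter(1))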

The unigram and bigram data are stored in the `wordsegment` directory in
the `unigrams.txt` and `bigrams.txt` files, respectively.
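
Those files are plain text. Assuming each line holds a tab-separated word and
count (the layout of Norvig's count files; verify this against your copy), they
can be parsed directly::

    import io
    import os
    import wordsegment

    # Locate unigrams.txt inside the installed package directory.
    path = os.path.join(os.path.dirname(wordsegment.__file__), 'unigrams.txt')

    counts = {}
    with io.open(path, encoding='utf-8') as reader:
        for line in reader:
            word, _, count = line.partition('\t')  # assumed tab-separated
            counts[word] = float(count)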

User Guide
----------

* `Word Segment API Reference`_
* `Using a Different Corpus`_
* `Python: Load dict Fast From File`_

.. _`Word Segment API Reference`: http://www.grantjenks.com/docs/wordsegment/api.html
.. _`Using a Different Corpus`: http://www.grantjenks.com/docs/wordsegment/using-a-different-corpus.html
.. _`Python: Load dict Fast From File`: http://www.grantjenks.com/docs/wordsegment/python-load-dict-fast-from-file.html

References
----------

* `WordSegment Documentation`_
* `WordSegment at PyPI`_
* `WordSegment at Github`_
* `WordSegment Issue Tracker`_

.. _`WordSegment Documentation`: http://www.grantjenks.com/docs/wordsegment/
.. _`WordSegment at PyPI`: https://pypi.python.org/pypi/wordsegment
.. _`WordSegment at Github`: https://github.com/grantjenks/python-wordsegment
.. _`WordSegment Issue Tracker`: https://github.com/grantjenks/python-wordsegment/issues

WordSegment License
-------------------

Copyright 2018 Grant Jenks

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.



            
