g2p-en
======

Name: g2p-en
Version: 2.1.0
Home page: https://github.com/Kyubyong/g2p
Summary: A Simple Python Module for English Grapheme To Phoneme Conversion
Upload time: 2019-12-31 01:16:12
Author: Kyubyong Park & Jongseok Kim
License: Apache Software License
Keywords: g2p, g2p_en, g2pE

g2p_en: A Simple Python Module for English Grapheme To Phoneme Conversion
==========================================================================

[Update] We have removed TensorFlow from the dependencies: its APIs change quite often, and we don't expect you to have a GPU. Inference now runs on NumPy.

This module converts English graphemes (spelling) to phonemes
(pronunciation), a step that is essential in tasks such as speech
synthesis. Unlike languages such as Spanish or German, where a word's
pronunciation can largely be inferred from its spelling, English
spelling is often a poor guide to how a word sounds. The obvious remedy
is to consult a dictionary, but this approach has at least two
shortcomings. First, a dictionary alone cannot disambiguate homographs,
words that share a spelling but have multiple pronunciations. (See ``a``
below.) Second, the word may not be in the dictionary at all. (See
``b`` below.)

-  a. I refuse to collect the refuse around here. (rɪˈfjuːz as a verb
   vs. ˈrefjuːs as a noun)

-  b. I am an activationist. (activationist: a newly coined word
   meaning ``n. A person who designs and implements programs of
   treatment or therapy that use recreation and activities to help
   people whose functional abilities are affected by illness or
   disability.`` from `WORD SPY
   <https://wordspy.com/index.php?word=activationist>`__)

Fortunately, many homographs, if not all, can be disambiguated by their
part-of-speech. For words that are not in the dictionary, however, we
have to make our best guess. In this project, that guess comes from a
neural seq2seq model (originally built with TensorFlow; as noted above,
inference now runs on NumPy).
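
The trained model ships with the package; the snippet below is only a
toy, self-contained sketch of what greedy seq2seq decoding with plain
NumPy looks like. The vocabularies, dimensions, and randomly initialized
weights are invented for illustration and bear no relation to the real
checkpoint.

::

    # Toy greedy decoding with NumPy -- illustration only, not the shipped model.
    import numpy as np

    graphemes = ["<pad>", "<s>", "</s>"] + list("abcdefghijklmnopqrstuvwxyz")
    phonemes = ["<pad>", "<s>", "</s>", "AE2", "AH0", "EY1", "IH0", "K", "N", "S", "SH", "T", "V"]

    rng = np.random.default_rng(0)
    d = 16
    E_g = rng.normal(size=(len(graphemes), d))   # grapheme embeddings
    E_p = rng.normal(size=(len(phonemes), d))    # phoneme embeddings
    W_out = rng.normal(size=(d, len(phonemes)))  # projection to phoneme logits

    def encode(word):
        # mean-pooled character embeddings stand in for the real recurrent encoder
        ids = [graphemes.index(c) for c in word.lower() if c in graphemes]
        return E_g[ids].mean(axis=0)

    def predict(word, max_len=20):
        ctx = encode(word)
        prev = phonemes.index("<s>")
        out = []
        for _ in range(max_len):
            h = np.tanh(ctx + E_p[prev])         # toy decoder step
            logits = h @ W_out
            prev = int(np.argmax(logits))
            if phonemes[prev] == "</s>":
                break
            out.append(phonemes[prev])
        return out

    print(predict("activationist"))  # untrained weights, so the output is arbitrary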

Algorithm
---------

1. Spells out Arabic numerals and some currency symbols (e.g. $200 ->
   two hundred dollars). (This step is borrowed from `Keith Ito's
   code <https://github.com/keithito/tacotron/blob/master/text/numbers.py>`__.)
2. Attempts to retrieve the correct pronunciation of homographs based
   on their POS.
3. Looks up non-homographs in `The CMU Pronouncing
   Dictionary <http://www.speech.cs.cmu.edu/cgi-bin/cmudict>`__
   (steps 2 and 3 are sketched below).
4. Predicts the pronunciation of OOV (out-of-vocabulary) words with the
   neural net model.
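
For illustration, here is a minimal sketch of steps 2 and 3 built only
from the NLTK pieces listed under Dependencies. It is not the package's
actual homograph logic, which uses a bundled homograph table and more
text normalization; it only shows the underlying lookups.

::

    # Sketch of POS tagging + CMUdict lookup (steps 2-3); illustration only.
    import nltk
    from nltk.corpus import cmudict

    nltk.download("averaged_perceptron_tagger", quiet=True)
    nltk.download("cmudict", quiet=True)

    pron = cmudict.dict()  # word -> list of candidate pronunciations
    words = "I refuse to collect the refuse around here".lower().split()

    for word, tag in nltk.pos_tag(words):
        candidates = pron.get(word)
        if candidates is None:
            print(word, "-> OOV; fall back to the neural model")
        else:
            # Homographs such as "refuse" have several entries; the package
            # picks one based on the POS tag.
            print(word, tag, candidates)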

Environment
-----------

-  python 3.x

Dependencies
------------

-  numpy >= 1.13.1
-  nltk >= 3.2.4
-  NLTK data: ``python -m nltk.downloader "averaged_perceptron_tagger" "cmudict"``
   (see also the snippet below)
-  inflect >= 0.3.1
-  Distance >= 0.1.3
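
The two NLTK data packages listed above can also be fetched from within
Python:

::

    import nltk

    nltk.download("averaged_perceptron_tagger")
    nltk.download("cmudict")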

Installation
------------

::

    pip install g2p_en

OR

::

    python setup.py install

The required NLTK data packages will be downloaded automatically on your first run.


Usage
-----

::

    from g2p_en import G2p

    texts = ["I have $250 in my pocket.", # number -> spell-out
             "popular pets, e.g. cats and dogs", # e.g. -> for example
             "I refuse to collect the refuse around here.", # homograph
             "I'm an activationist."] # newly coined word
    g2p = G2p()
    for text in texts:
        out = g2p(text)
        print(out)
    >>> ['AY1', ' ', 'HH', 'AE1', 'V', ' ', 'T', 'UW1', ' ', 'HH', 'AH1', 'N', 'D', 'R', 'AH0', 'D', ' ', 'F', 'IH1', 'F', 'T', 'IY0', ' ', 'D', 'AA1', 'L', 'ER0', 'Z', ' ', 'IH0', 'N', ' ', 'M', 'AY1', ' ', 'P', 'AA1', 'K', 'AH0', 'T', ' ', '.']
    >>> ['P', 'AA1', 'P', 'Y', 'AH0', 'L', 'ER0', ' ', 'P', 'EH1', 'T', 'S', ' ', ',', ' ', 'F', 'AO1', 'R', ' ', 'IH0', 'G', 'Z', 'AE1', 'M', 'P', 'AH0', 'L', ' ', 'K', 'AE1', 'T', 'S', ' ', 'AH0', 'N', 'D', ' ', 'D', 'AA1', 'G', 'Z']
    >>> ['AY1', ' ', 'R', 'IH0', 'F', 'Y', 'UW1', 'Z', ' ', 'T', 'UW1', ' ', 'K', 'AH0', 'L', 'EH1', 'K', 'T', ' ', 'DH', 'AH0', ' ', 'R', 'EH1', 'F', 'Y', 'UW2', 'Z', ' ', 'ER0', 'AW1', 'N', 'D', ' ', 'HH', 'IY1', 'R', ' ', '.']
    >>> ['AY1', ' ', 'AH0', 'M', ' ', 'AE1', 'N', ' ', 'AE2', 'K', 'T', 'IH0', 'V', 'EY1', 'SH', 'AH0', 'N', 'IH0', 'S', 'T', ' ', '.']
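
The call returns a flat list of ARPAbet symbols in which ``' '`` tokens
mark word boundaries. As a small usage example (this post-processing is
not part of the package API, just one way to consume the output), the
list can be turned into a readable string:

::

    from g2p_en import G2p

    g2p = G2p()
    phonemes = g2p("I have $250 in my pocket.")
    # Replace word-boundary tokens with '|' and join everything with spaces.
    print(" ".join(p if p != " " else "|" for p in phonemes))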


May, 2018.

Kyubyong Park & `Jongseok Kim <https://github.com/ozmig77>`__



            

Raw data
--------

{
    "_id": null,
    "home_page": "https://github.com/Kyubyong/g2p",
    "name": "g2p-en",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "g2p,g2p_en,g2pE",
    "author": "Kyubyong Park & Jongseok Kim",
    "author_email": "kbpark.linguist@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/5f/22/2c7acbe6164ed6cfd4301e9ad2dbde69c68d22268a0f9b5b0ee6052ed3ab/g2p_en-2.1.0.tar.gz",
    "platform": "",
    "description": "g2p\\_en: A Simple Python Module for English Grapheme To Phoneme Conversion\n==========================================================================\n\n[Update] * We removed TensorFlow from the dependencies. After all, it changes its APIs quite often, and we don't expect you to have a GPU. Instead, NumPy is used for inference.\n\nThis module is designed to convert English graphemes (spelling) to\nphonemes (pronunciation). It is considered essential in several tasks\nsuch as speech synthesis. Unlike many languages like Spanish or German\nwhere pronunciation of a word can be inferred from its spelling, English\nwords are often far from people's expectations. Therefore, it will be\nthe best idea to consult a dictionary if we want to know the\npronunciation of some word. However, there are at least two tentative\nissues in this approach. First, you can't disambiguate the pronunciation\nof homographs, words which have multiple pronunciations. (See ``a``\nbelow.) Second, you can't check if the word is not in the dictionary.\n(See ``b`` below.)\n\n-\n\n \u00a0 \\a.  I refuse to collect the refuse around here. (r\u026a\\|fju:z as verb vs. \\|refju:s as noun)\n\n-\n   \\b.  I am an activationist. (activationist: newly coined word which means ``n. A person who designs and implements programs of treatment or therapy that use recreation and activities to help people whose functional abilities are affected by illness or disability.`` from `WORD SPY <https://wordspy.com/index.php?word=activationist>`__\n\nFor the first homograph issue, fortunately many homographs can be\ndisambiguated using their part-of-speech, if not all. When it comes to\nthe words not in the dictionary, however, we should make our best guess\nusing our knowledge. In this project, we employ a deep learning seq2seq\nframework based on TensorFlow.\n\nAlgorithm\n---------\n\n1. Spells out arabic numbers and some currency symbols. (e.g. $200 ->\n   two hundred dollars) (This is borrowed from `Keith Ito's\n   code <https://github.com/keithito/tacotron/blob/master/text/numbers.py>`__)\n2. Attempts to retrieve the correct pronunciation for homographs based\n   on their POS)\n3. Looks up `The CMU Pronouncing\n   Dictionary <http://www.speech.cs.cmu.edu/cgi-bin/cmudict>`__ for\n   non-homographs.\n4. For OOVs, we predict their pronunciations using our neural net model.\n\nEnvironment\n-----------\n\n-  python 3.x\n\nDependencies\n------------\n\n-  numpy >= 1.13.1\n-  nltk >= 3.2.4\n-  python -m nltk.downloader \"averaged\\_perceptron\\_tagger\" \"cmudict\"\n-  inflect >= 0.3.1\n-  Distance >= 0.1.3\n\nInstallation\n------------\n\n::\n\n    pip install g2p_en\n\nOR\n\n::\n\n    python setup.py install\n\nnltk package will be automatically downloaded at your first run.\n\n\nUsage\n-----\n\n::\n\n    from g2p_en import G2p\n\n    texts = [\"I have $250 in my pocket.\", # number -> spell-out\n             \"popular pets, e.g. cats and dogs\", # e.g. 
-> for example\n             \"I refuse to collect the refuse around here.\", # homograph\n             \"I'm an activationist.\"] # newly coined word\n    g2p = G2p()\n    for text in texts:\n        out = g2p(text)\n        print(out)\n    >>> ['AY1', ' ', 'HH', 'AE1', 'V', ' ', 'T', 'UW1', ' ', 'HH', 'AH1', 'N', 'D', 'R', 'AH0', 'D', ' ', 'F', 'IH1', 'F', 'T', 'IY0', ' ', 'D', 'AA1', 'L', 'ER0', 'Z', ' ', 'IH0', 'N', ' ', 'M', 'AY1', ' ', 'P', 'AA1', 'K', 'AH0', 'T', ' ', '.']\n    >>> ['P', 'AA1', 'P', 'Y', 'AH0', 'L', 'ER0', ' ', 'P', 'EH1', 'T', 'S', ' ', ',', ' ', 'F', 'AO1', 'R', ' ', 'IH0', 'G', 'Z', 'AE1', 'M', 'P', 'AH0', 'L', ' ', 'K', 'AE1', 'T', 'S', ' ', 'AH0', 'N', 'D', ' ', 'D', 'AA1', 'G', 'Z']\n    >>> ['AY1', ' ', 'R', 'IH0', 'F', 'Y', 'UW1', 'Z', ' ', 'T', 'UW1', ' ', 'K', 'AH0', 'L', 'EH1', 'K', 'T', ' ', 'DH', 'AH0', ' ', 'R', 'EH1', 'F', 'Y', 'UW2', 'Z', ' ', 'ER0', 'AW1', 'N', 'D', ' ', 'HH', 'IY1', 'R', ' ', '.']\n    >>> ['AY1', ' ', 'AH0', 'M', ' ', 'AE1', 'N', ' ', 'AE2', 'K', 'T', 'IH0', 'V', 'EY1', 'SH', 'AH0', 'N', 'IH0', 'S', 'T', ' ', '.']\n\n\nMay, 2018.\n\nKyubyong Park & `Jongseok Kim <https://github.com/ozmig77>`__\n\n\n",
    "bugtrack_url": null,
    "license": "Apache Software License",
    "summary": "A Simple Python Module for English Grapheme To Phoneme Conversion",
    "version": "2.1.0",
    "project_urls": {
        "Download": "https://github.com/Kyubyong/g2p/archive/1.0.0.tar.gz",
        "Homepage": "https://github.com/Kyubyong/g2p"
    },
    "split_keywords": [
        "g2p",
        "g2p_en",
        "g2pe"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "d7d9b77dc634a7a0c0c97716ba97dd0a28cbfa6267c96f359c4f27ed71cbd284",
                "md5": "c3a482f7940df3d620e0f172a33981c9",
                "sha256": "2a7aabf1fc7f270fcc3349881407988c9245173c2413debbe5432f4a4f31319f"
            },
            "downloads": -1,
            "filename": "g2p_en-2.1.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "c3a482f7940df3d620e0f172a33981c9",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 3117464,
            "upload_time": "2019-12-31T01:16:03",
            "upload_time_iso_8601": "2019-12-31T01:16:03.286213Z",
            "url": "https://files.pythonhosted.org/packages/d7/d9/b77dc634a7a0c0c97716ba97dd0a28cbfa6267c96f359c4f27ed71cbd284/g2p_en-2.1.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "5f222c7acbe6164ed6cfd4301e9ad2dbde69c68d22268a0f9b5b0ee6052ed3ab",
                "md5": "a2472d72e09d266a3d725a3bf839e5b6",
                "sha256": "32ecb119827a3b10ea8c1197276f4ea4f44070ae56cbbd01f0f261875f556a58"
            },
            "downloads": -1,
            "filename": "g2p_en-2.1.0.tar.gz",
            "has_sig": false,
            "md5_digest": "a2472d72e09d266a3d725a3bf839e5b6",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 3116166,
            "upload_time": "2019-12-31T01:16:12",
            "upload_time_iso_8601": "2019-12-31T01:16:12.753157Z",
            "url": "https://files.pythonhosted.org/packages/5f/22/2c7acbe6164ed6cfd4301e9ad2dbde69c68d22268a0f9b5b0ee6052ed3ab/g2p_en-2.1.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2019-12-31 01:16:12",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Kyubyong",
    "github_project": "g2p",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "g2p-en"
}
        