# wordhoard

- **Name:** wordhoard
- **Version:** 1.5.5
- **Home page:** https://github.com/johnbumgarner/wordhoard
- **Summary:** A comprehensive lexical discovery application for finding semantic relationships such as antonyms, synonyms, hypernyms, hyponyms, homophones, and definitions for a specific word.
- **Upload time:** 2024-05-29 11:18:12
- **Author:** John Bumgarner
- **Requires Python:** >=3.8
- **License:** MIT (LICENSE.txt)
- **Keywords:** antonyms, bag of words, definitions, hypernyms, hyponyms, homophones, information retrieval, lexicon, semantic relationships, synonyms, natural language processing
- **Requirements:** backoff, beautifulsoup4, certifi, charset-normalizer, cloudscraper, deckar01-ratelimit, deepl, idna, lxml, pyparsing, requests, requests-toolbelt, soupsieve, urllib3
            # Primary Use Case
<p align="justify"> 
Textual analysis is a broad term for various research methodologies used to qualitatively describe, interpret and understand text data. These methodologies are mainly used in academic research to analyze content related to media and communication studies, popular culture, sociology, and philosophy. Textual analysis allows these researchers to quickly obtain relevant insights from unstructured data. All types of information can be gleaned from textual data, especially from social media posts or news articles. Some of this information includes the overall concept of the subtext, symbolism within the text, assumptions being made and potential relative value to a subject (e.g. data science). In some cases it is possible to deduce the relative historical and cultural context of a body of text using analysis techniques coupled with knowledge from different disciplines, like linguistics and semiotics.
   
Word frequency is a textual-analysis technique that measures how often a specific word or word grouping occurs within unstructured data. Measuring the number of word occurrences in a corpus allows a researcher to garner interesting insights about the text. A related measure is the correlation between a given word and its antonyms and synonyms within the specific corpus being analyzed. Knowing these relationships is critical to improving word-frequency counts and topic modeling.

<strong>Wordhoard</strong> was designed to assist researchers performing textual analysis in building more comprehensive lists of antonyms, synonyms, hypernyms, hyponyms and homophones.
</p>
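To make the word-frequency idea concrete, here is a minimal sketch (plain Python standard library, independent of this package) that counts word occurrences and folds a small, hypothetical synonym mapping into a single tally — in practice such a mapping could be built with a tool like <strong>Wordhoard</strong>:

```python
import re
from collections import Counter

def word_frequencies(text: str) -> Counter:
    """Count occurrences of each lower-cased word in a text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# A hypothetical synonym mapping for illustration only.
SYNONYMS = {"large": "big", "huge": "big"}

def merged_frequencies(text: str) -> Counter:
    """Collapse synonyms onto a canonical form before counting."""
    counts = Counter()
    for word, n in word_frequencies(text).items():
        counts[SYNONYMS.get(word, word)] += n
    return counts

text = "The big dog chased a large cat past a huge tree."
print(merged_frequencies(text)["big"])  # -> 3 once synonyms are merged
```

Merging synonym counts like this is why richer synonym lists directly improve frequency-based analyses.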

# Installation

<p align="justify"> 
   Install the distribution via pip:
</p>

```shell
pip3 install wordhoard
```

# General Package Utilization

<p align="justify">
Please reference the <a href="https://wordhoard.readthedocs.io/en/latest" target="_blank">WordHoard Documentation</a> for package usage guidance and parameters.
</p>
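As a quick orientation, a synonym lookup follows the pattern below. This is a hedged sketch only: the `Synonyms` class, the `search_string` parameter, and the `find_synonyms` method are taken from the documentation and should be verified there, and real queries require network access to the online sources listed under Sources.

```python
# Hedged usage sketch: confirm class and method names against the
# WordHoard documentation before relying on them.
try:
    from wordhoard import Synonyms
except ImportError:  # wordhoard not installed in this environment
    Synonyms = None

if Synonyms is not None:
    lookup = Synonyms(search_string="mother")
    results = lookup.find_synonyms()  # typically a list of synonym strings
    print(results)
```

The other lookups (antonyms, hypernyms, hyponyms, homophones, definitions) follow the same class-per-relationship pattern per the documentation.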

# Sources

<p align="justify">
This package is designed to query these online sources for antonyms, synonyms, hypernyms, hyponyms and definitions:

1. classicthesaurus.com
2. collinsdictionary.com
3. merriam-webster.com
4. synonym.com
5. thesaurus.com
6. wordhippo.com
7. wordnet.princeton.edu
</p>
  
# Dependencies

<p align="justify">
This package has these core dependencies:
  
1. backoff
2. BeautifulSoup
3. cloudscraper
4. deckar01-ratelimit
5. deepl
6. lxml
7. requests
8. urllib3

</p>

<p align="justify">
Additional details on this package's dependencies can be found <a href="https://wordhoard.readthedocs.io/en/latest/dependencies" target="_blank">here</a>.
</p>

# Development Roadmap

<p align="justify">
If you would like to contribute to the <strong>Wordhoard</strong> project, please read the <a href="https://wordhoard.readthedocs.io/en/latest/contributing" target="_blank">contributing guidelines</a>.
   
Items currently under development:
   - Expanding the list of hypernyms, hyponyms and homophones
   - Adding part-of-speech filters in queries 
</p>

# Issues

<p align="justify">
This repository is actively maintained. Feel free to open issues related to bugs, coding errors, broken links or enhancements.

You can also contact me, [John Bumgarner](mailto:wordhoardproject@gmail.com?subject=[GitHub]%20wordhoard%20project%20request), with any issues or enhancement requests.
</p>


# Sponsorship
   
If you would like to contribute financially to the development and maintenance of the <strong>Wordhoard</strong> project, please read the <a href="https://github.com/johnbumgarner/wordhoard/blob/master/SPONSOR.md">sponsorship information</a>.

# License

<p align="justify">
The MIT License (MIT).  Please see <a href="https://wordhoard.readthedocs.io/en/latest/license" target="_blank">License File</a> for more information.
</p>


# Author

<p align="justify">
   Copyright (c) 2021 John Bumgarner 
</p>

            
