lexio

Name: lexio
Version: 0.0.3 (PyPI)
Summary: lexIO is a Natural Language Processing (NLP) library in Python, built on top of the NumPy library.
Author: Rijul Dhungana
Requires-Python: >=3.7
Keywords: lexio, nlp, language
Upload time: 2023-11-16 09:29:05
Requirements: none recorded.

# lexIO

lexIO is a Natural Language Processing (NLP) library in Python, built on top of the NumPy library.

lexIO lets you perform basic NLP tasks such as guessing the topic of an essay, finding its most frequently used words, and removing stopwords, with more to come in the future.

**Installing lexIO** 

*Using pip*
```
pip install lexio
```

or 

```
pip3 install lexio
```

*Using conda*
```
conda install lexio
```

**Using lexIO**

* Import lexIO 
```
import lexio
```

* Create a 'language_processor' object
```
processor = lexio.language_processor()
```

* Load the text, either your own or one of the samples in `lexio.datasets`

*Your text*

```
text = "lexIO is a Natural Language Processing (NLP) library in python"
```
Now you can get the topic:

```
processor.get_topic(text)
# output: lexio
```

Note: the output is automatically lowercased by the processor.

The 'get_topic' function tokenizes the text automatically, but if you want to do it manually, you can:

```
text = 'lexIO is a Natural Language Processing (NLP) library in python'

# tokenizing
tokenized = processor.tokenize(text)

# guessing the topic
processor.get_topic(tokenized, tokenized=True)
# output: lexio
```
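For intuition, a frequency-based topic guesser can be sketched in plain Python. This is only an illustration of the general technique, not lexIO's actual implementation, and the stopword list here is a small hand-picked assumption:

```python
from collections import Counter

# Tiny illustrative stopword list; a real one would be much larger.
STOPWORDS = {"is", "a", "in", "the", "of", "and"}

def guess_topic(text):
    # Lowercase, split on whitespace, strip surrounding punctuation.
    tokens = [w.strip("().,").lower() for w in text.split()]
    # Drop stopwords and empty strings.
    tokens = [w for w in tokens if w and w not in STOPWORDS]
    # The most frequent remaining token is the guessed topic.
    return Counter(tokens).most_common(1)[0][0]

print(guess_topic("lexIO is a Natural Language Processing (NLP) library in python"))
# prints: lexio
```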

You can also get the most repeated words in the text after removing all the stopwords.

Stopwords are common words that don't carry much meaning on their own, like is, am, are, they, you, etc.

```
processor.highlights(text, highlights=5)
# output: {'python': 1, 'library': 1, 'nlp': 1, 'processing': 1, 'language': 1}
```
This returns the top 5 most repeated words.
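The counting behind this kind of highlight extraction can be approximated with `collections.Counter`. Again, this is a hedged sketch of the technique under an assumed stopword list, not lexIO's own code:

```python
from collections import Counter

# Assumed stopword list for illustration only.
STOPWORDS = {"is", "a", "in", "the", "they", "you", "am", "are"}

def top_words(text, n=5):
    # Normalize tokens, drop stopwords, and keep the n most frequent.
    tokens = [w.strip("().,").lower() for w in text.split()]
    counts = Counter(w for w in tokens if w and w not in STOPWORDS)
    return dict(counts.most_common(n))

text = "lexIO is a Natural Language Processing (NLP) library in python"
print(top_words(text))
```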

*Importing pre-built text datasets*

```
apple = lexio.datasets.load_apple
google = lexio.datasets.load_google
```

For more datasets, use:
```
print(dir(lexio.datasets.availables))
# output: ['load_apple', 'load_earth', 'load_nepal', 'load_AI', 'load_discipline', 'load_essay_AI', 'load_nature', 'load_google']
```
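If you want to collect every bundled dataset programmatically, filtering attribute names by the `load_` prefix is one way to do it. The snippet below uses a stand-in namespace rather than lexio itself, since the exact types behind the `load_*` attributes aren't documented here:

```python
from types import SimpleNamespace

# Stand-in for lexio.datasets: each load_* attribute holds a text sample.
datasets = SimpleNamespace(
    load_apple="Apple designs consumer electronics.",
    load_google="Google builds search and cloud products.",
)

# Collect every attribute whose name starts with "load_".
loaders = {name: getattr(datasets, name)
           for name in dir(datasets) if name.startswith("load_")}
print(sorted(loaders))
# prints: ['load_apple', 'load_google']
```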

            
