# LingPatLab: Linguistic Pattern Laboratory
## Overview
LingPatLab is a robust API designed to perform advanced Natural Language Processing (NLP) tasks, utilizing the capabilities of the spaCy library. This tool is expertly crafted to convert raw textual data into structured, analyzable forms. It is ideal for developers, researchers, and linguists who require comprehensive processing capabilities, from tokenization to sophisticated text summarization.
## Features
- **Tokenization**: Splits raw text into individual tokens.
- **Parsing**: Analyzes tokens to construct sentences with detailed linguistic annotations.
- **Phrase Extraction**: Identifies and extracts significant phrases from sentences.
- **Text Summarization**: Produces concise summaries of input text, optionally leveraging extracted phrases.
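The pipeline these features form can be illustrated with a self-contained sketch. Note that this is plain Python, not the LingPatLab API: the naive regex tokenizer, capitalized-bigram phrase extractor, and leading-sentence summarizer below are stand-ins for spaCy's statistical models, shown only to clarify what each stage produces.

```python
import re


def tokenize(text: str) -> list[str]:
    """Split raw text into word and punctuation tokens (naive stand-in for spaCy)."""
    return re.findall(r"\w+|[^\w\s]", text)


def extract_phrases(tokens: list[str]) -> list[str]:
    """Collect adjacent capitalized tokens as a crude proxy for significant phrases."""
    phrases = []
    for a, b in zip(tokens, tokens[1:]):
        if a[:1].isupper() and b[:1].isupper():
            phrases.append(f"{a} {b}")
    return phrases


def summarize(text: str, max_sentences: int = 1) -> str:
    """Keep only the leading sentences: a trivial extractive summary."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])


text = "Natural Language Processing is fun. It powers search engines."
tokens = tokenize(text)          # tokenization
phrases = extract_phrases(tokens)  # phrase extraction
summary = summarize(text)        # summarization
```

Each stage consumes the previous stage's output, which mirrors how the API's tokenization, parsing, phrase-extraction, and summarization steps chain together.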
## Usage
To get started with LingPatLab, instantiate the API as follows (the import path below assumes the top-level `lingpatlab` package):
```python
from lingpatlab import LingPatLab

api = LingPatLab()
```
### Tokenization and Parsing
To tokenize and parse input text into structured sentences:
```python
parsed_sentence: Sentence = api.parse_input_text("Your input text here.")
print(parsed_sentence.to_string())
```
### Phrase Extraction
To extract phrases from a parsed `Sentences` object:
```python
from typing import List

phrases: List[str] = api.extract_topics(parsed_sentences)
for phrase in phrases:
    print(phrase)
```
### Summarization
To generate a summary of the input text:
```python
summary: str = api.generate_summary("Your input text here.")
print(summary)
```
### Data Classes
LingPatLab utilizes several custom data classes to structure the data throughout the NLP process:
- `Sentence`: Represents a single sentence, containing a list of tokens (`SpacyResult` objects).
- `Sentences`: Represents a collection of sentences, useful for processing paragraphs or multiple lines of text.
- `SpacyResult`: Encapsulates the detailed analysis of a single token, including part of speech, dependency relations, and additional linguistic features.
- `OtherInfo`: Contains additional information about a token, particularly in relation to its syntactic head.
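The relationships among these classes can be sketched with plain dataclasses. The field names and methods here are illustrative assumptions based on the descriptions above, not the library's actual definitions:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class OtherInfo:
    """Additional token information, e.g. about its syntactic head."""
    head_text: str
    head_index: int


@dataclass
class SpacyResult:
    """Detailed analysis of a single token."""
    text: str
    pos: str                        # part-of-speech tag
    dep: str                        # dependency relation
    other: Optional[OtherInfo] = None


@dataclass
class Sentence:
    """A single sentence: a list of analyzed tokens."""
    tokens: List[SpacyResult] = field(default_factory=list)

    def to_string(self) -> str:
        return " ".join(t.text for t in self.tokens)


@dataclass
class Sentences:
    """A collection of sentences, e.g. a paragraph."""
    sentences: List[Sentence] = field(default_factory=list)
```

The nesting runs top-down: a `Sentences` collection holds `Sentence` objects, each of which holds per-token `SpacyResult` entries, which may in turn carry `OtherInfo` about their syntactic heads.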