# spaCy Token Parser (spacy-token-parser)
Use spaCy to parse input tokens.
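## Installation
The package is published on PyPI, so it should be installable with `pip install spacy-token-parser` (it declares support for Python `>=3.8.5,<4.0.0`; see the package metadata below).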
## Usage
Call the service with the following code:
```python
from spacy_token_parser import parse_tokens

# input_text is any whitespace-delimited string
tokens, doc = parse_tokens(input_text.split())
```
The output is a tuple. The first element is a list of token dictionaries (shown below); the second is a wrapped instance of `spacy.tokens.doc.Doc`.
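For example (a minimal sketch; the input string and variable names are illustrative), the token dictionaries can be iterated over directly:
```python
from spacy_token_parser import parse_tokens

# illustrative input matching the sample output below
tokens, doc = parse_tokens("american silent feature films".split())

# each entry in `tokens` is a plain dict with the keys shown in the sample output
for token in tokens:
    print(token["text"], token["pos"], token["dep"], token["lemma"])
```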
### List Output
```json
[
  {
    "dep": "compound",
    "ent": "NORP",
    "head": "5665575797947403677",
    "id": "6042939320535660714",
    "is_alpha": true,
    "is_punct": false,
    "is_stop": false,
    "is_wordnet": true,
    "lemma": "american",
    "noun_number": "singular",
    "other": {
      "head_i": 3,
      "head_idx": 24,
      "head_orth": 5665575797947403677,
      "head_text": "films",
      "i": 0,
      "idx": 0,
      "orth": 6042939320535660714
    },
    "pos": "PROPN",
    "sentiment": 0.0,
    "shape": "xxxx",
    "tag": "NNP",
    "tense": "",
    "text": "american",
    "verb_form": "",
    "x": 0,
    "y": 8
  },
  {
    "dep": "compound",
    "ent": "",
    "head": "5665575797947403677",
    "id": "16602643206033239142",
    "is_alpha": true,
    "is_punct": false,
    "is_stop": false,
    "is_wordnet": true,
    "lemma": "silent",
    "noun_number": "singular",
    "other": {
      "head_i": 3,
      "head_idx": 24,
      "head_orth": 5665575797947403677,
      "head_text": "films",
      "i": 1,
      "idx": 9,
      "orth": 16602643206033239142
    },
    "pos": "PROPN",
    "sentiment": 0.0,
    "shape": "xxxx",
    "tag": "NNP",
    "tense": "",
    "text": "silent",
    "verb_form": "",
    "x": 8,
    "y": 14
  },
  {
    "dep": "compound",
    "ent": "",
    "head": "5665575797947403677",
    "id": "16417888112635110788",
    "is_alpha": true,
    "is_punct": false,
    "is_stop": false,
    "is_wordnet": true,
    "lemma": "feature",
    "noun_number": "singular",
    "other": {
      "head_i": 3,
      "head_idx": 24,
      "head_orth": 5665575797947403677,
      "head_text": "films",
      "i": 2,
      "idx": 16,
      "orth": 16417888112635110788
    },
    "pos": "NOUN",
    "sentiment": 0.0,
    "shape": "xxxx",
    "tag": "NN",
    "tense": "",
    "text": "feature",
    "verb_form": "",
    "x": 14,
    "y": 21
  },
  {
    "dep": "ROOT",
    "ent": "",
    "head": "5665575797947403677",
    "id": "5665575797947403677",
    "is_alpha": true,
    "is_punct": false,
    "is_stop": false,
    "is_wordnet": true,
    "lemma": "film",
    "noun_number": "plural",
    "other": {
      "head_i": 3,
      "head_idx": 24,
      "head_orth": 5665575797947403677,
      "head_text": "films",
      "i": 3,
      "idx": 24,
      "orth": 5665575797947403677
    },
    "pos": "NOUN",
    "sentiment": 0.0,
    "shape": "xxxx",
    "tag": "NNS",
    "tense": "",
    "text": "films",
    "verb_form": "",
    "x": 21,
    "y": 26
  }
]
```
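In the sample above, each token's `head` matches the `id` of its syntactic head, so the dependency structure can be walked without touching the spaCy `Doc`. A minimal sketch (variable names are illustrative; it assumes the `(tokens, doc)` tuple described above):
```python
from collections import defaultdict

from spacy_token_parser import parse_tokens

tokens, _ = parse_tokens("american silent feature films".split())

# group dependents under the id of their head token
children = defaultdict(list)
for token in tokens:
    if token["dep"] != "ROOT":
        children[token["head"]].append(token["text"])

# the ROOT token is its own head
root = next(t for t in tokens if t["dep"] == "ROOT")
print(root["text"], "<-", children[root["id"]])
# with the sample above: films <- ['american', 'silent', 'feature']
```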
## Package Metadata
- Name: `spacy-token-parser`
- Version: 0.1.16
- Author / Maintainer: Craig Trim (craigtrim@gmail.com)
- Requires Python: `>=3.8.5,<4.0.0`
- Keywords: nlp, nlu, ai, parser, spacy
- Homepage / Repository: https://github.com/craigtrim/spacy-token-parser
- Bug Tracker: https://github.com/craigtrim/spacy-token-parser/issues
- Distributions (uploaded 2023-07-15):
  - [spacy_token_parser-0.1.16-py3-none-any.whl](https://files.pythonhosted.org/packages/b6/8a/5cede13645b09a28c1f4a366c2b6b8b78b859ecf85cf78bdccab66db2e55/spacy_token_parser-0.1.16-py3-none-any.whl)
  - [spacy_token_parser-0.1.16.tar.gz](https://files.pythonhosted.org/packages/9a/cc/bb39463f6ad21d0d8cf54960f083b513b2f8290e6f4341368838e8a185d4/spacy_token_parser-0.1.16.tar.gz)