suparkanbun

Name: suparkanbun
Version: 1.5.2
Home-page: https://github.com/KoichiYasuoka/SuPar-Kanbun
Summary: Tokenizer, POS-tagger and dependency-parser for Classical Chinese
Upload time: 2024-02-29 02:33:54
Author: Koichi Yasuoka
Requires-Python: >=3.7
License: MIT
Keywords: nlp, chinese
            [![Current PyPI packages](https://badge.fury.io/py/suparkanbun.svg)](https://pypi.org/project/suparkanbun/)

# SuPar-Kanbun

Tokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with [spaCy](https://spacy.io), [Transformers](https://huggingface.co/transformers/) and [SuPar](https://github.com/yzhangcs/parser).

## Basic usage

```py
>>> import suparkanbun
>>> nlp=suparkanbun.load()
>>> doc=nlp("不入虎穴不得虎子")
>>> print(type(doc))
<class 'spacy.tokens.doc.Doc'>
>>> print(suparkanbun.to_conllu(doc))
# text = 不入虎穴不得虎子
1	不	不	ADV	v,副詞,否定,無界	Polarity=Neg	2	advmod	_	Gloss=not|SpaceAfter=No
2	入	入	VERB	v,動詞,行為,移動	_	0	root	_	Gloss=enter|SpaceAfter=No
3	虎	虎	NOUN	n,名詞,主体,動物	_	4	nmod	_	Gloss=tiger|SpaceAfter=No
4	穴	穴	NOUN	n,名詞,固定物,地形	Case=Loc	2	obj	_	Gloss=cave|SpaceAfter=No
5	不	不	ADV	v,副詞,否定,無界	Polarity=Neg	6	advmod	_	Gloss=not|SpaceAfter=No
6	得	得	VERB	v,動詞,行為,得失	_	2	parataxis	_	Gloss=get|SpaceAfter=No
7	虎	虎	NOUN	n,名詞,主体,動物	_	8	nmod	_	Gloss=tiger|SpaceAfter=No
8	子	子	NOUN	n,名詞,人,関係	_	6	obj	_	Gloss=child|SpaceAfter=No

>>> import deplacy
>>> deplacy.render(doc)
不 ADV  <════╗   advmod
入 VERB ═══╗═╝═╗ ROOT
虎 NOUN <╗ ║   ║ nmod
穴 NOUN ═╝<╝   ║ obj
不 ADV  <════╗ ║ advmod
得 VERB ═══╗═╝<╝ parataxis
虎 NOUN <╗ ║     nmod
子 NOUN ═╝<╝     obj
```
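
`suparkanbun.to_conllu(doc)` returns plain text in the tab-separated CoNLL-U format shown above. As a minimal sketch (pure standard-library Python, no suparkanbun dependency), the columns can be pulled back out like this, using the first two rows of the output above:

```py
# Parse CoNLL-U text (as produced by suparkanbun.to_conllu) into
# (id, form, upos, head, deprel) tuples.
conllu = """\
# text = 不入虎穴不得虎子
1\t不\t不\tADV\tv,副詞,否定,無界\tPolarity=Neg\t2\tadvmod\t_\tGloss=not|SpaceAfter=No
2\t入\t入\tVERB\tv,動詞,行為,移動\t_\t0\troot\t_\tGloss=enter|SpaceAfter=No
"""

def parse_conllu(text):
    rows = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip comment lines and blank sentence separators
        cols = line.split("\t")
        # columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
        rows.append((int(cols[0]), cols[1], cols[3], int(cols[6]), cols[7]))
    return rows

print(parse_conllu(conllu))
# [(1, '不', 'ADV', 2, 'advmod'), (2, '入', 'VERB', 0, 'root')]
```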

`suparkanbun.load()` takes two keyword options; the defaults are `suparkanbun.load(BERT="roberta-classical-chinese-base-char",Danku=False)`. With `Danku=True` the pipeline tries to segment sentences automatically. Available `BERT` options are:

* `BERT="roberta-classical-chinese-base-char"` utilizes [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char) (default)
* `BERT="roberta-classical-chinese-large-char"` utilizes [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char)
* `BERT="guwenbert-base"` utilizes [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base)
* `BERT="guwenbert-large"` utilizes [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large)
* `BERT="sikubert"` utilizes [SikuBERT](https://huggingface.co/SIKU-BERT/sikubert)
* `BERT="sikuroberta"` utilizes [SikuRoBERTa](https://huggingface.co/SIKU-BERT/sikuroberta)
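
The `BERT` option selects which pretrained model backs the pipeline. A hypothetical sketch (not suparkanbun's actual internals) of how the documented option names map to their Hugging Face repositories, with simple validation:

```py
# Hypothetical mapping of the documented BERT= option names to their
# Hugging Face model repositories; suparkanbun resolves these internally.
BERT_MODELS = {
    "roberta-classical-chinese-base-char": "KoichiYasuoka/roberta-classical-chinese-base-char",
    "roberta-classical-chinese-large-char": "KoichiYasuoka/roberta-classical-chinese-large-char",
    "guwenbert-base": "ethanyt/guwenbert-base",
    "guwenbert-large": "ethanyt/guwenbert-large",
    "sikubert": "SIKU-BERT/sikubert",
    "sikuroberta": "SIKU-BERT/sikuroberta",
}

def resolve_bert(name="roberta-classical-chinese-base-char"):
    """Return the Hugging Face repo for a BERT option name."""
    try:
        return BERT_MODELS[name]
    except KeyError:
        raise ValueError(f"unknown BERT option {name!r}; "
                         f"choose from {sorted(BERT_MODELS)}")

print(resolve_bert("sikubert"))  # SIKU-BERT/sikubert
```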

## Installation for Linux

```sh
pip3 install suparkanbun --user
```

## Installation for Cygwin64

Make sure the `python37-devel`, `python37-pip`, `python37-cython`, `python37-numpy`, `python37-wheel`, `gcc-g++`, `mingw64-x86_64-gcc-g++`, `git`, `curl`, `make`, and `cmake` packages are installed, and then:
```sh
curl -L https://raw.githubusercontent.com/KoichiYasuoka/CygTorch/master/installer/supar.sh | sh
pip3.7 install suparkanbun
```

## Installation for Jupyter Notebook (Google Colaboratory)

```py
!pip install suparkanbun
```

Try [notebook](https://colab.research.google.com/github/KoichiYasuoka/SuPar-Kanbun/blob/main/suparkanbun.ipynb) for Google Colaboratory.

## Author

Koichi Yasuoka (安岡孝一)

## Reference

Koichi Yasuoka, Christian Wittern, Tomohiko Morioka, Takumi Ikeda, Naoki Yamazaki, Yoshihiro Nikaido, Shingo Suzuki, Shigeki Moro, Kazunori Fujita: [Designing Universal Dependencies for Classical Chinese and Its Application](http://id.nii.ac.jp/1001/00216242/), Journal of Information Processing Society of Japan, Vol.63, No.2 (February 2022), pp.355-363.

            
