# Sudachi Transformers (chiTra)
[Python](https://www.python.org/downloads/release/python-360/)
[Test](https://github.com/WorksApplications/SudachiTra/actions/workflows/test.yaml)
[License](https://github.com/WorksApplications/SudachiTra/blob/main/LICENSE)
chiTra provides pre-trained language models and a Japanese tokenizer for [Transformers](https://github.com/huggingface/transformers).

chiTra stands for Suda**chi Tra**nsformers.
## Pretrained Models

The models are generously hosted by AWS through their [Open Data Sponsorship Program](https://registry.opendata.aws/sudachi/).
| Version | Normalized | SudachiTra | Sudachi | SudachiDict | Text | Pretrained Model |
| ------- | ---------------------- | ---------- | ------- | ------------- | ------------ | ------------------------------------------------------------------------------------------- |
| v1.0 | normalized_and_surface | v0.1.7 | 0.6.2 | 20211220-core | NWJC (109GB) | 395 MB ([tar.gz](https://sudachi.s3.ap-northeast-1.amazonaws.com/chitra/chiTra-1.0.tar.gz)) |
| v1.1 | normalized_nouns | v0.1.8 | 0.6.6 | 20220729-core | NWJC with additional cleaning (79GB) | 396 MB ([tar.gz](https://sudachi.s3.ap-northeast-1.amazonaws.com/chitra/chiTra-1.1.tar.gz)) |
### Features

- Training on large-scale text
  - Models are trained on the NINJAL Web Japanese Corpus (NWJC), covering a wide variety of expressions and domains.
- Use of Sudachi
  - The morphological analyzer Sudachi is used to reduce the negative effects of orthographic variation (see the sketch below).
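Concretely, Sudachi's normalized forms map different spellings of the same word to a single canonical form, keeping the model vocabulary compact. A minimal sketch using SudachiPy directly (requires `pip install sudachipy sudachidict_core`; the exact normalized forms depend on the dictionary version):

```python
from sudachipy import dictionary, tokenizer

# Tokenize variant spellings and compare their normalized forms.
# With the core dictionary, "すだち" is normalized to "酢橘"
# (the same mapping seen in the Quick Tour example below).
tok = dictionary.Dictionary().create()  # SudachiDict-core by default
mode = tokenizer.Tokenizer.SplitMode.C

for text in ["すだち", "スダチ", "酢橘"]:
    print(text, "->", [m.normalized_form() for m in tok.tokenize(text, mode)])
```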
# How to use chiTra

## Quick Tour

Requirements:
```bash
$ pip install sudachitra
$ wget https://sudachi.s3.ap-northeast-1.amazonaws.com/chitra/chiTra-1.1.tar.gz
$ tar -zxvf chiTra-1.1.tar.gz
```
Load the model:
```python
>>> from sudachitra.tokenization_bert_sudachipy import BertSudachipyTokenizer
>>> from transformers import BertModel
>>> tokenizer = BertSudachipyTokenizer.from_pretrained('chiTra-1.1')
>>> tokenizer.tokenize("選挙管理委員会とすだち")
['選挙', '##管理', '##委員会', 'と', '酢', '##橘']
>>> model = BertModel.from_pretrained('chiTra-1.1')
>>> model(**tokenizer("まさにオールマイティーな商品だ。", return_tensors="pt")).last_hidden_state
tensor([[[ 0.8583, -1.1752, -0.7987,  ..., -1.1691, -0.8355,  3.4678],
         [ 0.0220,  1.1702, -2.3334,  ...,  0.6673, -2.0774,  2.7731],
         [ 0.0894, -1.3009,  3.4650,  ..., -0.1140,  0.1767,  1.9859],
         ...,
         [-0.4429, -1.6267, -2.1493,  ..., -1.7801, -1.8009,  2.5343],
         [ 1.7204, -1.0540, -0.4362,  ..., -0.0228,  0.5622,  2.5800],
         [ 1.1125, -0.3986,  1.8532,  ..., -0.8021, -1.5888,  2.9520]]],
       grad_fn=<NativeLayerNormBackward0>)
```
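The `last_hidden_state` tensor has shape `(batch, sequence_length, hidden_size)`. A minimal sketch of one common way to reduce it to a single sentence vector (mean pooling over non-padding tokens; the pooling is an illustration, not part of chiTra itself):

```python
import torch
from sudachitra.tokenization_bert_sudachipy import BertSudachipyTokenizer
from transformers import BertModel

tokenizer = BertSudachipyTokenizer.from_pretrained('chiTra-1.1')
model = BertModel.from_pretrained('chiTra-1.1')

inputs = tokenizer("まさにオールマイティーな商品だ。", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden)

# Average over real tokens only, using the attention mask as weights.
mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
sentence_vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_vec.shape)                        # torch.Size([1, hidden])
```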
## Installation
```shell script
$ pip install sudachitra
```
The default [Sudachi dictionary](https://github.com/WorksApplications/SudachiDict) is [SudachiDict-core](https://pypi.org/project/SudachiDict-core/).

You can also install and use other dictionaries, such as [SudachiDict-small](https://pypi.org/project/SudachiDict-small/) and [SudachiDict-full](https://pypi.org/project/SudachiDict-full/).<br/>
In that case, install the dictionaries you need as shown below.<br/>
Note that the pretrained models above were trained with SudachiDict-core.
```shell script
$ pip install sudachidict_small sudachidict_full
```
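An installed dictionary can then be selected by name when creating a SudachiPy tokenizer. A minimal sketch, assuming SudachiPy 0.6+, where the `dict` argument selects among the installed dictionary packages:

```python
from sudachipy import dictionary, tokenizer

# Use the "small" dictionary instead of the default "core".
tok = dictionary.Dictionary(dict="small").create()
mode = tokenizer.Tokenizer.SplitMode.C
print([m.surface() for m in tok.tokenize("形態素解析器", mode)])
```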
## Pretraining

For details on how the models were pretrained, please refer to [pretraining/bert/README.md](https://github.com/WorksApplications/SudachiTra/tree/main/pretraining/bert).

## For Developers
TBD
## License
Copyright (c) 2022 National Institute for Japanese Language and Linguistics and Works Applications Co., Ltd. All rights reserved.
"chiTra"は [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) で [国立国語研究所](https://www.ninjal.ac.jp/) 及び [株式会社ワークスアプリケーションズ](https://www.worksap.co.jp/) によって提供されています。 / "chiTra" is distributed by [National Institute for Japanese Language and Linguistics](https://www.ninjal.ac.jp/) and [Works Applications Co.,Ltd.](https://www.worksap.co.jp/) under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Contact us

For questions and discussion, open an issue or join our Slack workspace.

We have a Slack workspace for developers and users to ask questions and discuss:
https://sudachi-dev.slack.com/ (get an invitation [here](https://join.slack.com/t/sudachi-dev/shared_invite/enQtMzg2NTI2NjYxNTUyLTMyYmNkZWQ0Y2E5NmQxMTI3ZGM3NDU0NzU4NGE1Y2UwYTVmNTViYjJmNDI0MWZiYTg4ODNmMzgxYTQ3ZmI2OWU)).
## Citing chiTra

We have published the following paper about chiTra:
- 勝田哲弘, 林政義, 山村崇, Tolmachev Arseny, 高岡一馬, 内田佳孝, 浅原正幸, 単語正規化による表記ゆれに頑健な BERT モデルの構築. 言語処理学会第28回年次大会, 2022.
When citing chiTra in papers, books, or services, please use the following BibTeX entry:
```
@INPROCEEDINGS{katsuta2022chitra,
    author    = {勝田哲弘, 林政義, 山村崇, Tolmachev Arseny, 高岡一馬, 内田佳孝, 浅原正幸},
    title     = {単語正規化による表記ゆれに頑健な BERT モデルの構築},
    booktitle = "言語処理学会第28回年次大会 (NLP2022)",
    year      = "2022",
    pages     = "",
    publisher = "言語処理学会",
}
```
### Models used in the experiments

The models used in the experiments of 「単語正規化による表記ゆれに頑健なBERTモデルの構築」 are available below.
| Normalized | Text | Pretrained Model |
| ---------------------- | -------- | ---------------------------------------------------------------------------------------------------------------- |
| surface | Wiki-40B | [tar.gz](https://sudachi.s3.ap-northeast-1.amazonaws.com/chitra/nlp2022/Wikipedia_surface.tar.gz) |
| normalized_and_surface | Wiki-40B | [tar.gz](https://sudachi.s3.ap-northeast-1.amazonaws.com/chitra/nlp2022/Wikipedia_normalized_and_surface.tar.gz) |
| normalized_conjugation | Wiki-40B | [tar.gz](https://sudachi.s3.ap-northeast-1.amazonaws.com/chitra/nlp2022/Wikipedia_normalized_conjugation.tar.gz) |
| normalized | Wiki-40B | [tar.gz](https://sudachi.s3.ap-northeast-1.amazonaws.com/chitra/nlp2022/Wikipedia_normalized.tar.gz) |
Enjoy chiTra!