| Field | Value |
| :-------------- | :---- |
| Name | xiwen |
| Version | 0.2.1 |
| Summary | A tool to scan HTML for Chinese characters |
| Upload time | 2024-06-17 03:19:46 |
| Home page | None |
| Author | None |
| Maintainer | None |
| Docs URL | None |
| Requires Python | >=3.9 |
| License | MIT License, Copyright (c) 2024 Elliott Steer |
| Keywords | cli, chinese, tool |
| Requirements | beautifulsoup4, certifi, charset-normalizer, idna, masquer, polars, requests, soupsieve, urllib3 |
<h1 align="center">Xiwen 析文</h1>
<p align="center">
<a href="https://github.com/essteer/xiwen/actions/workflows/test.yaml"><img src="https://github.com/essteer/xiwen/actions/workflows/test.yaml/badge.svg"></a>
<a href="https://github.com/essteer/xiwen"><img src="https://img.shields.io/badge/Python-3.9_|_3.10_|_3.11_|_3.12-3776AB.svg?style=flat&logo=Python&logoColor=white"></a>
<a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json"></a>
<a href="https://snyk.io/test/github/essteer/xiwen"><img src="https://snyk.io/test/github/essteer/xiwen/badge.svg?name=Snyk&style=flat&logo=Snyk"></a>
</p>
<p align="center">
A tool to scan HTML for Chinese characters.
</p>
## Overview
Use Xiwen to scan websites for Chinese characters — hanzi — and:
- analyse the content by HSK grade
- identify character variants
- export character sets for further use
The analysis breaks the content down by HSK grade (see below), and character lists can be exported for any combination of those levels, or for less common hanzi beyond the HSK grades.
Data exports provide hanzi by HSK grade in traditional and simplified Chinese, their pinyin, count within the text, and character frequency.
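As a minimal sketch of the general idea — not Xiwen's own implementation, which uses BeautifulSoup and covers wider Unicode ranges — hanzi can be pulled out of raw HTML and tallied with a regular expression over the core CJK Unified Ideographs block:

```python
import re
from collections import Counter

# Match characters in the CJK Unified Ideographs block only;
# Xiwen's real extraction handles additional ranges.
HANZI = re.compile(r"[\u4e00-\u9fff]")

def count_hanzi(html: str) -> Counter:
    """Return a Counter of each hanzi found in the HTML string."""
    return Counter(HANZI.findall(html))

counts = count_hanzi("<p>你好，你好世界</p>")
print(counts.most_common(2))  # [('你', 2), ('好', 2)]
```

Markup, punctuation, and Latin text fall outside the character class, so only the hanzi themselves are counted.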
## Who this is for
Mandarin learners can use Xiwen to determine the expected difficulty of an article or book relative to their current reading level, and create character lists for further study.
Instructors can use it to assess the suitability of reading materials for their students, and produce vocabulary lists.
## HSK
HSK — Hanyu Shuiping Kaoshi 汉语水平考试 — is a series of examinations designed to test Chinese language proficiency in simplified Chinese.
In its latest form the HSK consists of nine levels, and covers 3,000 simplified hanzi and 11,092 vocabulary items. The advanced levels — seven to nine — share 1,200 hanzi that are tested together.
To approximate a traditional hanzi version of the HSK, Xiwen maps the HSK hanzi to traditional Chinese equivalents. In most cases this is a one-to-one conversion, but in several cases two or more traditional hanzi reflect distinct meanings of a single simplified character.
For example:
- "发": ["發", "髮"]
- "了": ["了", "瞭"]
- "面": ["面", "麵"]
Or even:
- "只": ["只", "衹", "隻"]
- "台": ["台", "檯", "臺", "颱"]
A list of these "polymaps" — not all of which relate to hanzi in the HSK — can be found in the Wikipedia article [Ambiguous character mappings](https://en.wikipedia.org/wiki/Ambiguities_in_Chinese_character_simplification).
This approach isn't perfect: obscure definitions implied by a distinct traditional hanzi may be far less frequent than the common conversion of a simplified hanzi.
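The "polymap" idea can be illustrated as a simple lookup table (a hypothetical sketch using the examples above, not Xiwen's actual data structure):

```python
# One simplified hanzi can map to several traditional equivalents.
POLYMAP = {
    "发": ["發", "髮"],
    "了": ["了", "瞭"],
    "面": ["面", "麵"],
    "只": ["只", "衹", "隻"],
    "台": ["台", "檯", "臺", "颱"],
}

def traditional_candidates(simp: str) -> list[str]:
    # Fall back to the character itself when no mapping is known.
    return POLYMAP.get(simp, [simp])

print(traditional_candidates("台"))  # ['台', '檯', '臺', '颱']
```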
The table below lists the number of simplified hanzi per grade, and the number of mappings to traditional equivalents.
| HSK Grade | Simp. Hanzi | Running Total | Trad. Hanzi Equivalents | Running Total |
| :-------: | :---------: | :-----------: | :---------------------: | :-----------: |
| 1 | 300 | 300 | 313 | 313 |
| 2 | 300 | 600 | 314 | 627 |
| 3 | 300 | 900 | 312 | 939 |
| 4 | 300 | 1200 | 316 | 1255 |
| 5 | 300 | 1500 | 310 | 1565 |
| 6 | 300 | 1800 | 310 | 1875 |
| 7-9 | 1200 | 3000 | 1214 | 3089 |
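The running totals in the table are plain cumulative sums over the per-grade counts, which can be verified directly:

```python
from itertools import accumulate

# Per-grade counts from the table above (grades 1-6, then 7-9).
simp = [300, 300, 300, 300, 300, 300, 1200]
trad = [313, 314, 312, 316, 310, 310, 1214]

print(list(accumulate(simp)))  # [300, 600, 900, 1200, 1500, 1800, 3000]
print(list(accumulate(trad)))  # [313, 627, 939, 1255, 1565, 1875, 3089]
```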
## Installation
### GitHub repo
[![](https://img.shields.io/badge/GitHub-xiwen-181717.svg?flat&logo=GitHub&logoColor=white)](https://github.com/essteer/xiwen)
Clone `xiwen` from GitHub for the full code, the files used to generate the character lists, and the test suite.
```console
$ git clone git@github.com:essteer/xiwen
```
Change into the `xiwen` directory, then create and activate a virtual environment — the example below uses [Astral's](https://astral.sh/blog/uv) `uv`; substitute `pip` or another package manager as needed — then install the `dev` dependencies:
![](https://img.shields.io/badge/Linux-FCC624.svg?style=flat&logo=Linux&logoColor=black)
![](https://img.shields.io/badge/macOS-000000.svg?style=flat&logo=Apple&logoColor=white)
```console
$ uv venv
$ source .venv/bin/activate
$ uv pip install -r requirements.txt
```
![](https://img.shields.io/badge/Windows-0078D4.svg?style=flat&logo=Windows&logoColor=white)
```console
$ uv venv
$ .venv\Scripts\activate
$ uv pip install -r requirements.txt
```
## Operation
### GitHub repo
[![](https://img.shields.io/badge/GitHub-xiwen-181717.svg?flat&logo=GitHub&logoColor=white)](https://github.com/essteer/xiwen)
To run `xiwen` as a CLI tool, navigate to the project root directory and run:
![](https://img.shields.io/badge/Linux-FCC624.svg?style=flat&logo=Linux&logoColor=black)
![](https://img.shields.io/badge/macOS-000000.svg?style=flat&logo=Apple&logoColor=white)
```console
$ source .venv/bin/activate
$ python3 -m main
```
![](https://img.shields.io/badge/Windows-0078D4.svg?style=flat&logo=Windows&logoColor=white)
```console
$ .venv\Scripts\activate
$ python -m main
```
The `src/resources/` directory contains `main.py`, which was used to create the dataset needed to run this program under `src/xiwen/assets/` by pairing simplified and traditional character sets with their pinyin, HSK grades, and character frequencies as identified in the MTSU dataset. The source data is kept under `src/resources/assets/`.
The functional program is contained in `src/xiwen/`. `interface.py` is the interactive component for the CLI tool. It receives user input and makes function calls to modules in `utils/`. Those files form the program's ETL pipeline including the following functions:
- break down text into individual hanzi (`extract.py`)
- sort hanzi as HSK-level simplified or traditional hanzi, or outliers (`transform.py`)
- determine the overall character variant of the text as simplified or traditional, or a mix (`analyse.py`)
- compute the grade-based and cumulative numbers of unique hanzi and total hanzi in the text (`analyse.py`)
Character sets can then be exported to CSV.
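The ETL flow described above can be sketched as follows. This is a hypothetical illustration of the pipeline's shape: the real functions live in `src/xiwen/utils/` and their names and signatures may differ.

```python
from collections import Counter

def extract(text: str) -> list[str]:
    # extract.py stage: break text down into individual hanzi.
    return [c for c in text if "\u4e00" <= c <= "\u9fff"]

def transform(hanzi: list[str], hsk_simp: set[str], hsk_trad: set[str]):
    # transform.py stage: sort hanzi into HSK simplified,
    # HSK traditional, or outliers.
    simp = [h for h in hanzi if h in hsk_simp]
    trad = [h for h in hanzi if h in hsk_trad and h not in hsk_simp]
    outliers = [h for h in hanzi if h not in hsk_simp and h not in hsk_trad]
    return simp, trad, outliers

def analyse(simp: list[str], trad: list[str]):
    # analyse.py stage: judge the text's overall variant and
    # tally per-character counts.
    variant = "Simplified" if len(simp) >= len(trad) else "Traditional"
    return variant, Counter(simp + trad)
```

Chaining the three stages over a text and a pair of HSK character sets yields the variant verdict and the counts that feed the CSV export.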
## Sources
This repo makes use of public-domain HSK vocabulary datasets and character frequency lists, as indicated below; credit goes to those involved in their creation and distribution.
- Hanyu Shuiping Kaoshi (HSK) 3.0 character list: "hsk30-chars.csv", [hsk30](https://github.com/ivankra/hsk30), ivankra, GitHub
- Character frequency list: "CharFreq-Modern.csv", Da, Jun. 2004, [Chinese text computing](http://lingua.mtsu.edu/chinese-computing), Middle Tennessee State University
- Multiple character mappings: "[Ambiguous character mappings](https://en.wikipedia.org/wiki/Ambiguities_in_Chinese_character_simplification)", Wikipedia
- Simplified character set demo: "[Folding Beijing](https://web.archive.org/web/20160822161228/http://jessica-hjf.blog.163.com/blog/static/278128102015240444791/)" 《北京折叠》, Hao Jingfang 郝景芳, 2012
- Traditional character set demo: "Tao Te Ching" 《道德經》, Lao Tzu 老子, 400BC