# CharCut
Character-based MT evaluation and difference highlighting
CharCut compares outputs of MT systems with reference translations. It can compare multiple file pairs simultaneously and produce HTML outputs showing character-based differences along with scores that are directly inferred from the lengths of those differences, thus making the link between evaluation and visualisation straightforward.
The matching algorithm is based on an iterative search for longest common substrings, combined with a length-based threshold that limits short and noisy character matches. As a similarity metric this is not new, but to the best of our knowledge it had never been applied to the highlighting and scoring of MT outputs. It has the neat effect of keeping character-based differences readable by humans.
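The iterative search can be sketched in a few lines. This is a simplified illustration, not the package's actual implementation; `find_matches` and its use of `difflib` are assumptions made for this sketch:

```python
from difflib import SequenceMatcher
from typing import List

def find_matches(cand: str, ref: str, min_size: int = 3) -> List[str]:
    """Recursively collect common substrings of at least `min_size` characters."""
    m = SequenceMatcher(a=cand, b=ref, autojunk=False).find_longest_match(
        0, len(cand), 0, len(ref)
    )
    # The length threshold filters out short, noisy matches.
    if m.size < min_size:
        return []
    # Recurse on the unmatched text before and after the match on both sides.
    left = find_matches(cand[: m.a], ref[: m.b], min_size)
    right = find_matches(cand[m.a + m.size :], ref[m.b + m.size :], min_size)
    return left + [cand[m.a : m.a + m.size]] + right

print(find_matches("the cat sat", "the cat sits"))
```

Everything not covered by a match is a difference, and the total length of those differences is what the score is derived from.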
Incidentally, the scores inferred from those differences correlate very well with human judgments, similarly to other character-based metrics like [chrF(++)](https://github.com/m-popovic/chrF) or [CharacTER](https://github.com/rwth-i6/CharacTER). The metric was evaluated in:
> Adrien Lardilleux and Yves Lepage: "CharCut: Human-Targeted Character-Based MT Evaluation with Loose Differences". In [Proceedings of IWSLT 2017](http://workshop2017.iwslt.org/64.php).
CharCut is intended to be lightweight and easy to use, so the HTML outputs are deliberately kept simple and will stay that way.
Note (Bram Vanroy): the remainder of this README has been updated to reflect changes that make the package more usable from Python, e.g., by accepting hypotheses/references directly instead of requiring files.
## Installation
```shell
pip install charcut
```
This will install the command `calculate-charcut`.
Basic usage:
```shell
calculate-charcut cand.txt,ref.txt
```
where `cand.txt` and `ref.txt` contain corresponding candidate (MT) and reference (human) segments, one per line. Multiple file pairs can be specified on the command line: candidates with references, candidates with other candidates, etc.
By default, only document-level scores are displayed on standard output. To produce an HTML output file, use the `-o` option:
```shell
calculate-charcut cand.txt,ref.txt -o mydiff.html
```
A few more options are available; call
```shell
calculate-charcut -h
```
to list them.
Consider lowering the `-m` option value (minimum match size) for non-alphabetical writing systems such as Chinese or Japanese. The default value (3 characters) should be acceptable for most European languages, but depending on the language and data, larger values might produce better-looking results.
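For instance, for Chinese or Japanese text one might drop the minimum match size to a single character (the file names below are placeholders):

```shell
calculate-charcut cand.zh.txt,ref.zh.txt -m 1
```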
## Modifications by Bram Vanroy
Bram Vanroy made some changes to this package that do not affect the metric's results but should improve usability. He also packaged the library for pip and added tests to verify that it produces the same results as the original library. The code has been rewritten to make it easier to use from within Python, without requiring files as input. In Python, the following entry point now exists:
```python
def calculate_charcut(
    hyps: Union[str, List[str]],
    refs: Union[str, List[str]],
    html_output_file: Optional[str] = None,
    plain_output_file: Optional[str] = None,
    src_file: Optional[str] = None,
    match_size: int = 3,
    alt_norm: bool = False,
    verbose: bool = False,
) -> Tuple[float, int]:
```
where `hyps` and `refs` are individual sentences (`str`) or lists of sentences (`List[str]`). This function has the same capabilities and arguments as the command-line script discussed above, which is now installed as an entry point (`calculate-charcut`) rather than shipped as a separate Python script.
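Assuming the function is importable from the top-level `charcut` package (an assumption based on the packaging described above), usage might look like this; the example sentences are made up:

```python
from charcut import calculate_charcut

# CharCut is a distance-like score: lower means the hypothesis
# is closer to the reference.
score, total_length = calculate_charcut(
    hyps=["The cat sat on the mat."],
    refs=["The cat sits on the mat."],
    match_size=3,
)
print(score, total_length)
```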
## License
[GPLv3](LICENSE)