unbabel-comet

Name: unbabel-comet
Version: 2.2.2
Home page: https://github.com/Unbabel/COMET
Summary: High-quality Machine Translation Evaluation
Upload time: 2024-03-13 11:27:34
Author: Ricardo Rei, Craig Stewart, Catarina Farinha, Alon Lavie
Requires Python: >=3.8.0,<4.0.0
License: Apache-2.0
Keywords: machine translation, evaluation, unbabel, comet
            <p align="center">
  <img src="https://raw.githubusercontent.com/Unbabel/COMET/master/docs/source/_static/img/COMET_lockup-dark.png">
  <br />
  <br />
  <a href="https://github.com/Unbabel/COMET/blob/master/LICENSE"><img alt="License" src="https://img.shields.io/github/license/Unbabel/COMET" /></a>
  <a href="https://github.com/Unbabel/COMET/stargazers"><img alt="GitHub stars" src="https://img.shields.io/github/stars/Unbabel/COMET" /></a>
  <a href=""><img alt="PyPI" src="https://img.shields.io/pypi/v/unbabel-comet" /></a>
  <a href="https://github.com/psf/black"><img alt="Code Style" src="https://img.shields.io/badge/code%20style-black-black" /></a>
</p>

**NEWS:** 
1) [AfriCOMET](https://arxiv.org/pdf/2311.09828.pdf) released: a new model to embrace under-resourced African languages.
2) We released our new eXplainable COMET models ([XCOMET-XL](https://huggingface.co/Unbabel/XCOMET-XL) and [-XXL](https://huggingface.co/Unbabel/XCOMET-XXL)), which, along with quality scores, detect which errors in the translation are minor, major, or critical according to the MQM typology.
3) We released the [CometKiwi-XL (3.5B)](https://huggingface.co/Unbabel/wmt23-cometkiwi-da-xl) and [-XXL (10.7B)](https://huggingface.co/Unbabel/wmt23-cometkiwi-da-xxl) QE models. These were the best-performing QE models on the WMT23 QE shared task.

Please check all available models [here](https://github.com/Unbabel/COMET/blob/master/MODELS.md).
 
# Quick Installation

COMET requires Python 3.8 or above. Simple installation from PyPI:

```bash
pip install --upgrade pip  # ensures that pip is current 
pip install unbabel-comet
```
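
To verify the installation, you can check that the main entry points import cleanly. This is a minimal sanity check of our own, not an official step:

```python
# Minimal sanity check that unbabel-comet installed correctly.
from comet import download_model, load_from_checkpoint

print("imports OK:", download_model.__name__, load_from_checkpoint.__name__)
```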

**Note:** To use some COMET models, such as `Unbabel/wmt22-cometkiwi-da`, you must acknowledge their license on the Hugging Face Hub and [log in to the Hugging Face Hub](https://huggingface.co/docs/huggingface_hub/quick-start#:~:text=Once%20you%20have%20your%20User%20Access%20Token%2C%20run%20the%20following%20command%20in%20your%20terminal%3A).
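
If you prefer to authenticate from Python rather than the terminal, the `huggingface_hub` library exposes a `login` helper; a minimal sketch, assuming you have already created a User Access Token on the Hub (the token below is a placeholder):

```python
# Authenticate with the Hugging Face Hub from Python.
# Create your own User Access Token at https://huggingface.co/settings/tokens.
from huggingface_hub import login

login(token="hf_XXXXXXXXXXXXXXXX")  # placeholder token, replace with yours
```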


To develop locally, run the following commands:
```bash
git clone https://github.com/Unbabel/COMET
cd COMET
pip install poetry
poetry install
```

For development, you can run the CLI tools directly, e.g.,

```bash
PYTHONPATH=. ./comet/cli/score.py
```

# Scoring MT outputs:

## CLI Usage:

Test examples:

```bash
echo -e "10 到 15 分钟可以送到吗\nPode ser entregue dentro de 10 a 15 minutos?" >> src.txt
echo -e "Can I receive my food in 10 to 15 minutes?\nCan it be delivered in 10 to 15 minutes?" >> hyp1.txt
echo -e "Can it be delivered within 10 to 15 minutes?\nCan you send it for 10 to 15 minutes?" >> hyp2.txt
echo -e "Can it be delivered between 10 to 15 minutes?\nCan it be delivered between 10 to 15 minutes?" >> ref.txt
```

Basic scoring command:
```bash
comet-score -s src.txt -t hyp1.txt -r ref.txt
```
> You can set the number of GPUs using `--gpus` (0 to run on CPU).

For better error analysis, you can use XCOMET models such as [`Unbabel/XCOMET-XL`](https://huggingface.co/Unbabel/XCOMET-XL) and export the identified errors using the `--to_json` flag:

```bash
comet-score -s src.txt -t hyp1.txt -r ref.txt --model Unbabel/XCOMET-XL --to_json output.json
```

Scoring multiple systems:
```bash
comet-score -s src.txt -t hyp1.txt hyp2.txt -r ref.txt
```

WMT test sets via [SacreBLEU](https://github.com/mjpost/sacrebleu):

```bash
comet-score -d wmt22:en-de -t PATH/TO/TRANSLATIONS
```

If you are only interested in a system-level score, use the following command:

```bash
comet-score -s src.txt -t hyp1.txt -r ref.txt --quiet --only_system
```

### Reference-free evaluation:

```bash
comet-score -s src.txt -t hyp1.txt --model Unbabel/wmt22-cometkiwi-da
```

**Note:** To use `Unbabel/wmt23-cometkiwi-da-xl` you first have to acknowledge its license on the [Hugging Face Hub](https://huggingface.co/Unbabel/wmt23-cometkiwi-da-xl).

### Comparing multiple systems:

When comparing multiple MT systems, we encourage you to run the `comet-compare` command to get **statistical significance** with a paired t-test and bootstrap resampling [(Koehn, 2004)](https://aclanthology.org/W04-3250/).

```bash
comet-compare -s src.de -t hyp1.en hyp2.en hyp3.en -r ref.en
```
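
For intuition, here is a minimal sketch of bootstrap resampling over segment-level scores in the spirit of Koehn (2004). This illustrates the technique only; it is not the `comet-compare` implementation:

```python
import random

def bootstrap_win_rate(scores_a, scores_b, n_resamples=1000, seed=42):
    """Estimate how often system A outscores system B when segments
    are resampled with replacement (illustration of the technique)."""
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample segment indices
        mean_a = sum(scores_a[i] for i in idx) / n
        mean_b = sum(scores_b[i] for i in idx) / n
        wins += mean_a > mean_b
    return wins / n_resamples

# Made-up segment-level scores for two systems:
print(bootstrap_win_rate([0.82, 0.74, 0.90, 0.65], [0.79, 0.70, 0.88, 0.69]))
```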

### Minimum Bayes Risk Decoding:

The MBR command allows you to rank translations and select the best one according to COMET metrics. For more details, you can read our paper on [Quality-Aware Decoding for Neural Machine Translation](https://aclanthology.org/2022.naacl-main.100.pdf).


```bash
comet-mbr -s [SOURCE].txt -t [MT_SAMPLES].txt --num_sample [X] -o [OUTPUT_FILE].txt
```

If you are working with a very large candidate list, you can use the `--rerank_top_k` flag to prune the list to the top-k most promising candidates according to a reference-free metric.

Example for a candidate list of 1000 samples:

```bash
comet-mbr -s [SOURCE].txt -t [MT_SAMPLES].txt -o [OUTPUT_FILE].txt --num_sample 1000 --rerank_top_k 100 --gpus 4 --qe_model Unbabel/wmt23-cometkiwi-da-xl
```

Your source and samples files should be [formatted in this way](https://unbabel.github.io/COMET/html/running.html#:~:text=Example%20with%202%20source%20and%203%20samples%3A).
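
As an illustration of that layout, a small sketch that writes 2 sources with 3 samples each, with each source's candidates on consecutive lines; see the linked docs for the authoritative description:

```python
# Illustrative layout only; the linked docs are authoritative.
sources = ["10 到 15 分钟可以送到吗", "Pode ser entregue dentro de 10 a 15 minutos?"]
samples = [  # num_sample = 3 candidates per source, grouped consecutively
    ["Can it be delivered in 10 to 15 minutes?",
     "Can I receive my food in 10 to 15 minutes?",
     "Can it be delivered within 10 to 15 minutes?"],
    ["Can it be delivered between 10 to 15 minutes?",
     "Can you send it for 10 to 15 minutes?",
     "Can it be delivered in 10 to 15 minutes?"],
]

with open("source.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sources) + "\n")
with open("mt_samples.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(line for group in samples for line in group) + "\n")
# Then: comet-mbr -s source.txt -t mt_samples.txt --num_sample 3 -o best.txt
```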

# COMET Models

Within COMET, there are several evaluation models available. You can refer to the [MODELS](MODELS.md) page for a comprehensive list of all available models. Here is a concise list of the main reference-based and reference-free models:

- **Default Model:** [`Unbabel/wmt22-comet-da`](https://huggingface.co/Unbabel/wmt22-comet-da) - This model employs a reference-based regression approach and is built upon the XLM-R architecture. It has been trained on direct assessments from WMT17 to WMT20 and provides scores ranging from 0 to 1, where 1 signifies a perfect translation.
- **Reference-free Model:** [`Unbabel/wmt22-cometkiwi-da`](https://huggingface.co/Unbabel/wmt22-cometkiwi-da) - This reference-free model employs a regression approach and is built on top of InfoXLM. It has been trained using direct assessments from WMT17 to WMT20, as well as direct assessments from the MLQE-PE corpus. Similar to other models, it generates scores ranging from 0 to 1. For those interested, we also offer larger versions of this model: [`Unbabel/wmt23-cometkiwi-da-xl`](https://huggingface.co/Unbabel/wmt23-cometkiwi-da-xl) with 3.5 billion parameters and [`Unbabel/wmt23-cometkiwi-da-xxl`](https://huggingface.co/Unbabel/wmt23-cometkiwi-da-xxl) with 10.7 billion parameters.
- **eXplainable COMET (XCOMET):** [`Unbabel/XCOMET-XXL`](https://huggingface.co/Unbabel/XCOMET-XXL) - Our latest model is trained to identify error spans and assign a final quality score, resulting in an explainable neural metric. We offer this version in XXL with 10.7 billion parameters, as well as the XL variant with 3.5 billion parameters ([`Unbabel/XCOMET-XL`](https://huggingface.co/Unbabel/XCOMET-XL)). These models have demonstrated the highest correlation with MQM and are our best performing evaluation models.

Please be aware that different models may be subject to different licenses. To learn more, please refer to the [LICENSE.models](LICENSE.models.md) file and the model licenses sections.

If you intend to compare your results with papers published before 2022, it's likely that they used older evaluation models. In such cases, please refer to [`Unbabel/wmt20-comet-da`](https://huggingface.co/Unbabel/wmt20-comet-da) and [`Unbabel/wmt20-comet-qe-da`](https://huggingface.co/Unbabel/wmt20-comet-qe-da), which were the primary checkpoints used in previous versions (<2.0) of COMET.

Also, the [UniTE Metric](https://aclanthology.org/2022.acl-long.558/), developed by the NLP2CT Lab at the University of Macau and Alibaba Group, can be used directly through COMET; check [here for more details](https://huggingface.co/Unbabel/unite-mup).

## Interpreting Scores:

**New:** An excellent reference for learning how to interpret machine translation metrics is the analysis paper by Kocmi et al. (2024), available [at this link](https://arxiv.org/pdf/2401.06760.pdf).

When using COMET to evaluate machine translation, it's important to understand how to interpret the scores it produces.

In general, COMET models are trained to predict quality scores for translations. These scores are typically normalized using a [z-score transformation](https://simplypsychology.org/z-score.html) to account for individual differences among annotators. While the raw score itself does not have a direct interpretation, it is useful for ranking translations and systems according to their quality.
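
As a minimal sketch of that per-annotator z-normalization (the standard transformation, not COMET code):

```python
from statistics import mean, stdev

def z_normalize(raw_scores):
    """Z-score transform: center one annotator's raw scores on their own
    mean and scale by their own standard deviation."""
    mu, sigma = mean(raw_scores), stdev(raw_scores)
    return [(s - mu) / sigma for s in raw_scores]

# One annotator's hypothetical direct-assessment scores on a 0-100 scale:
print(z_normalize([70, 85, 60, 90, 75]))
```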

However, since 2022 we have introduced a new training approach that scales the scores between 0 and 1. This makes it easier to interpret the scores: a score close to 1 indicates a high-quality translation, while a score close to 0 indicates a translation that is no better than random chance. Also, with the introduction of the XCOMET models, we can now analyse which text spans are part of minor, major, or critical errors according to the MQM typology.

It's worth noting that when using COMET to compare the performance of two different translation systems, it's important to run the `comet-compare` command to obtain statistical significance measures. This command compares the output of two systems using a statistical hypothesis test, providing an estimate of the probability that the observed difference in scores between the systems is due to chance. This is an important step to ensure that any differences in scores between systems are statistically significant.

Overall, the added interpretability of scores in the latest COMET models, combined with the ability to assess statistical significance between systems using `comet-compare`, makes COMET a valuable tool for evaluating machine translation.

## Languages Covered:

All the models mentioned above are built on top of XLM-R variants, which cover the following languages:

Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.

**Thus, results for language pairs containing uncovered languages are unreliable!**

### COMET for African Languages:

If you are interested in COMET metrics for African languages, please visit [afriCOMET](https://github.com/masakhane-io/africomet).

## Scoring within Python:

```python
from comet import download_model, load_from_checkpoint

# Choose your model from Hugging Face Hub
model_path = download_model("Unbabel/XCOMET-XL")
# or for example:
# model_path = download_model("Unbabel/wmt22-comet-da")

# Load the model checkpoint:
model = load_from_checkpoint(model_path)

# Data must be in the following format:
data = [
    {
        "src": "10 到 15 分钟可以送到吗",
        "mt": "Can I receive my food in 10 to 15 minutes?",
        "ref": "Can it be delivered between 10 to 15 minutes?"
    },
    {
        "src": "Pode ser entregue dentro de 10 a 15 minutos?",
        "mt": "Can you send it for 10 to 15 minutes?",
        "ref": "Can it be delivered between 10 to 15 minutes?"
    }
]
# Call predict method:
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
print(model_output.scores) # sentence-level scores
print(model_output.system_score) # system-level score

# Not all COMET models return metadata with detected errors.
print(model_output.metadata.error_spans) # detected error spans
```
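
For reference-free scoring from Python, the same API applies. A minimal sketch using `Unbabel/wmt22-cometkiwi-da`, where each entry needs only `src` and `mt` (the model's license must be acknowledged on the Hub first):

```python
from comet import download_model, load_from_checkpoint

# Reference-free (QE) scoring: no "ref" field is needed.
model = load_from_checkpoint(download_model("Unbabel/wmt22-cometkiwi-da"))
data = [
    {"src": "10 到 15 分钟可以送到吗", "mt": "Can I receive my food in 10 to 15 minutes?"},
    {"src": "Pode ser entregue dentro de 10 a 15 minutos?", "mt": "Can you send it for 10 to 15 minutes?"},
]
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output.system_score)
```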

# Train your own Metric: 

Instead of using pretrained models, you can train your own model with the following command:
```bash
comet-train --cfg configs/models/{your_model_config}.yaml
```

You can then use your own metric to score:

```bash
comet-score -s src.de -t hyp1.en -r ref.en --model PATH/TO/CHECKPOINT
```
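
The same checkpoint can also be loaded from Python; a minimal sketch, assuming `PATH/TO/CHECKPOINT` points at a checkpoint file produced by `comet-train`:

```python
from comet import load_from_checkpoint

# Load a locally trained checkpoint instead of downloading from the Hub.
model = load_from_checkpoint("PATH/TO/CHECKPOINT")  # placeholder path from above
data = [{"src": "...", "mt": "...", "ref": "..."}]   # fill in real sentences
print(model.predict(data, batch_size=8, gpus=0).scores)
```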

You can also upload your model to the [Hugging Face Hub](https://huggingface.co/docs/hub/index); use [`Unbabel/wmt22-comet-da`](https://huggingface.co/Unbabel/wmt22-comet-da) as an example. Then you can use your model directly from the hub.

# unittest:
To run the toolkit tests, run the following commands:

```bash
poetry run coverage run --source=comet -m unittest discover
poetry run coverage report -m # Expected coverage 76%
```

**Note:** Testing on CPU takes a long time.

# Publications

If you use COMET, please cite our work **and don't forget to say which model you used!**

- [xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection](https://arxiv.org/pdf/2310.10482.pdf)

- [Scaling up CometKiwi: Unbabel-IST 2023 Submission for the Quality Estimation Shared Task](https://arxiv.org/pdf/2309.11925.pdf)

- [CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task](https://aclanthology.org/2022.wmt-1.60/)

- [COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task](https://aclanthology.org/2022.wmt-1.52/)

- [Searching for Cometinho: The Little Metric That Could](https://aclanthology.org/2022.eamt-1.9/)

- [Are References Really Needed? Unbabel-IST 2021 Submission for the Metrics Shared Task](https://aclanthology.org/2021.wmt-1.111/)

- [Uncertainty-Aware Machine Translation Evaluation](https://aclanthology.org/2021.findings-emnlp.330/) 

- [COMET - Deploying a New State-of-the-art MT Evaluation Metric in Production](https://www.aclweb.org/anthology/2020.amta-user.4)

- [Unbabel's Participation in the WMT20 Metrics Shared Task](https://aclanthology.org/2020.wmt-1.101/)

- [COMET: A Neural Framework for MT Evaluation](https://www.aclweb.org/anthology/2020.emnlp-main.213)

            
