jury

Name: jury
Version: 2.3.1
Home page: https://github.com/obss/jury
Summary: Evaluation toolkit for neural language generation.
Upload time: 2024-05-20 08:28:12
Requires Python: >=3.8
License: MIT
Keywords: machine-learning, deep-learning, ml, pytorch, nlp, evaluation, question-answering, question-generation

<h1 align="center">Jury</h1>

<p align="center">
<a href="https://pypi.org/project/jury"><img src="https://img.shields.io/pypi/pyversions/jury" alt="Python versions"></a>
<a href="https://pepy.tech/project/jury"><img src="https://pepy.tech/badge/jury" alt="downloads"></a>
<a href="https://pypi.org/project/jury"><img src="https://img.shields.io/pypi/v/jury?color=blue" alt="PyPI version"></a>
<a href="https://github.com/obss/jury/releases/latest"><img alt="Latest Release" src="https://img.shields.io/github/release-date/obss/jury"></a>
<a href="https://colab.research.google.com/github/obss/jury/blob/main/examples/jury_evaluate.ipynb" target="_blank"><img alt="Open in Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a>
<br>
<a href="https://github.com/obss/jury/actions"><img alt="Build status" src="https://github.com/obss/jury/actions/workflows/ci.yml/badge.svg"></a>
<a href="https://libraries.io/pypi/jury"><img alt="Dependencies" src="https://img.shields.io/librariesio/github/obss/jury"></a>
<a href="https://github.com/psf/black"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
<a href="https://github.com/obss/jury/blob/main/LICENSE"><img alt="License: MIT" src="https://img.shields.io/pypi/l/jury"></a>
<br>
<a href="https://doi.org/10.48550/arXiv.2310.02040"><img src="https://img.shields.io/badge/DOI-10.48550%2FarXiv.2310.02040-blue" alt="DOI"></a>
</p>

A comprehensive toolkit for evaluating NLP experiments, offering various automated metrics with a smooth, easy-to-use interface. For the underlying metric computation it builds on a more advanced version of the [evaluate](https://github.com/huggingface/evaluate/) design, so adding a custom metric is as easy as extending the proper class.

Main advantages that Jury offers are:

- Easy to use for any NLP project.
- Unified structure for computation input across all metrics.
- Calculate many metrics at once (see the sketch after this list).
- Metrics calculations can be handled concurrently to save processing time.
- It seamlessly supports evaluation for multiple predictions/multiple references.
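
The sketch below combines these points using the API from the Usage section: several metrics are computed in a single call over the same unified prediction/reference structure.

```python
from jury import Jury

# Several metrics computed in one call over a unified input structure;
# each example may carry one or more predictions and references.
scorer = Jury(metrics=["bleu", "meteor", "rouge"])
predictions = [["the cat is on the mat"], ["Look! a wonderful day."]]
references = [
    ["the cat is playing on the mat.", "The cat plays on the mat."],
    ["Today is a wonderful day"],
]
scores = scorer(predictions=predictions, references=references)
```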

To see more, check the [official Jury blog post](https://medium.com/codable/jury-evaluating-performance-of-nlg-models-730eb9c9999f).

## 🔥 News

* (2023.10.03) The Jury paper is out and currently available on [arXiv](https://arxiv.org/abs/2310.02040). Please cite this paper if your work uses Jury and your publication material is submitted to a venue after this date.
* (2023.07.30) **Public notice:** You can reach our official [Public Notice](https://docs.google.com/document/d/1mFFT0cR8BUHKJki8mAg6b36QhmsRxvKR3pwOlcxbnss/edit?usp=sharing) document that poses a claim about plagiarism of the work, *jury*, presented in this codebase.

## Available Metrics

The table below shows the current support status for available metrics.

| Metric                                                                        | Jury Support       | HF/evaluate Support |
|-------------------------------------------------------------------------------|--------------------|---------------------|
| Accuracy-Numeric                                                              | :heavy_check_mark: | :white_check_mark:  |
| Accuracy-Text                                                                 | :heavy_check_mark: | :x:                 |
| Bartscore                                                                     | :heavy_check_mark: | :x:                 |
| Bertscore                                                                     | :heavy_check_mark: | :white_check_mark:  |
| Bleu                                                                          | :heavy_check_mark: | :white_check_mark:  |
| Bleurt                                                                        | :heavy_check_mark: | :white_check_mark:  |
| CER                                                                           | :heavy_check_mark: | :white_check_mark:  |
| CHRF                                                                          | :heavy_check_mark: | :white_check_mark:  |
| COMET                                                                         | :heavy_check_mark: | :white_check_mark:  |
| F1-Numeric                                                                    | :heavy_check_mark: | :white_check_mark:  |
| F1-Text                                                                       | :heavy_check_mark: | :x:                 |
| METEOR                                                                        | :heavy_check_mark: | :white_check_mark:  |
| Precision-Numeric                                                             | :heavy_check_mark: | :white_check_mark:  |
| Precision-Text                                                                | :heavy_check_mark: | :x:                 |
| Prism                                                                         | :heavy_check_mark: | :x:                 |
| Recall-Numeric                                                                | :heavy_check_mark: | :white_check_mark:  |
| Recall-Text                                                                   | :heavy_check_mark: | :x:                 |
| ROUGE                                                                         | :heavy_check_mark: | :white_check_mark:  |
| SacreBleu                                                                     | :heavy_check_mark: | :white_check_mark:  |
| Seqeval                                                                       | :heavy_check_mark: | :white_check_mark:  |
| Squad                                                                         | :heavy_check_mark: | :white_check_mark:  |
| TER                                                                           | :heavy_check_mark: | :white_check_mark:  |
| WER                                                                           | :heavy_check_mark: | :white_check_mark:  |
| [Other metrics](https://github.com/huggingface/evaluate/tree/master/metrics)* | :white_check_mark: | :white_check_mark:  |

_*_ Placeholder for the remaining metrics available in the `evaluate` package that are not listed in the table above.

**Notes**

* The entry :heavy_check_mark: indicates full Jury support, meaning that all combinations of input types (single prediction & single reference, single prediction & multiple references, multiple predictions & multiple references) are supported, as sketched below.

* The entry :white_check_mark: indicates that the metric is supported by Jury through `evaluate`, so it can (and should) be used just like the corresponding `evaluate` metric as described in the `evaluate` implementation, although full Jury support for these metrics is not yet available.
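
A minimal sketch of the three input combinations with a fully supported metric; the nested forms follow the Usage examples below, while the flat single-prediction & single-reference form is assumed here to be the simplest accepted layout:

```python
from jury import Jury

scorer = Jury(metrics=["bleu"])

# single prediction & single reference (assumed flat form)
scorer(predictions=["the cat is on the mat"],
       references=["the cat sat on the mat"])

# single prediction & multiple references
scorer(predictions=["the cat is on the mat"],
       references=[["the cat sat on the mat", "a cat is sitting on the mat"]])

# multiple predictions & multiple references
scorer(predictions=[["the cat is on the mat", "a cat sits on the mat"]],
       references=[["the cat sat on the mat", "a cat is sitting on the mat"]])
```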

## Request for a New Metric

To request a new metric, please [open an issue](https://github.com/obss/jury/issues/new?assignees=&labels=&template=new-metric.md&title=) providing the minimum required information. PRs adding support for new metrics are also welcome :).

## <div align="center"> Installation </div>

Through pip,

    pip install jury

or build from source,

    git clone https://github.com/obss/jury.git
    cd jury
    python setup.py install

**NOTE:** On Windows machines, some metrics that depend on the `sacrebleu` package may malfunction, mainly due to the `pywin32` package. For this reason, we pinned the `pywin32` version in our setup configuration for Windows platforms. However, if `pywin32` still causes trouble in your environment, we strongly recommend installing it with the `conda` manager instead: `conda install pywin32`.

## <div align="center"> Usage </div>

### API Usage

Evaluating generated outputs takes only two lines of code.

```python
from jury import Jury

scorer = Jury()
predictions = [
    ["the cat is on the mat", "There is cat playing on the mat"], 
    ["Look!    a wonderful day."]
]
references = [
    ["the cat is playing on the mat.", "The cat plays on the mat."], 
    ["Today is a wonderful day", "The weather outside is wonderful."]
]
scores = scorer(predictions=predictions, references=references)
```

Specify the metrics you want to use on instantiation.

```python
scorer = Jury(metrics=["bleu", "meteor"])
scores = scorer(predictions, references)
```

#### Use of Metrics standalone

You can directly import metric classes from `jury.metrics`, then instantiate and use them as desired.

```python
from jury.metrics import Bleu

bleu = Bleu.construct()
score = bleu.compute(predictions=predictions, references=references)
```

Additional parameters can be specified either on `compute()`,

```python
from jury.metrics import Bleu

bleu = Bleu.construct()
score = bleu.compute(predictions=predictions, references=references, max_order=4)
```

or, alternatively, on instantiation.

```python
from jury.metrics import Bleu
bleu = Bleu.construct(compute_kwargs={"max_order": 1})
score = bleu.compute(predictions=predictions, references=references)
```

Note that you can seamlessly access both `jury` and `evaluate` metrics through `jury.load_metric`.

```python
import jury

bleu = jury.load_metric("bleu")
bleu_1 = jury.load_metric("bleu", resulting_name="bleu_1", compute_kwargs={"max_order": 1})
# metrics not available in `jury` but present in `evaluate`
competition_math = jury.load_metric("competition_math")  # falls back to the `evaluate` package with a warning
```
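
Metric objects obtained this way can then be passed to `Jury` to evaluate several configurations at once. A minimal sketch, assuming the `metrics` argument also accepts metric objects:

```python
import jury
from jury import Jury

metrics = [
    jury.load_metric("bleu", resulting_name="bleu_1", compute_kwargs={"max_order": 1}),
    jury.load_metric("bleu", resulting_name="bleu_2", compute_kwargs={"max_order": 2}),
    jury.load_metric("meteor"),
]
scorer = Jury(metrics=metrics)
# `predictions` and `references` as in the API Usage example above
scores = scorer(predictions=predictions, references=references)
```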

### CLI Usage

You can pass a predictions file and a references file and get the resulting scores; lines are paired by position in the two files. You can optionally provide a reduce function and an export path for the results to be written.

    jury eval --predictions /path/to/predictions.txt --references /path/to/references.txt --reduce_fn max --export /path/to/export.txt
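
The expected file layout follows from the line pairing above: one example per line, so that line *i* of the predictions file is scored against line *i* of the references file. A minimal sketch for preparing such files:

```python
# One example per line; line i of predictions.txt is paired with line i of references.txt.
predictions = ["the cat is on the mat", "Look! a wonderful day."]
references = ["the cat is playing on the mat.", "Today is a wonderful day"]

with open("predictions.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(predictions))
with open("references.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(references))
```

    jury eval --predictions predictions.txt --references references.txt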

You can also provide a predictions folder and a references folder to evaluate multiple experiments. In this setup, however, each prediction file and its corresponding reference file must have the same file name; files with matching names are paired for evaluation.

    jury eval --predictions /path/to/predictions_folder --references /path/to/references_folder --reduce_fn max --export /path/to/export.txt
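
For example, a layout like the following (hypothetical file names) would evaluate two experiments, pairing the files with matching names:

    predictions_folder/
        exp1.txt
        exp2.txt
    references_folder/
        exp1.txt
        exp2.txt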

If you want to specify metrics rather than use the defaults, list them under the `metrics` key of a JSON config file.

```json
{
  "predictions": "/path/to/predictions.txt",
  "references": "/path/to/references.txt",
  "reduce_fn": "max",
  "metrics": [
    "bleu",
    "meteor"
  ]
}
```

Then, you can call `jury eval` with the `config` argument.

    jury eval --config path/to/config.json

### Custom Metrics

You can implement custom metrics by inheriting from `jury.metrics.Metric`; the metrics currently implemented in Jury can be found under [jury/metrics](https://github.com/obss/jury/tree/master/jury/metrics). Jury falls back to the `evaluate` implementation for metrics it does not yet support; the metrics available in `evaluate` are listed at [evaluate/metrics](https://github.com/huggingface/evaluate/tree/master/metrics).

Jury itself uses `evaluate.Metric` as a base class to derive its own base class, `jury.metrics.Metric`. The interface is similar; however, Jury makes metrics take a unified input type by handling the inputs for each metric, and supports several input types:

- single prediction & single reference
- single prediction & multiple reference
- multiple prediction & multiple reference

Either base class can be used for a custom metric; however, we strongly recommend `jury.metrics.Metric`, as it has several advantages such as supporting computations for the input types above and unifying the input type.

```python
from jury.metrics import MetricForTask

class CustomMetric(MetricForTask):
    def _compute_single_pred_single_ref(
        self, predictions, references, reduce_fn = None, **kwargs
    ):
        raise NotImplementedError

    def _compute_single_pred_multi_ref(
        self, predictions, references, reduce_fn = None, **kwargs
    ):
        raise NotImplementedError

    def _compute_multi_pred_multi_ref(
            self, predictions, references, reduce_fn = None, **kwargs
    ):
        raise NotImplementedError
```
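
For instance, a minimal (hypothetical) exact-match metric could fill in these hooks roughly as follows. `MetricForTask` is the placeholder base-class name from the skeleton above, and the returned dictionary layout is an assumption rather than a fixed contract:

```python
import numpy as np

from jury.metrics import MetricForTask  # placeholder base class from the skeleton above


class ExactMatch(MetricForTask):
    def _compute_single_pred_single_ref(self, predictions, references, reduce_fn=None, **kwargs):
        # One prediction scored against one reference per example.
        scores = [float(pred == ref) for pred, ref in zip(predictions, references)]
        return {"score": float(np.mean(scores))}

    def _compute_single_pred_multi_ref(self, predictions, references, reduce_fn=None, **kwargs):
        # Reduce over each example's references (e.g. max), then average over examples.
        reduce_fn = reduce_fn or np.max
        scores = [
            float(reduce_fn([float(pred == ref) for ref in refs]))
            for pred, refs in zip(predictions, references)
        ]
        return {"score": float(np.mean(scores))}

    def _compute_multi_pred_multi_ref(self, predictions, references, reduce_fn=None, **kwargs):
        # Reduce over each example's predictions as well, then average over examples.
        reduce_fn = reduce_fn or np.max
        scores = [
            float(reduce_fn([float(reduce_fn([float(pred == ref) for ref in refs])) for pred in preds]))
            for preds, refs in zip(predictions, references)
        ]
        return {"score": float(np.mean(scores))}
```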

For more details, have a look at the base metric implementation, [jury.metrics.Metric](./jury/metrics/_base.py).

## <div align="center"> Contributing </div>

PRs are welcomed as always :)

### Installation

    git clone https://github.com/obss/jury.git
    cd jury
    pip install -e ".[dev]"

You also need to install the packages that are only available from git sources separately, with the following command. For those curious about "why?": a short explanation is that PyPI does not allow indexing a package that directly depends on non-PyPI packages, for security reasons. The file `requirements-dev.txt` contains packages that are currently only available from a git source, or PyPI packages with no recent release or with releases incompatible with Jury, so they are added as git sources or pinned to specific commits.

    pip install -r requirements-dev.txt

### Tests

To run the tests,

    python tests/run_tests.py

### Code Style

To check code style,

    python tests/run_code_style.py check

To format the codebase,

    python tests/run_code_style.py format


## <div align="center"> Citation </div>

If you use this package in your work, please cite it as:

    @misc{cavusoglu2023jury,
      title={Jury: A Comprehensive Evaluation Toolkit}, 
      author={Devrim Cavusoglu and Ulas Sert and Secil Sen and Sinan Altinuc},
      year={2023},
      eprint={2310.02040},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      doi={10.48550/arXiv.2310.02040}
    }

## Community Interaction

We use the GitHub Issue Tracker to track issues in general. Issues can be bug reports, feature requests, or requests for a new metric. Please refer to the related issue template when opening new issues.

| Issue Type                     | Location                                                                                             |
|--------------------------------|------------------------------------------------------------------------------------------------------|
| Bug Report                     | [Bug Report Template](https://github.com/obss/jury/issues/new?assignees=&labels=&projects=&template=bug_report.md&title=) |
| New Metric Request             | [Request Metric Implementation](https://github.com/obss/jury/issues/new?assignees=&labels=&projects=&template=new-metric.md&title=) |
| All other issues and questions | [General Issues](https://github.com/obss/jury/issues/new)                                                            |

## <div align="center"> License </div>

Licensed under the [MIT](LICENSE) License.

            
