# evaluate

- **Name**: evaluate
- **Version**: 0.4.3
- **Home page**: https://github.com/huggingface/evaluate
- **Summary**: HuggingFace community-driven open-source library of evaluation
- **Upload time**: 2024-09-11 10:15:32
- **Maintainer**: None
- **Docs URL**: None
- **Author**: HuggingFace Inc.
- **Requires Python**: >=3.8.0
- **License**: Apache 2.0
- **Keywords**: metrics, machine learning, evaluate, evaluation

<p align="center">
    <br>
    <img src="https://huggingface.co/datasets/evaluate/media/resolve/main/evaluate-banner.png" width="400"/>
    <br>
</p>

<p align="center">
    <a href="https://github.com/huggingface/evaluate/actions/workflows/ci.yml?query=branch%3Amain">
        <img alt="Build" src="https://github.com/huggingface/evaluate/actions/workflows/ci.yml/badge.svg?branch=main">
    </a>
    <a href="https://github.com/huggingface/evaluate/blob/master/LICENSE">
        <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/evaluate.svg?color=blue">
    </a>
    <a href="https://huggingface.co/docs/evaluate/index">
        <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/evaluate/index.svg?down_color=red&down_message=offline&up_message=online">
    </a>
    <a href="https://github.com/huggingface/evaluate/releases">
        <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/evaluate.svg">
    </a>
    <a href="CODE_OF_CONDUCT.md">
        <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg">
    </a>
</p>

🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized. 

It currently contains:

- **implementations of dozens of popular metrics**: the existing metrics cover a variety of tasks, spanning NLP to computer vision, and include dataset-specific metrics. With a simple command like `accuracy = load("accuracy")`, you get any of these metrics ready to evaluate an ML model in any framework (NumPy/Pandas/PyTorch/TensorFlow/JAX); see the short example after this list.
- **comparisons and measurements**: comparisons are used to measure the difference between models, and measurements are tools to evaluate datasets.
- **an easy way of adding new evaluation modules to the 🤗 Hub**: you can create new evaluation modules and push them to a dedicated Space on the 🤗 Hub with `evaluate-cli create [metric name]`, which allows you to easily compare different metrics and their outputs for the same sets of references and predictions.
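
For example, a minimal sketch of the workflow for a classification metric (the printed value is illustrative):

```python
import evaluate

# Load the accuracy metric from the Hub.
accuracy = evaluate.load("accuracy")

# Compute it on a small set of references and predictions.
results = accuracy.compute(references=[0, 1, 1, 0], predictions=[0, 1, 0, 0])
print(results)  # e.g. {'accuracy': 0.75}
```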

[🎓 **Documentation**](https://huggingface.co/docs/evaluate/)

🔎 **Find a [metric](https://huggingface.co/evaluate-metric), [comparison](https://huggingface.co/evaluate-comparison), [measurement](https://huggingface.co/evaluate-measurement) on the Hub**
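
Comparisons and measurements are loaded the same way as metrics, via the `module_type` argument; a small, hedged sketch using two community modules from the Hub (`mcnemar` and `word_length`, which may pull in extra dependencies such as `nltk`):

```python
import evaluate

# A comparison contrasts two sets of model predictions against shared references.
mcnemar = evaluate.load("mcnemar", module_type="comparison")

# A measurement describes properties of a dataset rather than model quality.
word_length = evaluate.load("word_length", module_type="measurement")
```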

[🌟 **Add a new evaluation module**](https://huggingface.co/docs/evaluate/)

🤗 Evaluate also has lots of useful features like:

- **Type checking**: the input types are checked to make sure that you are using the right input formats for each metric.
- **Metric cards**: each metric comes with a card that describes its values, limitations, and ranges, as well as examples of its usage and usefulness.
- **Community metrics:** Metrics live on the Hugging Face Hub and you can easily add your own metrics for your project or to collaborate with others.


# Installation

## With pip

🤗 Evaluate can be installed from PyPI and has to be installed in a virtual environment (venv or conda, for instance):

```bash
pip install evaluate
```

# Usage

🤗 Evaluate's main methods are:

- `evaluate.list_evaluation_modules()` to list the available metrics, comparisons and measurements
- `evaluate.load(module_name, **kwargs)` to instantiate an evaluation module
- `results = module.compute(**kwargs)` to compute the result of an evaluation module (see the sketch after this list)
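
Putting the three together, a minimal sketch (the `exact_match` module is just one example; printed values are illustrative):

```python
import evaluate

# 1. Discover what is available (optionally filtered by module type).
metrics = evaluate.list_evaluation_modules(module_type="metric")
print(f"{len(metrics)} metrics available")

# 2. Instantiate a module by name.
exact_match = evaluate.load("exact_match")

# 3. Compute a result by passing predictions and references as keyword arguments.
results = exact_match.compute(
    predictions=["hello world", "foo bar"],
    references=["hello world", "foo baz"],
)
print(results)  # e.g. {'exact_match': 0.5}
```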

# Adding a new evaluation module

First install the necessary dependencies to create a new metric with the following command:
```bash
pip install evaluate[template]
```
Then you can get started with the following command which will create a new folder for your metric and display the necessary steps:
```bash
evaluate-cli create "Awesome Metric"
```
See this [step-by-step guide](https://huggingface.co/docs/evaluate/creating_and_sharing) in the documentation for detailed instructions.
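
Under the hood, an evaluation module is a small Python class that implements `_info` and `_compute`. A rough, hedged sketch of what such a module can look like (the class name, score name, and feature types here are illustrative, not the exact generated template):

```python
import datasets
import evaluate


class AwesomeMetric(evaluate.Metric):
    """Toy metric: fraction of predictions that exactly equal their references."""

    def _info(self):
        # Describes the module and the input features it expects.
        return evaluate.MetricInfo(
            description="Toy exact-match style metric.",
            citation="",
            inputs_description="Integer label predictions and references.",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("int64"),
                    "references": datasets.Value("int64"),
                }
            ),
        )

    def _compute(self, predictions, references):
        # Called by .compute() once the inputs have been gathered and type-checked.
        correct = sum(p == r for p, r in zip(predictions, references))
        return {"awesome_score": correct / len(references)}
```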

## Credits

Thanks to [@marella](https://github.com/marella) for letting us use the `evaluate` namespace on PyPI, previously used by his [library](https://github.com/marella/evaluate).



            
