evaluate


- **Name:** evaluate
- **Version:** 0.4.2
- **Home page:** https://github.com/huggingface/evaluate
- **Summary:** HuggingFace community-driven open-source library of evaluation
- **Upload time:** 2024-04-30 09:44:19
- **Maintainer:** None
- **Docs URL:** None
- **Author:** HuggingFace Inc.
- **Requires Python:** >=3.8.0
- **License:** Apache 2.0
- **Keywords:** metrics, machine learning, evaluate, evaluation
- **Requirements:** No requirements were recorded.
- **Travis-CI:** No Travis.
- **Coveralls test coverage:** No coveralls.
            <p align="center">
    <br>
    <img src="https://huggingface.co/datasets/evaluate/media/resolve/main/evaluate-banner.png" width="400"/>
    <br>
</p>

<p align="center">
    <a href="https://github.com/huggingface/evaluate/actions/workflows/ci.yml?query=branch%3Amain">
        <img alt="Build" src="https://github.com/huggingface/evaluate/actions/workflows/ci.yml/badge.svg?branch=main">
    </a>
    <a href="https://github.com/huggingface/evaluate/blob/master/LICENSE">
        <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/evaluate.svg?color=blue">
    </a>
    <a href="https://huggingface.co/docs/evaluate/index">
        <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/evaluate/index.svg?down_color=red&down_message=offline&up_message=online">
    </a>
    <a href="https://github.com/huggingface/evaluate/releases">
        <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/evaluate.svg">
    </a>
    <a href="CODE_OF_CONDUCT.md">
        <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg">
    </a>
</p>

🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized. 

It currently contains:

- **implementations of dozens of popular metrics**: the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics. With a simple command like `accuracy = load("accuracy")`, any of these metrics is ready to use for evaluating an ML model in any framework (NumPy/Pandas/PyTorch/TensorFlow/JAX); see the sketch after this list.
- **comparisons and measurements**: comparisons are used to measure the difference between models, and measurements are tools to evaluate datasets.
- **an easy way of adding new evaluation modules to the 🤗 Hub**: you can create new evaluation modules and push them to a dedicated Space in the 🤗 Hub with `evaluate-cli create [metric name]`, which lets you easily compare different metrics and their outputs for the same sets of references and predictions.
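
For example, a minimal sketch of loading one of these metrics and computing it on a toy set of predictions (the `accuracy` module is a metric hosted on the Hub; the values below are made up):

```python
import evaluate

# Download and instantiate the accuracy metric from the Hub
accuracy = evaluate.load("accuracy")

# Score a toy set of predictions against references
results = accuracy.compute(predictions=[1, 0, 0, 1], references=[0, 1, 0, 1])
print(results)  # e.g. {'accuracy': 0.5}
```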

[🎓 **Documentation**](https://huggingface.co/docs/evaluate/)

🔎 **Find a [metric](https://huggingface.co/evaluate-metric), [comparison](https://huggingface.co/evaluate-comparison), [measurement](https://huggingface.co/evaluate-measurement) on the Hub**

[🌟 **Add a new evaluation module**](https://huggingface.co/docs/evaluate/)

🤗 Evaluate also has lots of useful features like:

- **Type checking**: the input types are checked to make sure that you are using the right input format for each metric.
- **Metric cards**: each metric comes with a card that describes its values, limitations, and their ranges, as well as providing examples of its usage and usefulness.
- **Community metrics:** Metrics live on the Hugging Face Hub and you can easily add your own metrics for your project or to collaborate with others.


# Installation

## With pip

🤗 Evaluate can be installed from PyPI and should be installed in a virtual environment (venv or conda, for instance):

```bash
pip install evaluate
```
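
If you are not already working inside a virtual environment, a minimal setup using Python's built-in `venv` module might look like this (the environment name `.env` is just an example):

```bash
# create and activate a virtual environment, then install the library
python -m venv .env
source .env/bin/activate
pip install evaluate
```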

# Usage

🤗 Evaluate's main methods are:

- `evaluate.list_evaluation_modules()` to list the available metrics, comparisons and measurements
- `evaluate.load(module_name, **kwargs)` to instantiate an evaluation module
- `results = module.compute(**kwargs)` to compute the result of an evaluation module (see the sketch below)
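
A minimal sketch combining these methods; it uses the `accuracy` metric and the batch-wise `add_batch`/`compute` pattern, which is convenient inside an evaluation loop:

```python
import evaluate

# List a few of the available metrics (use module_type="comparison" or
# module_type="measurement" for the other kinds of modules)
print(evaluate.list_evaluation_modules(module_type="metric")[:5])

# Load a metric and accumulate predictions/references batch by batch,
# e.g. while iterating over an evaluation dataloader
accuracy = evaluate.load("accuracy")
for predictions, references in [([0, 1], [0, 1]), ([1, 0], [0, 0])]:
    accuracy.add_batch(predictions=predictions, references=references)

# Compute the final result over everything that was added
print(accuracy.compute())  # e.g. {'accuracy': 0.75}
```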

# Adding a new evaluation module

First install the necessary dependencies to create a new metric with the following command:
```bash
pip install evaluate[template]
```
Then you can get started with the following command, which will create a new folder for your metric and display the necessary steps:
```bash
evaluate-cli create "Awesome Metric"
```
See this [step-by-step guide](https://huggingface.co/docs/evaluate/creating_and_sharing) in the documentation for detailed instructions.
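
For reference, the script generated by the template roughly follows the structure sketched below; the class name `AwesomeMetric`, the `awesome_score` key, and the feature types are placeholders to adapt to your metric:

```python
import datasets
import evaluate


class AwesomeMetric(evaluate.Metric):
    def _info(self):
        # Describe the module and the input format it expects
        return evaluate.MetricInfo(
            description="Computes an awesome score from predictions and references.",
            citation="",
            inputs_description="Lists of integer predictions and references.",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("int64"),
                    "references": datasets.Value("int64"),
                }
            ),
        )

    def _compute(self, predictions, references):
        # Replace this with the actual metric logic
        score = sum(p == r for p, r in zip(predictions, references)) / len(references)
        return {"awesome_score": score}
```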

## Credits

Thanks to [@marella](https://github.com/marella) for letting us use the `evaluate` namespace on PyPI, previously used by his [library](https://github.com/marella/evaluate).



            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/huggingface/evaluate",
    "name": "evaluate",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8.0",
    "maintainer_email": null,
    "keywords": "metrics machine learning evaluate evaluation",
    "author": "HuggingFace Inc.",
    "author_email": "leandro@huggingface.co",
    "download_url": "https://files.pythonhosted.org/packages/a5/97/5a5261a51545910ec471d3e143b74d932633320b3e3d810d838cddf440ed/evaluate-0.4.2.tar.gz",
    "platform": null,
    "description": "<p align=\"center\">\n    <br>\n    <img src=\"https://huggingface.co/datasets/evaluate/media/resolve/main/evaluate-banner.png\" width=\"400\"/>\n    <br>\n</p>\n\n<p align=\"center\">\n    <a href=\"https://github.com/huggingface/evaluate/actions/workflows/ci.yml?query=branch%3Amain\">\n        <img alt=\"Build\" src=\"https://github.com/huggingface/evaluate/actions/workflows/ci.yml/badge.svg?branch=main\">\n    </a>\n    <a href=\"https://github.com/huggingface/evaluate/blob/master/LICENSE\">\n        <img alt=\"GitHub\" src=\"https://img.shields.io/github/license/huggingface/evaluate.svg?color=blue\">\n    </a>\n    <a href=\"https://huggingface.co/docs/evaluate/index\">\n        <img alt=\"Documentation\" src=\"https://img.shields.io/website/http/huggingface.co/docs/evaluate/index.svg?down_color=red&down_message=offline&up_message=online\">\n    </a>\n    <a href=\"https://github.com/huggingface/evaluate/releases\">\n        <img alt=\"GitHub release\" src=\"https://img.shields.io/github/release/huggingface/evaluate.svg\">\n    </a>\n    <a href=\"CODE_OF_CONDUCT.md\">\n        <img alt=\"Contributor Covenant\" src=\"https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg\">\n    </a>\n</p>\n\n\ud83e\udd17 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized. \n\nIt currently contains:\n\n- **implementations of dozens of popular metrics**: the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics for datasets. With a simple command like `accuracy = load(\"accuracy\")`, get any of these metrics ready to use for evaluating a ML model in any framework (Numpy/Pandas/PyTorch/TensorFlow/JAX).\n- **comparisons and measurements**: comparisons are used to measure the difference between models and measurements are tools to evaluate datasets.\n- **an easy way of adding new evaluation modules to the \ud83e\udd17 Hub**: you can create new evaluation modules and push them to a dedicated Space in the \ud83e\udd17 Hub with `evaluate-cli create [metric name]`, which allows you to see easily compare different metrics and their outputs for the same sets of references and predictions.\n\n[\ud83c\udf93 **Documentation**](https://huggingface.co/docs/evaluate/)\n\n\ud83d\udd0e **Find a [metric](https://huggingface.co/evaluate-metric), [comparison](https://huggingface.co/evaluate-comparison), [measurement](https://huggingface.co/evaluate-measurement) on the Hub**\n\n[\ud83c\udf1f **Add a new evaluation module**](https://huggingface.co/docs/evaluate/)\n\n\ud83e\udd17 Evaluate also has lots of useful features like:\n\n- **Type checking**: the input types are checked to make sure that you are using the right input formats for each metric\n- **Metric cards**: each metrics comes with a card that describes the values, limitations and their ranges, as well as providing examples of their usage and usefulness.\n- **Community metrics:** Metrics live on the Hugging Face Hub and you can easily add your own metrics for your project or to collaborate with others.\n\n\n# Installation\n\n## With pip\n\n\ud83e\udd17 Evaluate can be installed from PyPi and has to be installed in a virtual environment (venv or conda for instance)\n\n```bash\npip install evaluate\n```\n\n# Usage\n\n\ud83e\udd17 Evaluate's main methods are:\n\n- `evaluate.list_evaluation_modules()` to list the available metrics, comparisons and measurements\n- `evaluate.load(module_name, **kwargs)` 
to instantiate an evaluation module\n- `results = module.compute(*kwargs)` to compute the result of an evaluation module\n\n# Adding a new evaluation module\n\nFirst install the necessary dependencies to create a new metric with the following command:\n```bash\npip install evaluate[template]\n```\nThen you can get started with the following command which will create a new folder for your metric and display the necessary steps:\n```bash\nevaluate-cli create \"Awesome Metric\"\n```\nSee this [step-by-step guide](https://huggingface.co/docs/evaluate/creating_and_sharing) in the documentation for detailed instructions.\n\n## Credits\n\nThanks to [@marella](https://github.com/marella) for letting us use the `evaluate` namespace on PyPi previously used by his [library](https://github.com/marella/evaluate).\n\n\n",
    "bugtrack_url": null,
    "license": "Apache 2.0",
    "summary": "HuggingFace community-driven open-source library of evaluation",
    "version": "0.4.2",
    "project_urls": {
        "Download": "https://github.com/huggingface/evaluate/tags",
        "Homepage": "https://github.com/huggingface/evaluate"
    },
    "split_keywords": [
        "metrics",
        "machine",
        "learning",
        "evaluate",
        "evaluation"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c2d6ff9baefc8fc679dcd9eb21b29da3ef10c81aa36be630a7ae78e4611588e1",
                "md5": "7f988c7e98398af1977bd001e37a67cb",
                "sha256": "5fdcaf8a086b075c2b8e2c5898f501224b020b0ac7d07be76536e47e661c0c65"
            },
            "downloads": -1,
            "filename": "evaluate-0.4.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "7f988c7e98398af1977bd001e37a67cb",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8.0",
            "size": 84135,
            "upload_time": "2024-04-30T09:44:17",
            "upload_time_iso_8601": "2024-04-30T09:44:17.147467Z",
            "url": "https://files.pythonhosted.org/packages/c2/d6/ff9baefc8fc679dcd9eb21b29da3ef10c81aa36be630a7ae78e4611588e1/evaluate-0.4.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a5975a5261a51545910ec471d3e143b74d932633320b3e3d810d838cddf440ed",
                "md5": "6358290a541a6a6b544f79f1a595506b",
                "sha256": "851ab767df8ec4031366c512eb88d8174adfba65d2c8c4c9bfdfe9c702212234"
            },
            "downloads": -1,
            "filename": "evaluate-0.4.2.tar.gz",
            "has_sig": false,
            "md5_digest": "6358290a541a6a6b544f79f1a595506b",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8.0",
            "size": 65749,
            "upload_time": "2024-04-30T09:44:19",
            "upload_time_iso_8601": "2024-04-30T09:44:19.154716Z",
            "url": "https://files.pythonhosted.org/packages/a5/97/5a5261a51545910ec471d3e143b74d932633320b3e3d810d838cddf440ed/evaluate-0.4.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-30 09:44:19",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "huggingface",
    "github_project": "evaluate",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "evaluate"
}
        