permetrics

Name: permetrics
Version: 2.0.0
Home page: https://github.com/thieu1995/permetrics
Summary: PerMetrics: A Framework of Performance Metrics for Machine Learning Models
Upload time: 2024-02-24 07:58:00
Author: Thieu
Requires Python: >=3.7
License: GPLv3
Keywords: regression, classification, clustering, metrics, performance metrics, rmse, mae, mape, nse, nash-sutcliffe-efficiency, willmott-index, precision, accuracy, recall, f1 score, pearson correlation coefficient, r2, kling-gupta efficiency, gini coefficient, matthews correlation coefficient, cohen's kappa score, jaccard score, roc-auc, mutual information, rand score, davies bouldin score, completeness score, silhouette coefficient score, v-measure score, folkes mallows score, czekanowski-dice score, huber gamma score, kulczynski score, mcnemar score, phi score, rogers-tanimoto score, russel-rao score, sokal-sneath score, confusion matrix, pearson correlation coefficient (pcc), spearman correlation coefficient (scc), performance analysis
            
<p align="center">
<img style="max-width:100%;" 
src="https://thieu1995.github.io/post/2023-08/permetrics-01.png" 
alt="PERMETRICS"/>
</p>


---

[![GitHub release](https://img.shields.io/badge/release-2.0.0-yellow.svg)](https://github.com/thieu1995/permetrics/releases)
[![Wheel](https://img.shields.io/pypi/wheel/permetrics.svg)](https://pypi.python.org/pypi/permetrics) 
[![PyPI version](https://badge.fury.io/py/permetrics.svg)](https://badge.fury.io/py/permetrics)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/permetrics.svg)
![PyPI - Status](https://img.shields.io/pypi/status/permetrics.svg)
![PyPI - Downloads](https://img.shields.io/pypi/dm/permetrics.svg)
[![Downloads](https://static.pepy.tech/badge/permetrics)](https://pepy.tech/project/permetrics)
[![Tests & Publishes to PyPI](https://github.com/thieu1995/permetrics/actions/workflows/publish-package.yaml/badge.svg)](https://github.com/thieu1995/permetrics/actions/workflows/publish-package.yaml)
![GitHub Release Date](https://img.shields.io/github/release-date/thieu1995/permetrics.svg)
[![Documentation Status](https://readthedocs.org/projects/permetrics/badge/?version=latest)](https://permetrics.readthedocs.io/en/latest/?badge=latest)
[![Chat](https://img.shields.io/badge/Chat-on%20Telegram-blue)](https://t.me/+fRVCJGuGJg1mNDg1)
![GitHub contributors](https://img.shields.io/github/contributors/thieu1995/permetrics.svg)
[![GitTutorial](https://img.shields.io/badge/PR-Welcome-%23FF8300.svg?)](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project)
[![DOI](https://zenodo.org/badge/280617738.svg)](https://zenodo.org/badge/latestdoi/280617738)
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)


PerMetrics is a Python library for performance metrics of machine learning models. It aims to implement all 
performance metrics for regression, classification, and clustering problems, helping users in every 
field access metrics as quickly as possible. It currently provides **111 metrics (47 regression, 
20 classification, and 44 clustering metrics)**.


# Citation Request 

Please include this citation if you plan to use this library:

```bibtex
@software{nguyen_van_thieu_2023_8220489,
  author       = {Nguyen Van Thieu},
  title        = {PerMetrics: A Framework of Performance Metrics for Machine Learning Models},
  month        = aug,
  year         = 2023,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.3951205},
  url          = {https://github.com/thieu1995/permetrics}
}
```


# Installation

Install the [current PyPI release](https://pypi.python.org/pypi/permetrics):
```sh 
$ pip install permetrics
```

After installation, you can import Permetrics like any other Python module:

```sh
$ python
>>> import permetrics
>>> permetrics.__version__
```

# Example

Below is the recommended way to use this library. The example returns the values of metrics 
such as the root mean squared error and the mean absolute error:

```python
from permetrics import RegressionMetric

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

evaluator = RegressionMetric(y_true, y_pred)
results = evaluator.get_metrics_by_list_names(["RMSE", "MAE", "MAPE", "R2", "NSE", "KGE"])
print(results["RMSE"])
print(results["KGE"])
```
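As a sanity check, the RMSE and MAE for the sample above can be reproduced with plain NumPy. This is only a cross-check of the standard formulas, not part of the Permetrics API:

```python
import numpy as np

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])

errors = y_true - y_pred
rmse = np.sqrt(np.mean(errors ** 2))  # root mean squared error
mae = np.mean(np.abs(errors))         # mean absolute error

print(rmse)  # ~0.6124
print(mae)   # 0.5
```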

If your `y_true` and `y_pred` data have multiple columns and you want a separate result for each output, something that many other libraries cannot do, you can do it in Permetrics as follows:


```python
import numpy as np
from permetrics import RegressionMetric

y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = np.array([[0, 2], [-1, 2], [8, -5]])

evaluator = RegressionMetric(y_true, y_pred)

## The 1st way
results = evaluator.get_metrics_by_dict({
  "RMSE": {"multi_output": "raw_values"},
  "MAE": {"multi_output": "raw_values"},
  "MAPE": {"multi_output": "raw_values"},
})

## The 2nd way
results = evaluator.get_metrics_by_list_names(
  list_metric_names=["RMSE", "MAE", "MAPE", "R2", "NSE", "KGE"],
  list_paras=[{"multi_output": "raw_values"},] * 6
)

## The 3rd way
result01 = evaluator.RMSE(multi_output="raw_values")
result02 = evaluator.MAE(multi_output="raw_values")
```
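To see what `multi_output="raw_values"` corresponds to, the per-column RMSE and MAE can be computed directly with NumPy by averaging over the row axis. Again, this is just a cross-check against the textbook formulas, independent of the library:

```python
import numpy as np

y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = np.array([[0, 2], [-1, 2], [8, -5]])

# Averaging over axis=0 keeps one RMSE/MAE value per output column
rmse_per_column = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
mae_per_column = np.mean(np.abs(y_true - y_pred), axis=0)

print(rmse_per_column)  # one value per column
print(mae_per_column)
```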

More complicated cases can be found in the [examples](/examples) folder. You can also read the [documentation](https://permetrics.readthedocs.io/) 
for more detailed installation instructions, explanations, and examples.


# Contributing

There are many ways you can contribute to Permetrics's development, and you are welcome to join in! For example, 
you can report problems or make feature requests on the [issues](/issues) page. To facilitate contributions, 
please check the guidelines in the [CONTRIBUTING.md](/CONTRIBUTING.md) file.


# Official channels 

* [Official source code repository](https://github.com/thieu1995/permetrics)
* [Official document](https://permetrics.readthedocs.io/)
* [Download releases](https://pypi.org/project/permetrics/) 
* [Issue tracker](https://github.com/thieu1995/permetrics/issues) 
* [Notable changes log](/ChangeLog.md)
* [Official discussion group](https://t.me/+fRVCJGuGJg1mNDg1) 


# Note

* **There is widespread confusion among frameworks around the world about the notation of R, R2, and R^2.** 
  Please read the file [R-R2-Rsquared.docx](.github/assets/R-R2-Rsquared.docx) to understand the differences between them and why such confusion exists.

* **My recommendation is to denote the Coefficient of Determination as COD or R2, while the squared Pearson's 
  Correlation Coefficient should be denoted as R^2 or RSQ (as in Excel).**
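The distinction matters numerically. In the hypothetical NumPy snippet below (formulas only, not Permetrics calls), the predictions are perfectly correlated with the targets but systematically offset: the squared Pearson correlation is 1.0, while the Coefficient of Determination is negative:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = y_true + 2.0  # perfectly correlated, but biased by a constant offset

# Squared Pearson correlation coefficient (what Excel's RSQ computes)
r = np.corrcoef(y_true, y_pred)[0, 1]
r_squared = r ** 2  # 1.0 -- correlation ignores the offset

# Coefficient of Determination: 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
cod = 1 - ss_res / ss_tot  # negative: predictions are worse than the mean

print(r_squared, cod)
```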


---

Developed by: [Thieu](mailto:nguyenthieu2102@gmail.com?Subject=Permetrics_QUESTIONS) @ 2023



            
