# Evaluating Models with the Fairness Indicators Dashboard [Beta]
![Fairness Indicators](https://raw.githubusercontent.com/tensorflow/tensorboard/master/docs/images/fairness-indicators.png)
Fairness Indicators for TensorBoard enables easy computation of
commonly identified fairness metrics for _binary_ and _multiclass_ classifiers.
With the plugin, you can visualize fairness evaluations for your runs and easily
compare performance across groups.
In particular, Fairness Indicators for TensorBoard allows you to evaluate and
visualize model performance, sliced across defined groups of users. Feel
confident about your results with confidence intervals and evaluations at
multiple thresholds.
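As a concrete illustration of what a sliced, thresholded metric is (this is not the plugin's implementation, just a minimal sketch), here is how a per-group false positive rate at several decision thresholds could be computed:

```python
from collections import defaultdict

def fpr_by_group(labels, scores, groups, threshold):
    """False positive rate per group at a given decision threshold."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # total negatives per group
    for y, s, g in zip(labels, scores, groups):
        if y == 0:
            neg[g] += 1
            if s >= threshold:
                fp[g] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

labels = [0, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.8, 0.6, 0.1, 0.4]
groups = ['a', 'a', 'a', 'b', 'b', 'b']
for t in (0.25, 0.5, 0.75):
    print(t, fpr_by_group(labels, scores, groups, t))
```

The dashboard computes metrics of this kind for every slice of your data at each configured threshold, and adds confidence intervals on top.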
Many existing tools for evaluating fairness concerns don't work well on
large-scale datasets and models. At Google, it is important for us to have
tools that work on systems with billions of users. Fairness Indicators lets
you evaluate use cases of any size, in the TensorBoard environment or in
[Colab](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/).
## Requirements
To install Fairness Indicators for TensorBoard, run:
```bash
python3 -m virtualenv ~/tensorboard_demo
source ~/tensorboard_demo/bin/activate
pip install --upgrade pip
pip install fairness_indicators
pip install tensorboard-plugin-fairness-indicators
```
### Nightly Packages
The TensorBoard plugin also publishes nightly packages at
https://pypi-nightly.tensorflow.org on Google Cloud. To install the latest
nightly package, use the following command:
```bash
pip install --extra-index-url https://pypi-nightly.tensorflow.org/simple tensorboard-plugin-fairness-indicators
```
This also installs the nightly packages for the plugin's major dependencies,
such as TensorFlow Model Analysis (TFMA).
## Demo Colab
[Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb)
contains an end-to-end demo to train and evaluate a model and visualize fairness evaluation
results in TensorBoard.
## Usage
To use Fairness Indicators with your own data and evaluations:
1. Train a new model and evaluate it using the
   `tensorflow_model_analysis.run_model_analysis` or
   `tensorflow_model_analysis.ExtractEvaluateAndWriteResults` API in
[model_eval_lib](https://github.com/tensorflow/model-analysis/blob/master/tensorflow_model_analysis/api/model_eval_lib.py).
For code snippets on how to do this, see the Fairness Indicators colab
[here](https://github.com/tensorflow/fairness-indicators).
2. Write a summary data file using [`demo.py`](https://github.com/tensorflow/fairness-indicators/blob/master/tensorboard_plugin/tensorboard_plugin_fairness_indicators/demo.py), which will be read
   by TensorBoard to render the Fairness Indicators dashboard (see the
   [TensorBoard tutorial](https://github.com/tensorflow/tensorboard/blob/master/README.md)
   for more information on summary data files).
Flags to be used with the `demo.py` utility:
- `--logdir`: Directory where TensorBoard will write the summary
   - `--eval_result_output_dir`: Directory containing the evaluation results
     produced by TFMA
   ```bash
   python demo.py --logdir=<logdir> --eval_result_output_dir=<eval_result_dir>
   ```
   Alternatively, you can use the `tensorboard_plugin_fairness_indicators.summary_v2` API to write the summary file:
   ```python
   import tensorflow as tf
   from tensorboard_plugin_fairness_indicators import summary_v2

   writer = tf.summary.create_file_writer(<logdir>)
   with writer.as_default():
     summary_v2.FairnessIndicators(<eval_result_dir>, step=1)
   writer.close()
   ```
3. Run TensorBoard.

   Note: This starts a local TensorBoard instance. Once it is running, a link
   is printed to the terminal; open the link in your browser to view the
   Fairness Indicators dashboard.
- `tensorboard --logdir=<logdir>`
- Select the new evaluation run using the drop-down on the left side of
the dashboard to visualize results.
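Step 1 above can be sketched as follows. This is a hedged configuration sketch, not code from this repository: it assumes TFMA is installed, and the model path, data path, label key, slice feature (`gender`), and thresholds are all placeholder choices for illustration.

```python
import tensorflow_model_analysis as tfma

# Configure an evaluation that slices metrics by a feature and computes
# Fairness Indicators at several decision thresholds.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    slicing_specs=[
        tfma.SlicingSpec(),                         # overall (unsliced)
        tfma.SlicingSpec(feature_keys=['gender']),  # one slice per group
    ],
    metrics_specs=tfma.metrics.specs_from_metrics(
        [tfma.metrics.FairnessIndicators(thresholds=[0.25, 0.5, 0.75])]),
)

# Run the evaluation; the output directory is what you later pass to the
# plugin as <eval_result_dir>.
eval_result = tfma.run_model_analysis(
    eval_shared_model=tfma.default_eval_shared_model(
        eval_saved_model_path='/path/to/saved_model',
        eval_config=eval_config),
    eval_config=eval_config,
    data_location='/path/to/eval_data.tfrecord',
    output_path='/path/to/eval_result_dir',
)
```

The sliced metrics written to `output_path` are what the plugin reads to render the dashboard.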
## Compatible versions
The following table shows the package versions that are
compatible with each other. This is determined by our testing framework, but
other *untested* combinations may also work.
|tensorboard-plugin | tensorflow | tensorflow-model-analysis |
|-------------------------------------------------------------------------------------------------------------|---------------|---------------------------|
|[GitHub master](https://github.com/tensorflow/fairness-indicators/blob/master/tensorboard_plugin/README.md) | nightly (2.x) | 0.46.0 |
|[v0.46.0](https://github.com/tensorflow/fairness-indicators/blob/v0.46.0/tensorboard_plugin/README.md) | 2.15.0 | 0.46.0 |
|[v0.44.0](https://github.com/tensorflow/fairness-indicators/blob/v0.44.0/tensorboard_plugin/README.md) | 2.12.0 | 0.44.0 |
|[v0.43.0](https://github.com/tensorflow/fairness-indicators/blob/v0.43.0/tensorboard_plugin/README.md) | 2.11.0 | 0.43.0 |
|[v0.42.0](https://github.com/tensorflow/fairness-indicators/blob/v0.42.0/tensorboard_plugin/README.md) | 2.10.0 | 0.42.0 |
|[v0.41.0](https://github.com/tensorflow/fairness-indicators/blob/v0.41.0/tensorboard_plugin/README.md) | 2.9.0 | 0.41.0 |
|[v0.40.0](https://github.com/tensorflow/fairness-indicators/blob/v0.40.0/tensorboard_plugin/README.md) | 2.9.0 | 0.40.0 |
|[v0.39.0](https://github.com/tensorflow/fairness-indicators/blob/v0.39.0/tensorboard_plugin/README.md) | 2.8.0 | 0.39.0 |
|[v0.38.0](https://github.com/tensorflow/fairness-indicators/blob/v0.38.0/tensorboard_plugin/README.md) | 2.8.0 | 0.38.0 |
|[v0.37.0](https://github.com/tensorflow/fairness-indicators/blob/v0.37.0/tensorboard_plugin/README.md) | 2.7.0 | 0.37.0 |
|[v0.36.0](https://github.com/tensorflow/fairness-indicators/blob/v0.36.0/tensorboard_plugin/README.md) | 2.7.0 | 0.36.0 |
|[v0.35.0](https://github.com/tensorflow/fairness-indicators/blob/v0.35.0/tensorboard_plugin/README.md) | 2.6.0 | 0.35.0 |
|[v0.34.0](https://github.com/tensorflow/fairness-indicators/blob/v0.34.0/tensorboard_plugin/README.md) | 2.6.0 | 0.34.0 |
|[v0.33.0](https://github.com/tensorflow/fairness-indicators/blob/v0.33.0/tensorboard_plugin/README.md) | 2.5.0 | 0.33.0 |
|[v0.30.0](https://github.com/tensorflow/fairness-indicators/blob/v0.30.0/tensorboard_plugin/README.md) | 2.4.0 | 0.30.0 |
|[v0.29.0](https://github.com/tensorflow/fairness-indicators/blob/v0.29.0/tensorboard_plugin/README.md) | 2.4.0 | 0.29.0 |
|[v0.28.0](https://github.com/tensorflow/fairness-indicators/blob/v0.28.0/tensorboard_plugin/README.md) | 2.4.0 | 0.28.0 |
|[v0.27.0](https://github.com/tensorflow/fairness-indicators/blob/v0.27.0/tensorboard_plugin/README.md) | 2.4.0 | 0.27.0 |
|[v0.26.0](https://github.com/tensorflow/fairness-indicators/blob/v0.26.0/tensorboard_plugin/README.md) | 2.3.0 | 0.26.0 |
|[v0.25.0](https://github.com/tensorflow/fairness-indicators/blob/v0.25.0/tensorboard_plugin/README.md) | 2.3.0 | 0.25.0 |
|[v0.24.0](https://github.com/tensorflow/fairness-indicators/blob/v0.24.0/tensorboard_plugin/README.md) | 2.3.0 | 0.24.0 |
|[v0.23.0](https://github.com/tensorflow/fairness-indicators/blob/v0.23.0/tensorboard_plugin/README.md) | 2.3.0 | 0.23.0 |
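When pinning versions, drawing all three pins from a single row of the table keeps the packages mutually compatible. For example, using the v0.46.0 row:

```shell
# Pin a mutually compatible set (versions from the v0.46.0 row above)
pip install \
    tensorboard-plugin-fairness-indicators==0.46.0 \
    tensorflow==2.15.0 \
    tensorflow-model-analysis==0.46.0
```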