prob-conf-mat

Name: prob-conf-mat
Version: 0.1.0rc3
Summary: Confusion matrices with uncertainty quantification, experiment aggregation and significance testing.
Author email: Ivo Verhoeven <mail@ivoverhoeven.nl>
Homepage: https://www.ivoverhoeven.nl/prob_conf_mat/
Repository: https://github.com/ioverho/prob_conf_mat
Upload time: 2025-07-10 13:24:50
Requires Python: >=3.11
License: MIT (Copyright (c) 2025 Ivo Verhoeven)
Keywords: classification, confusion matrices, confusion matrix, probabilistic, statistics
            <div style="text-align: center;" align="center">

<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://www.ivoverhoeven.nl/prob_conf_mat/assets/logo_rectangle_light_text.svg">
  <source media="(prefers-color-scheme: light)" srcset="https://www.ivoverhoeven.nl/prob_conf_mat/assets/logo_rectangle_dark_text.svg">
  <img alt="Logo" src="https://www.ivoverhoeven.nl/prob_conf_mat/assets/logo_rectangle_dark_text.svg" width="150px">
</picture>

<div style="text-align: center;" align="center">

<a href="https://github.com/ioverho/prob_conf_mat/actions/workflows/test.yaml" >
 <img src="https://github.com/ioverho/prob_conf_mat/actions/workflows/test.yaml/badge.svg" alt="Tests status"/>
</a>

<a href="https://codecov.io/github/ioverho/prob_conf_mat" >
 <img src="https://codecov.io/github/ioverho/prob_conf_mat/graph/badge.svg?token=EU85JBF8M2" alt="Codecov report"/>
</a>

<a href="./LICENSE" >
 <img alt="GitHub License" src="https://img.shields.io/github/license/ioverho/prob_conf_mat">
</a>

<a href="https://pypi.org/project/prob-conf-mat/" >
  <img alt="PyPI - Version" src="https://img.shields.io/pypi/v/prob_conf_mat">
</a>

<h1>Probabilistic Confusion Matrices</h1>

</div>
</div>

**`prob_conf_mat`** is a Python package for performing statistical inference with confusion matrices. It quantifies the amount of uncertainty present, aggregates semantically related experiments into experiment groups, and compares experiments against each other for significance.

## Installation

Installation from [PyPI](https://pypi.org/project/prob-conf-mat/) can be done using `pip`:

```bash
pip install prob_conf_mat
```

Or, if you're using [`uv`](https://docs.astral.sh/uv/), simply run:

```bash
uv add prob_conf_mat
```

The project currently depends on the following packages:

<details>
  <summary>Dependency tree</summary>

```txt
prob-conf-mat
├── jaxtyping v0.3.2
├── matplotlib v3.10.3
├── numpy v2.3.0
├── scipy v1.15.3
├── seaborn v0.13.2
│   └── pandas v2.3.0
└── tabulate v0.9.0

```

</details>

### Development Environment

This project was developed using [`uv`](https://docs.astral.sh/uv/). To set up the development environment, first clone this GitHub repo:

```bash
git clone https://github.com/ioverho/prob_conf_mat.git
```

And then run the `uv sync --dev` command:

```bash
uv sync --dev
```

The development dependencies should automatically install into the `.venv` folder.

## Documentation

For more information about the package, motivation, how-to guides and implementation, please see the [documentation website](https://www.ivoverhoeven.nl/prob_conf_mat/index.html). We try to use [Daniele Procida's structure for Python documentation](https://docs.divio.com/documentation-system/).

The documentation is broadly divided into 4 sections:

1. **Getting Started**: a collection of small tutorials to help new users get started
2. **How To**: more expansive guides on how to achieve specific things
3. **Reference**: in-depth information about how to interface with the library
4. **Explanation**: explanations about *why* things are the way they are

|                 | Learning                                                                                                     | Coding                                                                                         |
| --------------- | ------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------- |
| **Practical**   | [Getting Started](https://www.ivoverhoeven.nl/prob_conf_mat/Getting%20Started/01_estimating_uncertainty.html) | [How-To Guides](https://www.ivoverhoeven.nl/prob_conf_mat/How%20To%20Guides/configuration.html) |
| **Theoretical** | [Explanation](https://www.ivoverhoeven.nl/prob_conf_mat/Explanation/generating_confusion_matrices.html)       | [Reference](https://www.ivoverhoeven.nl/prob_conf_mat/Reference/Study.html)                     |

## Quick Start

In-depth tutorials taking you through all the basic steps are available on the [documentation site](https://www.ivoverhoeven.nl/prob_conf_mat/Getting%20Started/01_estimating_uncertainty.html). For the impatient, here's a standard use case.

First, define a study and set some sensible hyperparameters for the simulated confusion matrices.

```python
from prob_conf_mat import Study

study = Study(
    seed=0,
    num_samples=10000,
    ci_probability=0.95,
)
```

Then add an experiment and its confusion matrix to the study:

```python
study.add_experiment(
  experiment_name="model_1/fold_0",
  confusion_matrix=[
    [13, 0, 0],
    [0, 10, 6],
    [0,  0, 9],
  ],
  confusion_prior=0,
  prevalence_prior=1,
)
```

Finally, add some metrics to the study:

```python
study.add_metric("acc")
```

We are now ready to start generating summary statistics about this experiment. For example:

```python
study.report_metric_summaries(
  metric="acc",
  table_fmt="github"
)
```

| Group   | Experiment   |   Observed |   Median |   Mode |        95.0% HDI |     MU |    Skew |   Kurt |
|---------|--------------|------------|----------|--------|------------------|--------|---------|--------|
| model_1 | fold_0       |     0.8421 |   0.8499 | 0.8673 | [0.7307, 0.9464] | 0.2157 | -0.5627 | 0.2720 |

So while this experiment achieves an observed accuracy of 84.21%, a more reasonable estimate (given the size of the test set) would be 84.99%. There is a 95% probability that the true accuracy lies between 73.07% and 94.64%.
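
The posterior summaries (median, mode, HDI) come from simulating confusion matrices. A minimal sketch of the general idea, sampling each row's conditional distribution from a Dirichlet posterior with a flat prior (an assumption chosen for illustration; the library's actual priors and parameterization may differ, so the numbers won't match the table exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

cm = np.array([
    [13, 0, 0],
    [0, 10, 6],
    [0, 0, 9],
])

num_samples = 10_000
prior = 1.0  # flat Dirichlet prior per cell (illustrative, not the library's default)

# Sample p(predicted | true) for each row from its Dirichlet posterior
row_probs = np.stack(
    [rng.dirichlet(cm[i] + prior, size=num_samples) for i in range(cm.shape[0])],
    axis=1,
)  # shape: (num_samples, num_classes, num_classes)

# Weight each row's diagonal by the observed class prevalence to get accuracy samples
prevalence = cm.sum(axis=1) / cm.sum()
acc_samples = (row_probs.diagonal(axis1=1, axis2=2) * prevalence).sum(axis=1)

# Posterior median and an equal-tailed 95% interval (the library reports an HDI instead)
print(np.median(acc_samples), np.quantile(acc_samples, [0.025, 0.975]))
```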

Visually that looks something like:

```python
fig = study.plot_metric_summaries(metric="acc")
```

<picture>
  <img alt="Metric distribution" src="documentation/assets/figures/readme/uncertainty_fig.svg" width="80%" style="display: block;margin-left: auto;margin-right: auto; max-width: 500;">
</picture>

Now let's add a confusion matrix for the same model, but estimated using a different fold. We want to know what the average performance is for that model across the different folds:

```python
study.add_experiment(
  experiment_name="model_1/fold_1",
  confusion_matrix=[
      [12, 1, 0],
      [1, 8, 7],
      [0, 2, 7],
  ],
  confusion_prior=0,
  prevalence_prior=1,
)
```

We can equip each metric with an inter-experiment aggregation method, and then request summary statistics about the aggregate performance of the experiments in the `model_1` group:

```python
study.add_metric(
    metric="acc",
    aggregation="beta",
)

fig = study.plot_forest_plot(metric="acc")
```

<picture>
  <img alt="Forest plot" src="documentation/assets/figures/readme/forest_plot.svg" width="80%" style="display: block;margin-left: auto;margin-right: auto; max-width: 500;">
</picture>

Note that the estimated aggregate accuracy has much less uncertainty (a smaller HDI/MU).

These experiments seem pretty different. But is the difference significant? Let's assume that, for this example, a difference needs to be at least `0.05` to be considered significant. We can then quickly request the probability that the difference exceeds this threshold:

```python
fig = study.plot_pairwise_comparison(
    metric="acc",
    experiment_a="model_1/fold_0",
    experiment_b="model_1/fold_1",
    min_sig_diff=0.05,
)
```

<picture>
  <img alt="Comparison plot" src="documentation/assets/figures/readme/comparison_plot.svg" width="80%" style="display: block;margin-left: auto;margin-right: auto; max-width: 500;">
</picture>

There's about an 82% probability that the difference is in fact significant. While likely, there isn't quite enough data to be sure.
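
Under the hood, such a comparison reduces to counting posterior draws. A toy sketch with synthetic Beta-distributed accuracy samples standing in for the two experiments' posteriors (the Beta shape parameters here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in posterior accuracy samples for two experiments (shapes are illustrative)
acc_a = rng.beta(33, 6, size=10_000)
acc_b = rng.beta(28, 11, size=10_000)

# Probability that the absolute difference exceeds the minimal significant difference
min_sig_diff = 0.05
diff = acc_a - acc_b
p_sig = np.mean(np.abs(diff) > min_sig_diff)
p_a_better = np.mean(diff > min_sig_diff)
print(f"P(|diff| > {min_sig_diff}) = {p_sig:.2f}")
```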

## Development

This project was developed using the following (amazing) tools:

1. Package management: [`uv`](https://docs.astral.sh/uv/)
2. Linting: [`ruff`](https://docs.astral.sh/ruff/)
3. Static Type-Checking: [`pyright`](https://microsoft.github.io/pyright/)
4. Documentation: [`mkdocs`](https://www.mkdocs.org/)
5. CI: [`pre-commit`](https://pre-commit.com/)

Most of the common development commands are included in `./Makefile`. If `make` is installed, you can immediately run the following commands:

```txt
Usage:
  make <target>

Utility
  help             Display this help
  hello-world      Tests uv and make

Environment
  install          Install default dependencies
  install-dev      Install dev dependencies
  upgrade          Upgrade installed dependencies
  export           Export uv to requirements.txt file

Testing, Linting, Typing & Formatting
  test             Runs all tests
  coverage         Checks test coverage
  lint             Run linting
  type             Run static typechecking
  commit           Run pre-commit checks

Documentation
  mkdocs           Update the docs
  mkdocs-serve     Serve documentation site
```

## Credits

The following are some packages and libraries which served as inspiration for aspects of this project: [arviz](https://python.arviz.org/en/stable/), [bayestestR](https://easystats.github.io/bayestestR/), [BERTopic](https://github.com/MaartenGr/BERTopic), [jaxtyping](https://github.com/patrick-kidger/jaxtyping), [mici](https://github.com/matt-graham/mici), [python-ci](https://github.com/stinodego/python-ci), [statsmodels](https://www.statsmodels.org/stable/index.html).

A lot of the approaches and methods used in this project come from published works. Some especially important works include:

1. Goutte, C., & Gaussier, E. (2005). [A probabilistic interpretation of precision, recall and F-score, with implication for evaluation](https://link.springer.com/chapter/10.1007/978-3-540-31865-1_25). In European conference on information retrieval (pp. 345-359). Berlin, Heidelberg: Springer Berlin Heidelberg.
2. Tötsch, N., & Hoffmann, D. (2021). [Classifier uncertainty: evidence, potential impact, and probabilistic treatment](https://peerj.com/articles/cs-398/). PeerJ Computer Science, 7, e398.
3. Kruschke, J. K. (2013). [Bayesian estimation supersedes the t test](https://pubmed.ncbi.nlm.nih.gov/22774788/). Journal of Experimental Psychology: General, 142(2), 573.
4. Makowski, D., Ben-Shachar, M. S., Chen, S. A., & Lüdecke, D. (2019). [Indices of effect existence and significance in the Bayesian framework](https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2019.02767/full). Frontiers in psychology, 10, 2767.
5. Hill, T. (2011). [Conflations of probability distributions](https://www.ams.org/journals/tran/2011-363-06/S0002-9947-2011-05340-7/S0002-9947-2011-05340-7.pdf). Transactions of the American Mathematical Society, 363(6), 3351-3372.
6. Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. J. H. W. (2019). [Cochrane handbook for systematic reviews of interventions](https://www.cochrane.org/authors/handbooks-and-manuals/handbook). Hoboken: Wiley, 4.

## Citation

```bibtex
@software{ioverho_prob_conf_mat,
    author = {Verhoeven, Ivo},
    license = {MIT},
    title = {{prob\_conf\_mat}},
    url = {https://github.com/ioverho/prob_conf_mat}
}
```

            
