# metrics-as-scores

- **Name**: metrics-as-scores
- **Version**: 2.8.2
- **Summary**: Interactive web application, tool- and analysis suite for approximating, exploring, understanding, and sampling from conditional distributions.
- **Home page**: <https://github.com/mrshoenel/metrics-as-scores>
- **Author**: Sebastian Hönel
- **Requires Python**: `<3.12,>=3.10`
- **License**: Dual-licensed under GNU General Public License v3 (GPLv3) and closed-source
- **Keywords**: distribution fitting, statistical tests, context-dependent, metrics, quality, score
- **Upload time**: 2024-04-26 08:34:25

Metrics As Scores
[![DOI](https://zenodo.org/badge/524333119.svg)](https://zenodo.org/badge/latestdoi/524333119)
[![status](https://joss.theoj.org/papers/eb549efe6c0111490395496c68717579/status.svg)](https://joss.theoj.org/papers/eb549efe6c0111490395496c68717579)
[![codecov](https://codecov.io/github/MrShoenel/metrics-as-scores/branch/master/graph/badge.svg?token=HO1GYXVEUQ)](https://codecov.io/github/MrShoenel/metrics-as-scores)
================

- <a href="#usage" id="toc-usage"><span
  class="toc-section-number">1</span> Usage</a>
  - <a href="#text-based-user-interface-tui"
    id="toc-text-based-user-interface-tui"><span
    class="toc-section-number">1.1</span> Text-based User Interface
    (TUI)</a>
  - <a href="#web-application" id="toc-web-application"><span
    class="toc-section-number">1.2</span> Web Application</a>
  - <a href="#development-setup" id="toc-development-setup"><span
    class="toc-section-number">1.3</span> Development Setup</a>
    - <a href="#setting-up-a-virtual-environment"
      id="toc-setting-up-a-virtual-environment"><span
      class="toc-section-number">1.3.1</span> Setting Up a Virtual
      Environment</a>
    - <a href="#installing-packages" id="toc-installing-packages"><span
      class="toc-section-number">1.3.2</span> Installing Packages</a>
    - <a href="#running-tests" id="toc-running-tests"><span
      class="toc-section-number">1.3.3</span> Running Tests</a>
- <a href="#example-usage" id="toc-example-usage"><span
  class="toc-section-number">2</span> Example Usage</a>
  - <a href="#concrete-example-using-the-qualitas.class-corpus-dataset"
    id="toc-concrete-example-using-the-qualitas.class-corpus-dataset"><span
    class="toc-section-number">2.1</span> Concrete Example Using the
    Qualitas.class Corpus Dataset</a>
  - <a href="#concrete-example-using-the-iris-dataset"
    id="toc-concrete-example-using-the-iris-dataset"><span
    class="toc-section-number">2.2</span> Concrete Example Using the Iris
    Dataset</a>
  - <a href="#diamonds-example" id="toc-diamonds-example"><span
    class="toc-section-number">2.3</span> Diamonds Example</a>
- <a href="#datasets" id="toc-datasets"><span
  class="toc-section-number">3</span> Datasets</a>
  - <a href="#use-your-own" id="toc-use-your-own"><span
    class="toc-section-number">3.1</span> Use Your Own</a>
  - <a href="#known-datasets" id="toc-known-datasets"><span
    class="toc-section-number">3.2</span> Known Datasets</a>
- <a href="#personalizing-the-web-application"
  id="toc-personalizing-the-web-application"><span
  class="toc-section-number">4</span> Personalizing the Web
  Application</a>
- <a href="#references" id="toc-references">References</a>

------------------------------------------------------------------------

**Please Note**: ***Metrics As Scores*** (`MAS`) changed considerably
between versions
[**`v1.0.8`**](https://github.com/MrShoenel/metrics-as-scores/tree/v1.0.8)
and **`v2.x.x`**.

The current version is `v2.8.2`.

Starting with version **`v2.x.x`**, it has the following new features:

- [Textual User Interface (TUI)](#text-based-user-interface-tui)
- Proper documentation and testing
- New version on PyPI. Install the package and run the command line
  interface by typing **`mas`**!

[Metrics As Scores
Demo.](https://user-images.githubusercontent.com/5049151/219892077-58854478-b761-4a3d-9faf-2fe46c122cf5.webm)

------------------------------------------------------------------------

This repository contains the data and scripts needed for the application
**`Metrics as Scores`**; check out
<a href="https://mas.research.hönel.net/"
class="uri">https://mas.research.hönel.net/</a>.

This package accompanies the paper entitled “*Contextual
Operationalization of Metrics As Scores: Is My Metric Value Good?*”
(Hönel et al. 2022). It seeks to answer the question of whether the
domain in which a software metric was captured matters. It enables the user
to compare domains and to understand their differences. In order to
answer the question of whether a metric value is actually good, we need
to transform it into a **score**. Scores are normalized **and
rectified** distances that can be compared in an apples-to-apples
manner across domains. The same metric value might be good in one
domain, while it is not in another. To borrow an example from the domain
of software: It is much more acceptable (or common) to have large
applications (in terms of lines of code) in the domains of games and
databases than it is for the domains of IDEs and SDKs. Given an *ideal*
value for a metric (which may also be user-defined), we can transform
observed metric values to distances from that value and then use the
cumulative distribution function to map distances to scores.
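
Below is a minimal, hypothetical sketch of this transformation in
Python. It is not the Metrics As Scores API; the function name, the data,
and the `1 - CDF(distance)` convention (so that values closer to the
ideal receive higher scores) are chosen purely for illustration.

``` python
# Hypothetical sketch, not the Metrics As Scores API.
import numpy as np

def score(observed: np.ndarray, new_value: float, ideal: float) -> float:
    """Map a new observation to a score in [0, 1] relative to one group."""
    distances = np.abs(observed - ideal)  # rectified distances from the ideal
    d_new = abs(new_value - ideal)
    # Empirical CDF of the distances, evaluated at the new distance; one
    # possible convention is 1 - CDF so that smaller distances score higher.
    return 1.0 - float(np.mean(distances <= d_new))

# Example: hypothetical lines-of-code values of one domain, ideal = median.
rng = np.random.default_rng(42)
loc_sample = rng.lognormal(mean=3.0, sigma=0.8, size=500)
print(score(loc_sample, new_value=25.0, ideal=float(np.median(loc_sample))))
```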

------------------------------------------------------------------------

# Usage

You may install Metrics As Scores directly from PyPI. For users who
wish to
[**contribute**](https://github.com/MrShoenel/metrics-as-scores/blob/master/CONTRIBUTING.md)
to Metrics As Scores, a [development setup](#development-setup) is
recommended. In either case, after the installation, [**you have access
to the text-based user interface**](#text-based-user-interface-tui).

``` shell
# Installation from PyPI:
pip install metrics-as-scores
```

You can **bring up the TUI** simply by typing the following after
installing or cloning the repo (see next section for more details):

``` shell
mas
```

## Text-based User Interface (TUI)

Metrics As Scores features a text-based command line user interface
(TUI). It offers a couple of workflows/wizards that help you work with
and interact with the application. There is no need to modify any source
code if you want to do one of the following:

- Show installed datasets
- Show a list of known datasets available online that can be downloaded
- Download and install a known or existing dataset
- Create your own dataset to be used with Metrics As Scores
- Fit parametric distributions for your own dataset
- Pre-generate distributions for use in the
  [**Web Application**](#web-application)
- Bundle your own dataset so it can be published
- Run the local, interactive Web Application using a selected dataset

![Metrics As Scores Text-based User Interface
(TUI).](./TUI.png "Metrics As Scores Text-based User Interface (TUI).")

## Web Application

Perhaps the main feature of Metrics As Scores is the Web Application. It
can be run directly and locally from the TUI using a selected dataset
(you may download a known dataset or use your own). The Web Application
allows you to visually inspect each *feature* across all the defined
*groups*. It features the PDF/PMF, CDF and CCDF, as well as the PPF for
each feature in each group. It offers five different principal types of
densities: Parametric, Parametric (discrete), Empirical, Empirical
(discrete), and (approximate) Kernel Density Estimation. The Web
Application includes a detailed [Help section](#) that should answer
most of your questions.
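
The sketch below shows, using `scipy` on made-up data, what these
principal density types correspond to; it is only an illustration and
not the Web Application's internal code.

``` python
# Made-up data; illustrates the density types, not the Web App's internals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=3.0, size=1_000)

# Parametric: fit a candidate distribution, then query its functions.
a, loc, scale = stats.gamma.fit(sample)
dist = stats.gamma(a, loc=loc, scale=scale)
x = np.linspace(sample.min(), sample.max(), num=5)
print(dist.pdf(x))                  # PDF (a PMF for discrete distributions)
print(dist.cdf(x))                  # CDF
print(dist.sf(x))                   # CCDF (survival function)
print(dist.ppf([0.25, 0.5, 0.75]))  # PPF (quantile function)

# Kernel Density Estimation as a non-parametric alternative.
kde = stats.gaussian_kde(sample)
print(kde(x))
```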

![Metrics As Scores Interactive Web Application.](./WebApp.png "Metrics As Scores Interactive Web Application.")

## Development Setup

This project was developed using and requires Python `>=3.10`. The
development documentation can be found at
<https://mrshoenel.github.io/metrics-as-scores/>. Steps:

1.  Clone the Repository,
2.  Set up a virtual environment,
3.  Install packages.

### Setting Up a Virtual Environment

It is recommended to use a virtual environment. To set one up, follow
these steps (Windows-specific; activation of the environment may differ
on other platforms).

``` shell
virtualenv --python=C:/Python310/python.exe venv # Use specific Python version for virtual environment
venv/Scripts/activate
```

Here is a Linux example that assumes you have Python `3.10` installed
(this may also require installing `python3.10-venv` and/or
`python3.10-dev`):

``` shell
python3.10 -m venv venv
source venv/bin/activate # Linux
```

### Installing Packages

The project is managed with `Poetry`. To install the required packages,
simply run the following.

``` shell
venv/Scripts/activate
# First, update pip:
(venv) C:\metrics-as-scores>python -m pip install --upgrade pip
# Then, install Poetry v1.3.2 using pip:
(venv) C:\metrics-as-scores>pip install poetry==1.3.2
# Install the project and its dependencies
(venv) C:\metrics-as-scores> poetry install
```

The same in Linux:

``` shell
source venv/bin/activate # Linux
(venv) ubuntu@vm:/tmp/metrics-as-scores$ python -m pip install --upgrade pip
(venv) ubuntu@vm:/tmp/metrics-as-scores$ pip install poetry==1.3.2
(venv) ubuntu@vm:/tmp/metrics-as-scores$ poetry install
```

### Running Tests

Tests are run using `poethepoet`:

``` shell
# Runs the tests and prints coverage
(venv) C:\metrics-as-scores>poe test
```

You can also generate coverage reports:

``` shell
# Writes reports to the local directory htmlcov
(venv) C:\metrics-as-scores>poe cov
```

------------------------------------------------------------------------

# Example Usage

*Metrics As Scores* can be thought of as an *interactive*, *multiple-ANOVA*
analysis and explorer. The analysis of variance (ANOVA; Chambers,
Freeny, and Heiberger (2017)) is usually used to analyze the differences
among *hypothesized* group means for a single *feature*. An ANOVA might
be used to estimate the goodness-of-fit of a statistical model. Beyond
ANOVA, `MAS` seeks to answer the question of whether a sample of a
certain quantity (feature) is more or less common across groups. For
each group, we can determine what might constitute a common/ideal value,
and how distant the sample is from that value. This is expressed in
terms of a percentile (a standardized scale of `[0,1]`), which we call
**score**.
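
Reusing the scoring idea sketched earlier, the following hypothetical
example (made-up data, not one of the bundled datasets) shows how the
very same observed value receives a different score in each group:

``` python
# Made-up data; illustrates the context-dependence of scores, not the MAS API.
import numpy as np

def score(group: np.ndarray, value: float, ideal: float) -> float:
    distances = np.abs(group - ideal)
    return 1.0 - float(np.mean(distances <= abs(value - ideal)))

rng = np.random.default_rng(0)
groups = {
    "Games": rng.lognormal(4.0, 0.7, size=300),
    "SDK": rng.lognormal(3.0, 0.5, size=300),
    "Databases": rng.lognormal(4.2, 0.6, size=300),
}
value = 60.0  # the same observation, judged in three different contexts
for name, sample in groups.items():
    print(name, round(score(sample, value, ideal=float(np.median(sample))), 3))
```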

## Concrete Example Using the Qualitas.class Corpus Dataset

The notebook
[`notebooks/Example-webapp-qcc.ipynb`](https://github.com/MrShoenel/metrics-as-scores/blob/master/notebooks/Example-webapp-qcc.ipynb)
holds a concrete example for using the web application to interactively
obtain **scores**. In this example, we create a hypothetical application
that ought to be in the application domain *SDK*. Using a concrete
metric, *Number of Packages*, we find out that our hypothetical new SDK
application scores poorly for what it is intended to be.

This example illustrates the point that software metrics, when captured
out of context, are meaningless (Gil and Lalouche 2016). For example,
typical values for complexity metrics are vastly different, depending on
the type of application. We find that, for example, applications of type
SDK have a much lower *expected* complexity compared to Games (`1.9`
vs. `3.1`) (Hönel et al. 2022). Software metrics are often used in
software quality models. However, without knowledge of the application’s
context (here: domain), the deduced quality of these models is at least
misleading, if not completely off. This becomes apparent if we examine
how an application’s complexity scores across certain domains.

Since there are many software metrics that are captured simultaneously,
we can also compare domains in their entirety: How many metrics are
statistically significantly different from each other? Is there a set of
domains that are not distinguishable from each other? Are there metrics
that are always different across domains and must be used with care? In
this example, we use a known and downloadable dataset (Hönel 2023b). It
is based on software metrics and application domains of the
“Qualitas.class corpus” (Terra et al. 2013; Tempero et al. 2010).

## Concrete Example Using the Iris Dataset

The notebook
[`notebooks/Example-create-own-dataset.ipynb`](https://github.com/MrShoenel/metrics-as-scores/blob/master/notebooks/Example-create-own-dataset.ipynb)
holds a concrete example for creating/importing/using one’s own dataset.
Although all necessary steps can be achieved using the **TUI**, this
notebook demonstrates a complete example of implementing this in code.

## Diamonds Example

The diamonds dataset (Wickham 2016) holds prices of over 50,000 round
cut diamonds. It contains a number of attributes for each diamond, such as
its price, length, depth, or weight. The dataset, however, features
three quality attributes: The quality of the cut, the clarity, and the
color. Suppose we are interested in examining properties of diamonds of
the highest quality only, across colors. Therefore, we select only those
diamonds from the dataset that have an *ideal* cut and the best (*IF*)
clarity. Now only the color quality gives context to each diamond and
its attributes (i.e., diamonds are now *grouped* by color).
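
A short sketch of this selection (here using the classic `ggplot2`
diamonds data as shipped with `seaborn`'s example datasets, not the
bundled Metrics As Scores dataset):

``` python
# Uses seaborn's copy of the diamonds data; not the MAS dataset wizard.
import seaborn as sns

diamonds = sns.load_dataset("diamonds")
best = diamonds[(diamonds["cut"] == "Ideal") & (diamonds["clarity"] == "IF")]
# Only the color now provides context; diamonds are grouped by it.
print(best.groupby("color")["price"].median())
```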

This constellation now allows us to examine differences across
differently colored diamonds. For example, there are considerable
differences in price. We find that only the group of diamonds of the
best color is significantly different from the other groups. This
example is available as a downloadable dataset (Hönel 2023c).

------------------------------------------------------------------------

# Datasets

Metrics As Scores can use existing datasets as well as your own. Please
keep reading to learn how.

## Use Your Own

Metrics As Scores has a built-in wizard that lets you import your own
dataset! There is another wizard that bundles your dataset so that it
can be shared with others. You may [**contribute your
dataset**](https://github.com/MrShoenel/metrics-as-scores/blob/master/CONTRIBUTING.md)
so we can add it to the curated list of known datasets (see next
section). If you do not have your own dataset, you can use the built-in
wizard to download any of the known datasets, too!

Note that Metrics As Scores provides all the tools necessary to create a
publishable dataset. For example, it carries out the common statistical
tests listed below (a minimal sketch follows the list):

- ANOVA (Chambers, Freeny, and Heiberger 2017): Analysis of variance of
  your data across the available groups.
- Tukey’s Honest Significance Test (TukeyHSD; Tukey (1949)): This test
  is used to gain insights into the results of an ANOVA test. While the
  former only allows obtaining the amount of corroboration for the null
  hypothesis, TukeyHSD performs all pairwise comparisons (for all
  possible combinations of any two groups).
- Two-sample t-test: Compares the means of two samples to give an
  indication of whether or not they appear to come from the same
  distribution. Again, this is useful for comparing groups.
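
The following sketch runs these kinds of tests with `scipy` on made-up
data; it is an illustration only, not the code Metrics As Scores
executes internally.

``` python
# Made-up data; illustrates ANOVA, Tukey's HSD, and the t-test with scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
g1, g2, g3 = (rng.normal(loc, 1.0, size=100) for loc in (0.0, 0.2, 1.0))

# ANOVA: do the group means differ at all?
print(stats.f_oneway(g1, g2, g3))

# Tukey's HSD: which pairs of groups differ?
print(stats.tukey_hsd(g1, g2, g3))

# Two-sample t-test for one specific pair of groups:
print(stats.ttest_ind(g1, g3, equal_var=False))
```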

It also creates an **automatic report** based on these tests that you
can simply render into a PDF using Quarto.

A publishable dataset must contain parametric fits and pre-generated
densities (please check the wizard for these two). Metrics As Scores can
fit approximately **120** continuous and discrete random variables using
`Pymoo` (Blank and Deb 2020). Note that Metrics As Scores also
automatically carries out a number of goodness-of-fit tests. The type of
test depends on the data (for example, the two-sample KS test is not
valid for discrete data). These tests are then used to select the
best-fitting random variable for display in the web application (an
illustrative sketch follows the list below).

- Cramér–von Mises (Cramér 1928) and Kolmogorov–Smirnov one-sample
  (Stephens 1974) tests: After fitting a distribution, the sample is
  tested against the fitted parametric distribution. Since the fitted
  distribution cannot usually accommodate all of the sample’s
  subtleties, the test will indicate whether the fit is acceptable or
  not.
- Cramér–von Mises (Anderson 1962), Kolmogorov–Smirnov, and
  Epps–Singleton (Epps and Singleton 1986) two-sample tests: After
  fitting, we create a second sample by uniformly sampling from the
  `PPF`. Then, both samples can be used in these tests. The
  Epps–Singleton test is also applicable for discrete distributions.
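
Here is an illustrative sketch of such goodness-of-fit checks with
`scipy` on made-up data; the actual fitting and selection logic of
Metrics As Scores (which uses `Pymoo` and many more candidate
distributions) is not shown.

``` python
# Made-up data; sketches the listed goodness-of-fit tests with scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.gamma(2.0, 3.0, size=500)

# Fit one candidate distribution to the sample.
params = stats.gamma.fit(sample)
fitted = stats.gamma(*params)

# One-sample tests of the sample against the fitted distribution:
print(stats.ks_1samp(sample, fitted.cdf))
print(stats.cramervonmises(sample, fitted.cdf))

# Two-sample tests: draw a second sample from the fit by evaluating the
# PPF at uniformly distributed probabilities, then compare both samples.
second = fitted.ppf(rng.uniform(size=sample.size))
print(stats.ks_2samp(sample, second))
print(stats.cramervonmises_2samp(sample, second))
print(stats.epps_singleton_2samp(sample, second))
```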

## Known Datasets

The following is a curated list of known, publicly available datasets
that can be used with Metrics As Scores. These datasets can be
downloaded using the text-based user interface.

- Metrics and Domains From the Qualitas.class Corpus (Hönel 2023b). 10
  GB. <https://doi.org/10.5281/zenodo.7633949>.
- Elisa Spectrophotometer Positive Samples (Hönel 2023a). 266 MB.
  <https://doi.org/10.5281/zenodo.7633989>.
- Price, Weight, and Other Properties of Over 1,200 Ideal-Cut and
  Best-Clarity Diamonds (Hönel 2023c). 508 MB.
  <https://doi.org/10.5281/zenodo.7647596>.
- The Iris Flower Data Set (Hönel 2023d). 143 MB.
  <https://doi.org/10.5281/zenodo.7669645>.

------------------------------------------------------------------------

# Personalizing the Web Application

The web application *“[Metrics As Scores](#)”* is located in the
directory
[`src/metrics_as_scores/webapp/`](https://github.com/MrShoenel/metrics-as-scores/blob/master/src/metrics_as_scores/webapp/).
The app itself has three vertical blocks: a header, the interactive
part, and a footer. The header and footer can be easily edited by
modifying the files
[`src/metrics_as_scores/webapp/header.html`](https://github.com/MrShoenel/metrics-as-scores/blob/master/src/metrics_as_scores/webapp/header.html)
and
[`src/metrics_as_scores/webapp/footer.html`](https://github.com/MrShoenel/metrics-as-scores/blob/master/src/metrics_as_scores/webapp/footer.html).

Note that when you create your own dataset, you get to add sections to
the header and footer using two HTML fragments. This is recommended over
modifying the web application directly.

If you want to change the title of the application, you will have to
modify the file
[`src/metrics_as_scores/webapp/main.py`](https://github.com/MrShoenel/metrics-as-scores/blob/master/src/metrics_as_scores/webapp/main.py)
at the very end:

``` python
# Change this line to your desired title.
curdoc().title = "Metrics As Scores"
```

**Important**: If you modify the web application, you must always
maintain two links: one to <a href="https://mas.research.hönel.net/"
class="uri">https://mas.research.hönel.net/</a> and one to this
repository, that is, <https://github.com/MrShoenel/metrics-as-scores>.

# References

<div id="refs" class="references csl-bib-body hanging-indent">

<div id="ref-Anderson1962" class="csl-entry">

Anderson, T. W. 1962. “<span class="nocase">On the Distribution of the
Two-Sample Cramer-von Mises Criterion</span>.” *The Annals of
Mathematical Statistics* 33 (3): 1148–59.
<https://doi.org/10.1214/aoms/1177704477>.

</div>

<div id="ref-pymoo" class="csl-entry">

Blank, Julian, and Kalyanmoy Deb. 2020. “<span class="nocase">pymoo:
Multi-Objective Optimization in Python</span>.” *IEEE Access* 8:
89497–509. <https://doi.org/10.1109/ACCESS.2020.2990567>.

</div>

<div id="ref-chambers2017statistical" class="csl-entry">

Chambers, John M., Anne E. Freeny, and Richard M. Heiberger. 2017.
“<span class="nocase">Analysis of Variance; Designed
Experiments</span>.” In *<span class="nocase">Statistical Models in
S</span>*, edited by John M. Chambers and Trevor J. Hastie, 1st ed.
Routledge. <https://doi.org/10.1201/9780203738535>.

</div>

<div id="ref-cramer1928" class="csl-entry">

Cramér, Harald. 1928. “On the Composition of Elementary Errors.”
*Scandinavian Actuarial Journal* 1928 (1): 13–74.
<https://doi.org/10.1080/03461238.1928.10416862>.

</div>

<div id="ref-Epps1986" class="csl-entry">

Epps, T. W., and Kenneth J. Singleton. 1986. “<span class="nocase">An
Omnibus Test for the Two-Sample Problem Using the Empirical
Characteristic Function</span>.” *Journal of Statistical Computation and
Simulation* 26 (3-4): 177–203.
<https://doi.org/10.1080/00949658608810963>.

</div>

<div id="ref-gil2016software" class="csl-entry">

Gil, Joseph Yossi, and Gal Lalouche. 2016. “When Do Software Complexity
Metrics Mean Nothing? - When Examined Out of Context.” *J. Object
Technol.* 15 (1): 2:1–25. <https://doi.org/10.5381/jot.2016.15.5.a2>.

</div>

<div id="ref-dataset_elisa" class="csl-entry">

Hönel, Sebastian. 2023a. “Metrics As Scores Dataset: Elisa
Spectrophotometer Positive Samples.” Zenodo.
<https://doi.org/10.5281/zenodo.7633989>.

</div>

<div id="ref-dataset_qcc" class="csl-entry">

———. 2023b. “<span class="nocase">Metrics As Scores Dataset: Metrics and
Domains From the Qualitas.class Corpus</span>.” Zenodo.
<https://doi.org/10.5281/zenodo.7633949>.

</div>

<div id="ref-dataset_diamonds-ideal-if" class="csl-entry">

———. 2023c. “<span class="nocase">Metrics As Scores Dataset: Price,
Weight, and Other Properties of Over 1,200 Ideal-Cut and Best-Clarity
Diamonds</span>.” Zenodo. <https://doi.org/10.5281/zenodo.7647596>.

</div>

<div id="ref-dataset_iris" class="csl-entry">

———. 2023d. “Metrics As Scores Dataset: The Iris Flower Data Set.”
Zenodo. <https://doi.org/10.5281/zenodo.7669664>.

</div>

<div id="ref-honel2022mas" class="csl-entry">

Hönel, Sebastian, Morgan Ericsson, Welf Löwe, and Anna Wingkvist. 2022.
“<span class="nocase">Contextual Operationalization of Metrics As
Scores: Is My Metric Value Good?</span>” In *<span class="nocase">22nd
IEEE International Conference on Software Quality, Reliability and
Security, QRS 2022, Guangzhou, China, December 5–9, 2022</span>*,
333–43. IEEE. <https://doi.org/10.1109/QRS57517.2022.00042>.

</div>

<div id="ref-Stephens1974" class="csl-entry">

Stephens, M. A. 1974. “<span class="nocase">EDF Statistics for Goodness
of Fit and Some Comparisons</span>.” *Journal of the American
Statistical Association* 69 (347): 730–37.
<https://doi.org/10.1080/01621459.1974.10480196>.

</div>

<div id="ref-tempero2010qualitas" class="csl-entry">

Tempero, Ewan D., Craig Anslow, Jens Dietrich, Ted Han, Jing Li, Markus
Lumpe, Hayden Melton, and James Noble. 2010. “<span class="nocase">The
Qualitas Corpus: A Curated Collection of Java Code for Empirical
Studies</span>.” In *17th Asia Pacific Software Engineering Conference,
APSEC 2010, Sydney, Australia, November 30 - December 3, 2010*, edited
by Jun Han and Tran Dan Thu, 336–45. IEEE Computer Society.
<https://doi.org/10.1109/APSEC.2010.46>.

</div>

<div id="ref-terra2013qualitas" class="csl-entry">

Terra, Ricardo, Luis Fernando Miranda, Marco Tulio Valente, and Roberto
da Silva Bigonha. 2013. “<span class="nocase"><span
class="nocase">Qualitas.class</span> corpus: a compiled version of the
<span class="nocase">qualitas</span> corpus</span>.” *ACM SIGSOFT
Software Engineering Notes* 38 (5): 1–4.
<https://doi.org/10.1145/2507288.2507314>.

</div>

<div id="ref-Tukey1949" class="csl-entry">

Tukey, John W. 1949. “<span class="nocase">Comparing Individual Means in
the Analysis of Variance</span>.” *Biometrics* 5 (2): 99–114.
<https://doi.org/10.2307/3001913>.

</div>

<div id="ref-ggplot2" class="csl-entry">

Wickham, Hadley. 2016. *<span class="nocase">ggplot2</span>: Elegant
Graphics for Data Analysis*. Springer-Verlag New York.
<https://ggplot2.tidyverse.org>.

</div>

</div>

            
