spearmint

Name: spearmint
Version: 0.0.2
Summary: Refreshing hypothesis testing in python!
Upload time: 2023-12-05 05:23:33
Requires Python: >=3.10
License: MIT (Copyright (c) 2024, Dustin E. Stansbury)
Keywords: data science, hypothesis testing, AB testing, statistics, bayesian inference, bootstrap statistics
# <img src="https://raw.githubusercontent.com/dustinstansbury/spearmint/main/images/mint.png" alt="drawing" width="30"/> <span style="color:00A33D"> _spearmint_ </span> <img src="https://raw.githubusercontent.com/dustinstansbury/spearmint/main/images/mint.png" alt="drawing" width="30"/>

### _Refreshing hypothesis testing in python_


<a href="https://github.com/dustinstansbury/spearmint/blob/main/LICENSE"><img alt="License: MIT" src="https://black.readthedocs.io/en/stable/_static/license.svg"></a>
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
![linting](https://github.com/dustinstansbury/spearmint/actions/workflows/lint.yml/badge.svg?branch=main)
![mypy](https://github.com/dustinstansbury/spearmint/actions/workflows/mypy.yml/badge.svg?branch=main)
![tests](https://github.com/dustinstansbury/spearmint/actions/workflows/test.yml/badge.svg?branch=main)
[![codecov](https://codecov.io/gh/dustinstansbury/spearmint/graph/badge.svg?token=HZC4CGNLTV)](https://codecov.io/gh/dustinstansbury/spearmint)
[![Try In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1lbR-0Da196ST-Yq157m4PzUx8cy7WpK_?usp=sharing)
[![Try it in Streamlit](https://static.streamlit.io/badges/streamlit_badge_red.svg)](https://ab-testing.streamlit.app)



## Features
- Offers a simple API for running, visualizing, and interpreting statistically rigorous hypothesis tests, with none of the hassle of jumping between various statistical or visualization packages.
- Supports the most common variable types used in AB tests, including continuous, binary/proportions, and counts/rates data.
- Implements many Frequentist, Bayesian, and Bootstrap inference methods.
- Supports multiple customizations:
    + Custom metric definitions
    + Simple Bayesian prior definition
    + Easily extendable to support new inference methods

## Installation

### Requirements
- `spearmint` has been tested on `python>=3.10`.

### Install via `pip`

```bash
pip install spearmint
```

If you plan to run your analyses in `jupyterlab`, you can add the `notebook` option

```bash
pip install spearmint[notebook]
```

### Install via `conda` (WIP)

```bash
conda install -c conda-forge spearmint # not yet on conda-forge
```

### Install from source
If you would like to contribute to spearmint, then you'll want to install from source (or use the `-e` flag when installing from `PyPI`):

```bash
mkdir /PATH/TO/LOCAL/SPEARMINT && cd /PATH/TO/LOCAL/SPEARMINT
git clone git@github.com:dustinstansbury/spearmint.git
cd spearmint
pip install -e .
```

## Basic Usage

### Observations data
Spearmint takes as input a [pandas](https://pandas.pydata.org/) `DataFrame` containing experiment observations data. Each record represents an observation/trial recorded in the experiment and has the following columns:

- **One or more `treatment` columns**: each treatment column contains two or more distinct, discrete values that are used to identify the different groups in the experiment.
- **One or more `metric` columns**: these are the values associated with each observation that are used to compare groups in the experiment.
- **Zero or more `attributes` columns**: these define additional discrete properties assigned to the observations. These attributes can be used to perform segmentation across groups.
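
To make the schema concrete, here is a minimal hand-built observations `DataFrame` that conforms to it. The column names `group`, `converted`, and `country` are arbitrary illustrations, not names required by spearmint:

```python
import pandas as pd

# Hypothetical observations: one row per trial/user.
observations = pd.DataFrame(
    {
        # treatment column: discrete group labels
        "group": ["control", "control", "variation", "variation"],
        # metric column: the measured outcome for each observation
        "converted": [False, True, True, True],
        # attribute column: optional discrete property for segmentation
        "country": ["US", "CA", "US", "CA"],
    }
)
```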

To demonstrate, let's generate some artificial experiment observations data. The `metric` column in our dataset will be a series of binary outcomes (i.e. `True`/`False`). This binary `metric` is analogous to *conversion* or *success* in AB testing.
```python
from spearmint.utils import generate_fake_observations

"""Generate binary demo data"""
experiment_observations = generate_fake_observations(
    distribution="bernoulli",
    n_treatments=3,
    n_attributes=4,
    n_observations=120,
    random_seed=123
)
experiment_observations.head()
```

These fake observations are simulated from three different Bernoulli distributions, one for each `treatment` (named `"A"`, `"B"`, or `"C"`), with increasing average probability of *conversion*. The simulated data also contain four `attribute` columns, named `attr_*`, that can potentially be used for segmentation.

```bash
   id treatment attr_0 attr_1 attr_2 attr_3  metric
0   0         C    A0a    A1b    A2a    A3a    True
1   1         B    A0a    A1b    A2a    A3b    True
2   2         C    A0a    A1a    A2a    A3b    True
3   3         C    A0a    A1a    A2a    A3b    True
4   4         A    A0a    A1b    A2a    A3a    True
```

## Running an AB test in spearmint is as easy as ✨1-2-3✨:

1. Initialize an **`Experiment`**, which holds the raw observations and any metadata associated with an AB experiment.
2. Define the **`HypothesisTest`**, which declares the configuration of the statistical inference procedure.
3. Run the `HypothesisTest` against the `Experiment` and interpret the resulting **`InferenceResults`**. The `InferenceResults` hold the parameter estimates of the inference procedure, and are used to summarize, visualize, and save the results of the hypothesis test.


## Example Workflow
Below we demonstrate, in 1-2-3 fashion, how to run a hypothesis test analysis on the fake observations data generated above.

### 1. Initialize the `Experiment`

```python
from spearmint import Experiment
experiment = Experiment(data=experiment_observations)
```

Since the values in the `metric` column of the simulated observations are binary (i.e. `True`/`False`), we'll essentially be running a test for the difference in success rates--i.e. the probability of observing a `True`--between two groups. This is analogous to running an AB experiment that aims to compare conversion rates (e.g. clicking a CTA, opening an email, signing up for a service, etc.) between a control and a variation group.
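
Under the hood, a frequentist test for a difference in success rates boils down to a two-proportion z-test. Here is a rough, standard-library-only sketch of that statistic; the counts below are made up for illustration, and spearmint's exact procedure may differ in its details:

```python
from math import erfc, sqrt

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """One-sided z-test that group B's success rate is larger than group A's."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 0.5 * erfc(z / sqrt(2))  # P(Z > z) for a standard normal
    return z, p_value

# Hypothetical counts: 12/40 control conversions vs 22/40 variation conversions
z, p_value = two_proportion_ztest(12, 40, 22, 40)
```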

### 2. Define the `HypothesisTest`

Here, we test the `hypothesis` that the conversion rate for `treatment` group `'C'` (the `variation`) is `'larger'` than that of `treatment` group `'A'` (the `control`, or reference group).

```python
from spearmint import HypothesisTest

ab_test = HypothesisTest(
    treatment='treatment',
    metric='metric',
    control='A',
    variation='C',
    hypothesis='larger',
    # variable_type='binary',  # inferred from `metric` values
    # inference_method='frequentist'  # default
)
```

### 3. Run the test and interpret the `InferenceResults`
Here, we run our `HypothesisTest` with an acceptable Type I error rate of `alpha=0.05`.

```python
ab_test_results = experiment.run_test(ab_test, alpha=0.05)
assert ab_test.variable_type == 'binary'  # check that correct variable_type inferred
assert ab_test_results.accept_hypothesis

"""Display test results to stdout"""
ab_test_results.display()
```
The test results display two tables. The first gives a summary of the observed samples from the control (`"A"`) and variation (`"C"`) groups. This `Samples Comparison` table gives the number of samples; the mean, variance, and standard error of the mean estimate for each group; and the difference in mean estimates between the `variation` and `control` groups.

```bash
Samples Comparison
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃                ┃ A                ┃ C                ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│        Samples │ 35               │ 44               │
│           Mean │ 0.4286           │ 0.75             │
│ Standard Error │ (0.2646, 0.5925) │ (0.6221, 0.8779) │
│       Variance │ 0.2449           │ 0.1875           │
│          Delta │                  │ 0.3214           │
└────────────────┴──────────────────┴──────────────────┘
Proportions Delta Results
┌────────────────────┬──────────────────┐
│ Delta              │ 0.3214           │
│ Delta CI           │ (0.1473, inf)    │
│ Delta-relative     │ 0.75 %           │
│ Delta-relative CI  │ (34.3703, inf) % │
│ Delta CI %-tiles   │ (0.05, inf)      │
│ Effect Size        │ 0.6967           │
│ alpha              │ 0.05             │
│ Power              │ 0.92             │
│ Variable Type      │ binary           │
│ Inference Method   │ frequentist      │
│ Test statistic (z) │ 3.47             │
│ p-value            │ 0.0003           │
│ Hypothesis         │ C is larger      │
│ Accept Hypothesis  │ True             │
└────────────────────┴──────────────────┘
```
The second table shows a summary of the results from the hypothesis test inference procedure.

#### Interpreting inference results

We first see that this test uses a `Proportions Delta` inference procedure. Each inference procedure tests for the `Delta` in expected value between the two groups. For `"binary"` variables, this expected value is the _proportionality_, or _average conversion rate_. For `"continuous"` variables, the expected value is the mean; for `"count"` variables, it is the expected number of events observed.

We see that there is a larger proportionality (e.g. conversion rate) for the `variation` group `'C'` than for the `control` group `'A'`. Specifically, there is a `Delta` of 0.32 in expected value between the two groups.

The results also report confidence intervals (`CI`) around the `Delta` estimates. Since the `hypothesis` is `"larger"`, the interval is one-sided: the lower bound of the `CI` sits at the $\alpha$ percentile (here 0.05), while the upper bound of the confidence interval is $\infty$; these bounds are given by `Delta CI %-tiles`.
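
To make the one-sided bound concrete: the lower bound is the point estimate minus the $(1-\alpha)$ normal quantile times the standard error. A standard-library sketch using the sample statistics from the tables above (the success counts 15 and 33 are inferred from the reported means; spearmint's exact standard-error computation may differ slightly, so the result only approximately matches the reported 0.1473):

```python
from math import sqrt
from statistics import NormalDist

alpha = 0.05
p_a, n_a = 15 / 35, 35   # control "A":   mean 0.4286 from 35 samples
p_c, n_c = 33 / 44, 44   # variation "C": mean 0.75 from 44 samples

delta = p_c - p_a                                         # ~0.3214
se = sqrt(p_a * (1 - p_a) / n_a + p_c * (1 - p_c) / n_c)  # unpooled standard error
z_crit = NormalDist().inv_cdf(1 - alpha)                  # ~1.645
ci_lower = delta - z_crit * se                            # ~0.147; upper bound is +inf
```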

Along with the absolute `Delta`, we report the relative delta (`Delta-relative`), here a 75% relative increase. `Delta-relative` estimates also have associated `CI`s.

The size of the `Delta` in proportionality is moderately large, as indicated by an effect size of 0.70. This test also results in a `p-value` of 0.0003, which is lower than the prescribed $\alpha=$ 0.05. Thus the hypothesis test declares that the `hypothesis` `'C is larger'` should be accepted.
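
The reported `Effect Size` is consistent with Cohen's d computed from a pooled variance. A standard-library sketch, using the group sizes, means, and variances from the `Samples Comparison` table above (this is one common effect-size definition that happens to reproduce the reported value, not a guarantee of spearmint's internal formula):

```python
from math import sqrt

n_a, mean_a, var_a = 35, 15 / 35, 0.2449   # control "A"
n_c, mean_c, var_c = 44, 33 / 44, 0.1875   # variation "C"

# Variance pooled across the two groups
var_pooled = ((n_a - 1) * var_a + (n_c - 1) * var_c) / (n_a + n_c - 2)
cohens_d = (mean_c - mean_a) / sqrt(var_pooled)  # ~0.6967
```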


#### Visualizing `InferenceResults`

In addition to `.display()`ing the test results to the console, we can `.visualize()` the results.

```python
ab_test_results.visualize()
```

<div style="text-align:center"><img src="https://raw.githubusercontent.com/dustinstansbury/spearmint/main/images/proportions_delta_example.png"/></div>

The left plot shows each sample's estimated parametric distribution, as well as the estimates of group central tendency and the 95% Confidence Intervals (CIs) around those estimates (plotted as intervals along the x-axis). Non-overlapping distributions and CIs provide strong visual evidence that the difference between the two groups' central tendencies is statistically significant.

The right plot shows the `Delta` distribution over the _difference_ in those estimated sample distributions, along with 95% CIs. Delta CIs greater than zero give further visual evidence that the difference in the two samples is statistically significant.

---

 **💡 NOTE**

For `"binary"`, `"frequentist"` tests--i.e. `Proportions Delta` tests--we display the inference results for the observed samples (i.e. the left `ab_test_results.visualize()` plot) as binomial distributions, giving the distribution over the expected number of successful trials given the total number of observations and the number of `True`/`False` trials per group.

---
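
For reference, the binomial distribution plotted for each group can be computed directly from the group's trial count and observed success rate. A standard-library sketch for the variation group `"C"` (44 observations, 75% observed conversion rate):

```python
from math import comb

n, p = 44, 0.75  # variation group "C": 44 trials, 75% observed success rate

# P(k successes out of n trials) for every possible k
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

most_likely_k = max(range(n + 1), key=lambda k: pmf[k])  # mode of the distribution
```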

## [Additional Documentation and Tutorials](https://github.com/dustinstansbury/spearmint/blob/master/docs)
For more details on using `spearmint`'s API, see the [Spearmint Basics Tutorial](https://github.com/dustinstansbury/spearmint/blob/main/docs/spearmint_basics.ipynb), or try running it in [Google Colab](https://colab.research.google.com/drive/1lbR-0Da196ST-Yq157m4PzUx8cy7WpK_?usp=sharing).

## [CHANGELOG](./CHANGELOG.md)



            
