bayesian-testing

- Name: bayesian-testing
- Version: 0.6.1 (PyPI)
- Home page: https://github.com/Matt52/bayesian-testing
- Summary: Bayesian A/B testing with simple probabilities.
- Upload time: 2023-12-23 14:38:45
- Author: Matus Baniar
- Requires Python: >=3.7.1,<4.0.0
- License: MIT
- Keywords: ab testing, bayes, bayesian statistics
            [![Tests](https://github.com/Matt52/bayesian-testing/workflows/Tests/badge.svg)](https://github.com/Matt52/bayesian-testing/actions?workflow=Tests)
[![Codecov](https://codecov.io/gh/Matt52/bayesian-testing/branch/main/graph/badge.svg)](https://codecov.io/gh/Matt52/bayesian-testing)
[![PyPI](https://img.shields.io/pypi/v/bayesian-testing.svg)](https://pypi.org/project/bayesian-testing/)
# Bayesian A/B testing
`bayesian_testing` is a small package for quick evaluation of A/B (or A/B/C/...) tests using a
Bayesian approach.

**Implemented tests:**
- [BinaryDataTest](bayesian_testing/experiments/binary.py)
  - **_Input data_** - binary data (`[0, 1, 0, ...]`)
  - Designed for conversion-like data A/B testing.
- [NormalDataTest](bayesian_testing/experiments/normal.py)
  - **_Input data_** - normal data with unknown variance
  - Designed for normal data A/B testing.
- [DeltaLognormalDataTest](bayesian_testing/experiments/delta_lognormal.py)
  - **_Input data_** - lognormal data with zeros
  - Designed for revenue-like data A/B testing.
- [DeltaNormalDataTest](bayesian_testing/experiments/delta_normal.py)
  - **_Input data_** - normal data with zeros
  - Designed for profit-like data A/B testing.
- [DiscreteDataTest](bayesian_testing/experiments/discrete.py)
  - **_Input data_** - categorical data with numerical categories
  - Designed for discrete data A/B testing (e.g. dice rolls, star ratings, 1-10 ratings, etc.).
- [PoissonDataTest](bayesian_testing/experiments/poisson.py)
  - **_Input data_** - non-negative integers (`[1, 0, 3, ...]`)
  - Designed for Poisson data A/B testing.
- [ExponentialDataTest](bayesian_testing/experiments/exponential.py)
  - **_Input data_** - exponential data (non-negative real numbers)
  - Designed for exponential data A/B testing (e.g. session/waiting time, time between events,
etc.).

**Implemented evaluation metrics:**
- `Probability of Being Best`
  - Probability that a given variant is the best among all variants.
  - By default, "the best" means "the greatest" (from a data/metric point of view). This can be
changed by passing `min_is_best=True` to the evaluation method, which is useful when looking for
the best variant while minimizing the tested measure.
- `Expected Loss`
  - The "risk" of choosing a particular variant over the other variants in the test.
  - Measured in the same units as the tested measure (e.g. positive rate or average value).

Both evaluation metrics are calculated using simulations from the posterior distributions (given
the observed data).
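As a rough illustration of how such simulation-based metrics work, here is a standalone numpy
sketch using simple Beta posteriors for two binary variants (a conceptual sketch only, not the
package's internal implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

# posterior samples for two variants (e.g. Beta posteriors of conversion rates)
samples = {
    "A": rng.beta(81, 1421, size=20000),  # ~80/1500 conversions
    "B": rng.beta(81, 1121, size=20000),  # ~80/1200 conversions
}

names = list(samples)
sims = np.stack([samples[n] for n in names])   # shape: (variants, sim_count)
best = sims.max(axis=0)

# probability of being best: how often each variant has the highest sampled value
prob_being_best = {n: float((sims.argmax(axis=0) == i).mean()) for i, n in enumerate(names)}

# expected loss: average shortfall against the best variant in each simulation
expected_loss = {n: float((best - sims[i]).mean()) for i, n in enumerate(names)}
```

Variant B, with the higher underlying rate, ends up with the higher probability of being best and
the smaller expected loss.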


## Installation
`bayesian_testing` can be installed using pip:
```console
pip install bayesian_testing
```
Alternatively, you can clone the repository and use `poetry` manually:
```console
cd bayesian_testing
pip install poetry
poetry install
poetry shell
```

## Basic Usage
The primary features are classes:
- `BinaryDataTest`
- `NormalDataTest`
- `DeltaLognormalDataTest`
- `DeltaNormalDataTest`
- `DiscreteDataTest`
- `PoissonDataTest`
- `ExponentialDataTest`

All test classes support two methods to insert the data:
- `add_variant_data` - Adding raw data for a variant as a list of observations (or numpy 1-D array).
- `add_variant_data_agg` - Adding aggregated variant data (this can be practical for large data,
as the aggregation can already be done at the database level).

Both methods for adding data allow specifying prior distributions
(see details in the respective docstrings). The default prior setup should be sufficient for most
cases (e.g. cases with unknown priors or large amounts of data).
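As a rough intuition for choosing custom priors (a general conjugate-prior heuristic, not specific
to this package): for a Beta prior, the parameters act like pseudo-observations, so their sum plays
the role of a prior sample size. For example, the `a_prior=1, b_prior=20` prior shown in the
`BinaryDataTest` example below encodes roughly:

```python
# Beta(a, b) prior intuition: "a" acts like prior positives, "b" like prior negatives
a_prior, b_prior = 1, 20

prior_mean = a_prior / (a_prior + b_prior)  # prior guess of the positive rate (~0.0476)
prior_strength = a_prior + b_prior          # weight of the prior, in pseudo-observations (21)
```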

To get the results of the test, simply call the method `evaluate`.

Probabilities of being best and expected loss are approximated using simulations, hence the
`evaluate` method can return slightly different values across runs. To stabilize the results, you
can set the `sim_count` parameter of `evaluate` to a higher value (the default is 20K), or use the
`seed` parameter to make the results fully reproducible.


### BinaryDataTest
Class for a Bayesian A/B test for binary-like data (e.g. conversions, successes, etc.).

**Example:**
```python
import numpy as np
from bayesian_testing.experiments import BinaryDataTest

# generating some random data
rng = np.random.default_rng(52)
# random 1x1500 array of 0/1 data with 5.2% probability for 1:
data_a = rng.binomial(n=1, p=0.052, size=1500)
# random 1x1200 array of 0/1 data with 6.7% probability for 1:
data_b = rng.binomial(n=1, p=0.067, size=1200)

# initialize a test:
test = BinaryDataTest()

# add variant using raw data (arrays of zeros and ones):
test.add_variant_data("A", data_a)
test.add_variant_data("B", data_b)
# priors can be specified like this (default for this test is a=b=1/2):
# test.add_variant_data("B", data_b, a_prior=1, b_prior=20)

# add variant using aggregated data (same as raw data with 950 zeros and 50 ones):
test.add_variant_data_agg("C", totals=1000, positives=50)

# evaluate test:
results = test.evaluate()
results # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------+-----------+---------------+----------------+-----------------+---------------+
    | variant | totals | positives | positive_rate | posterior_mean | prob_being_best | expected_loss |
    +=========+========+===========+===============+================+=================+===============+
    | A       |   1500 |        80 |       0.05333 |        0.05363 |         0.067   |     0.0138102 |
    +---------+--------+-----------+---------------+----------------+-----------------+---------------+
    | B       |   1200 |        80 |       0.06667 |        0.06703 |         0.88975 |     0.0004622 |
    +---------+--------+-----------+---------------+----------------+-----------------+---------------+
    | C       |   1000 |        50 |       0.05    |        0.05045 |         0.04325 |     0.0169686 |
    +---------+--------+-----------+---------------+----------------+-----------------+---------------+
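As a sanity check, the `posterior_mean` column above can be reproduced by hand from the default
Beta(1/2, 1/2) prior, assuming the standard Beta-Binomial conjugate update:

```python
def beta_posterior_mean(totals, positives, a_prior=0.5, b_prior=0.5):
    """Posterior mean of a Beta(a_prior, b_prior) prior updated with binary data."""
    return (a_prior + positives) / (a_prior + b_prior + totals)

print(round(beta_posterior_mean(1500, 80), 5))  # 0.05363 (variant A)
print(round(beta_posterior_mean(1200, 80), 5))  # 0.06703 (variant B)
print(round(beta_posterior_mean(1000, 50), 5))  # 0.05045 (variant C)
```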

### NormalDataTest
Class for a Bayesian A/B test for normal data.

**Example:**
```python
import numpy as np
from bayesian_testing.experiments import NormalDataTest

# generating some random data
rng = np.random.default_rng(21)
data_a = rng.normal(7.2, 2, 1000)
data_b = rng.normal(7.1, 2, 800)
data_c = rng.normal(7.0, 4, 500)

# initialize a test:
test = NormalDataTest()

# add variant using raw data:
test.add_variant_data("A", data_a)
test.add_variant_data("B", data_b)
# test.add_variant_data("C", data_c)

# add variant using aggregated data:
test.add_variant_data_agg("C", len(data_c), sum(data_c), sum(np.square(data_c)))

# evaluate test:
results = test.evaluate(sim_count=20000, seed=52, min_is_best=False)
results # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------+------------+------------+----------------+-----------------+---------------+
    | variant | totals | sum_values | avg_values | posterior_mean | prob_being_best | expected_loss |
    +=========+========+============+============+================+=================+===============+
    | A       |   1000 |    7294.68 |    7.29468 |        7.29462 |         0.1707  |     0.196874  |
    +---------+--------+------------+------------+----------------+-----------------+---------------+
    | B       |    800 |    5685.86 |    7.10733 |        7.10725 |         0.00125 |     0.385112  |
    +---------+--------+------------+------------+----------------+-----------------+---------------+
    | C       |    500 |    3736.92 |    7.47383 |        7.4737  |         0.82805 |     0.0169998 |
    +---------+--------+------------+------------+----------------+-----------------+---------------+

### DeltaLognormalDataTest
Class for a Bayesian A/B test for delta-lognormal data (log-normal data with zeros).
Delta-lognormal data is a typical case of revenue-per-session data, where many sessions have zero
revenue and the non-zero values are positive, with an approximately log-normal distribution.
To handle such data, the calculation combines a binary Bayes model for zero vs. non-zero
"conversions" with a log-normal model for the non-zero values.
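The decomposition can be illustrated with plain sample statistics (a sketch of the idea only; the
package works with full posterior distributions, not point estimates):

```python
import numpy as np

# a few revenue-per-session observations (taken from data_a in the example below)
data = [7.1, 0.3, 5.9, 0, 1.3, 0.3, 0, 1.2, 0, 3.6, 0, 1.5]

nonzero = [x for x in data if x > 0]
p_nonzero = len(nonzero) / len(data)      # "conversion" part (binary model)
avg_nonzero = float(np.mean(nonzero))     # value part (log-normal model on non-zero values)

# the average value per session factorizes into the two parts
avg_per_session = p_nonzero * avg_nonzero  # equals np.mean(data)
```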

**Example:**
```python
import numpy as np
from bayesian_testing.experiments import DeltaLognormalDataTest

test = DeltaLognormalDataTest()

data_a = [7.1, 0.3, 5.9, 0, 1.3, 0.3, 0, 1.2, 0, 3.6, 0, 1.5, 2.2, 0, 4.9, 0, 0, 1.1, 0, 0, 7.1, 0, 6.9, 0]
data_b = [4.0, 0, 3.3, 19.3, 18.5, 0, 0, 0, 12.9, 0, 0, 0, 10.2, 0, 0, 23.1, 0, 3.7, 0, 0, 11.3, 10.0, 0, 18.3, 12.1]

# adding variant using raw data:
test.add_variant_data("A", data_a)
# test.add_variant_data("B", data_b)

# alternatively, a variant can also be added using aggregated data
# (this looks more complicated, but it can be quite handy for large data):
test.add_variant_data_agg(
    name="B",
    totals=len(data_b),
    positives=sum(x > 0 for x in data_b),
    sum_values=sum(data_b),
    sum_logs=sum([np.log(x) for x in data_b if x > 0]),
    sum_logs_2=sum([np.square(np.log(x)) for x in data_b if x > 0])
)

# evaluate test:
results = test.evaluate(seed=21)
results # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------+-----------+------------+------------+---------------------+-----------------+---------------+
    | variant | totals | positives | sum_values | avg_values | avg_positive_values | prob_being_best | expected_loss |
    +=========+========+===========+============+============+=====================+=================+===============+
    | A       |     24 |        13 |       43.4 |    1.80833 |             3.33846 |         0.04815 |      4.09411  |
    +---------+--------+-----------+------------+------------+---------------------+-----------------+---------------+
    | B       |     25 |        12 |      146.7 |    5.868   |            12.225   |         0.95185 |      0.158863 |
    +---------+--------+-----------+------------+------------+---------------------+-----------------+---------------+

***Note**: Alternatively, `DeltaNormalDataTest` can be used for cases where the non-zero values
are not necessarily positive.*

### DiscreteDataTest
Class for a Bayesian A/B test for discrete data with a finite number of numerical categories
(states), each representing some value.
This test can be used, for instance, for dice roll data (when looking for the "best" of multiple
dice) or rating data (e.g. 1-5 stars or a 1-10 scale).

**Example:**
```python
from bayesian_testing.experiments import DiscreteDataTest

# dice rolls data for 3 dice - A, B, C
data_a = [2, 5, 1, 4, 6, 2, 2, 6, 3, 2, 6, 3, 4, 6, 3, 1, 6, 3, 5, 6]
data_b = [1, 2, 2, 2, 2, 3, 2, 3, 4, 2]
data_c = [1, 3, 6, 5, 4]

# initialize a test with all possible states (i.e. numerical categories):
test = DiscreteDataTest(states=[1, 2, 3, 4, 5, 6])

# add variant using raw data:
test.add_variant_data("A", data_a)
test.add_variant_data("B", data_b)
test.add_variant_data("C", data_c)

# add variant using aggregated data:
# test.add_variant_data_agg("C", [1, 0, 1, 1, 1, 1]) # equivalent to rolls in data_c

# evaluate test:
results = test.evaluate(sim_count=20000, seed=52, min_is_best=False)
results # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------------------------------------------------+---------------+-----------------+---------------+
    | variant | concentration                                    | average_value | prob_being_best | expected_loss |
    +=========+==================================================+===============+=================+===============+
    | A       | {1: 2.0, 2: 4.0, 3: 4.0, 4: 2.0, 5: 2.0, 6: 6.0} |           3.8 |         0.54685 |      0.199953 |
    +---------+--------------------------------------------------+---------------+-----------------+---------------+
    | B       | {1: 1.0, 2: 6.0, 3: 2.0, 4: 1.0, 5: 0.0, 6: 0.0} |           2.3 |         0.008   |      1.18268  |
    +---------+--------------------------------------------------+---------------+-----------------+---------------+
    | C       | {1: 1.0, 2: 0.0, 3: 1.0, 4: 1.0, 5: 1.0, 6: 1.0} |           3.8 |         0.44515 |      0.287025 |
    +---------+--------------------------------------------------+---------------+-----------------+---------------+
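The `average_value` column is simply the count-weighted mean of the states (the `concentration`
values here match the observed counts), so variant A can be checked by hand:

```python
# concentration of variant A from the table above (state -> count)
concentration_a = {1: 2.0, 2: 4.0, 3: 4.0, 4: 2.0, 5: 2.0, 6: 6.0}

total_rolls = sum(concentration_a.values())
average_value = sum(state * count for state, count in concentration_a.items()) / total_rolls
print(average_value)  # 3.8
```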

### PoissonDataTest
Class for a Bayesian A/B test for Poisson data.

**Example:**
```python
from bayesian_testing.experiments import PoissonDataTest

# goals against - so fewer is better (duh...)
psg_goals_against = [0, 2, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 3, 1, 0]
city_goals_against = [0, 0, 3, 2, 0, 1, 0, 3, 0, 1, 1, 0, 1, 2]
bayern_goals_against = [1, 0, 0, 1, 1, 2, 1, 0, 2, 0, 0, 2, 2, 1, 0]

# initialize a test:
test = PoissonDataTest()

# add variant using raw data:
test.add_variant_data('psg', psg_goals_against)

# example with specific priors
# ("b_prior" as an effective sample size, and "a_prior/b_prior" as a prior mean):
test.add_variant_data('city', city_goals_against, a_prior=3, b_prior=1)
# test.add_variant_data('bayern', bayern_goals_against)

# add variant using aggregated data:
test.add_variant_data_agg("bayern", len(bayern_goals_against), sum(bayern_goals_against))

# evaluate test (since fewer goals against is better, we explicitly set min_is_best=True)
results = test.evaluate(sim_count=20000, seed=52, min_is_best=True)
results # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | variant | totals | sum_values | observed_average | posterior_mean | prob_being_best | expected_loss |
    +=========+========+============+==================+================+=================+===============+
    | psg     |     15 |          9 |          0.6     |        0.60265 |         0.78175 |     0.0369998 |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | city    |     14 |         14 |          1       |        1.13333 |         0.0344  |     0.562055  |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | bayern  |     15 |         13 |          0.86667 |        0.86755 |         0.18385 |     0.300335  |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+

_Note: Since we set `min_is_best=True` (because goals against are "bad"), probability and loss
favor variants with lower posterior means._
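For the `city` variant, where explicit priors were passed, the `posterior_mean` above can be
reproduced by hand, assuming the standard Gamma-Poisson conjugate update (`a_prior` acting as a
prior event count and `b_prior` as a prior exposure):

```python
# Gamma-Poisson update: Gamma(a, b) prior -> Gamma(a + sum_values, b + totals) posterior
a_prior, b_prior = 3, 1        # priors used for 'city' in the example above
totals, sum_values = 14, 14    # 14 matches, 14 goals against

posterior_mean = (a_prior + sum_values) / (b_prior + totals)
print(round(posterior_mean, 5))  # 1.13333
```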

### ExponentialDataTest
Class for a Bayesian A/B test for exponential data.

**Example:**
```python
import numpy as np
from bayesian_testing.experiments import ExponentialDataTest

# waiting times for 3 different variants, each with many observations,
# generated using exponential distributions with defined scales (expected values)
waiting_times_a = np.random.exponential(scale=10, size=200)
waiting_times_b = np.random.exponential(scale=11, size=210)
waiting_times_c = np.random.exponential(scale=11, size=220)

# initialize a test:
test = ExponentialDataTest()
# adding variants using the observation data:
test.add_variant_data('A', waiting_times_a)
test.add_variant_data('B', waiting_times_b)
test.add_variant_data('C', waiting_times_c)

# alternatively, add variants using aggregated data:
# test.add_variant_data_agg('A', len(waiting_times_a), sum(waiting_times_a))

# evaluate test (since a lower waiting time is better, we explicitly set min_is_best=True)
results = test.evaluate(sim_count=20000, min_is_best=True)
results # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | variant | totals | sum_values | observed_average | posterior_mean | prob_being_best | expected_loss |
    +=========+========+============+==================+================+=================+===============+
    | A       |    200 |    1884.18 |          9.42092 |        9.41671 |         0.89785 |     0.0395505 |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | B       |    210 |    2350.03 |         11.1906  |       11.1858  |         0.03405 |     1.80781   |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | C       |    220 |    2380.65 |         10.8211  |       10.8167  |         0.0681  |     1.4408    |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+

## Development
To set up a development environment, use [Poetry](https://python-poetry.org/) and [pre-commit](https://pre-commit.com):
```console
pip install poetry
poetry install
poetry run pre-commit install
```

## To be implemented

Additional metrics:
- `Potential Value Remaining`

## References
- The `bayesian_testing` package itself depends only on the [numpy](https://numpy.org) package.
- Work on this package (including the choice of default priors) was inspired mainly by the Coursera
course [Bayesian Statistics: From Concept to Data Analysis](https://www.coursera.org/learn/bayesian-statistics).

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/Matt52/bayesian-testing",
    "name": "bayesian-testing",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.7.1,<4.0.0",
    "maintainer_email": "",
    "keywords": "ab testing,bayes,bayesian statistics",
    "author": "Matus Baniar",
    "author_email": "",
    "download_url": "https://files.pythonhosted.org/packages/72/5f/2c424f3a8e25d7904c8f0e7b8de130fdd87c4adc674ce5115248a3a061ed/bayesian_testing-0.6.1.tar.gz",
    "platform": null,
    "description": "[![Tests](https://github.com/Matt52/bayesian-testing/workflows/Tests/badge.svg)](https://github.com/Matt52/bayesian-testing/actions?workflow=Tests)\n[![Codecov](https://codecov.io/gh/Matt52/bayesian-testing/branch/main/graph/badge.svg)](https://codecov.io/gh/Matt52/bayesian-testing)\n[![PyPI](https://img.shields.io/pypi/v/bayesian-testing.svg)](https://pypi.org/project/bayesian-testing/)\n# Bayesian A/B testing\n`bayesian_testing` is a small package for a quick evaluation of A/B (or A/B/C/...) tests using\nBayesian approach.\n\n**Implemented tests:**\n- [BinaryDataTest](bayesian_testing/experiments/binary.py)\n  - **_Input data_** - binary data (`[0, 1, 0, ...]`)\n  - Designed for conversion-like data A/B testing.\n- [NormalDataTest](bayesian_testing/experiments/normal.py)\n  - **_Input data_** - normal data with unknown variance\n  - Designed for normal data A/B testing.\n- [DeltaLognormalDataTest](bayesian_testing/experiments/delta_lognormal.py)\n  - **_Input data_** - lognormal data with zeros\n  - Designed for revenue-like data A/B testing.\n- [DeltaNormalDataTest](bayesian_testing/experiments/delta_normal.py)\n  - **_Input data_** - normal data with zeros\n  - Designed for profit-like data A/B testing.\n- [DiscreteDataTest](bayesian_testing/experiments/discrete.py)\n  - **_Input data_** - categorical data with numerical categories\n  - Designed for discrete data A/B testing (e.g. dice rolls, star ratings, 1-10 ratings, etc.).\n- [PoissonDataTest](bayesian_testing/experiments/poisson.py)\n  - **_Input data_** - non-negative integers (`[1, 0, 3, ...]`)\n  - Designed for poisson data A/B testing.\n- [ExponentialDataTest](bayesian_testing/experiments/exponential.py)\n  - **_Input data_** - exponential data (non-negative real numbers)\n  - Designed for exponential data A/B testing (e.g. 
session/waiting time, time between events,\netc.).\n\n**Implemented evaluation metrics:**\n- `Probability of Being Best`\n  - Probability that a given variant is best among all variants.\n  - By default, `the best` is equivalent to `the greatest` (from a data/metric point of view),\nhowever it is possible to change this by using `min_is_best=True` in the evaluation method\n(this can be useful if we try to find the variant while minimizing the tested measure).\n- `Expected Loss`\n  - \"Risk\" of choosing particular variant over other variants in the test.\n  - Measured in the same units as a tested measure (e.g. positive rate or average value).\n\nBoth evaluation metrics are calculated using simulations from posterior distributions (considering\ngiven data).\n\n\n## Installation\n`bayesian_testing` can be installed using pip:\n```console\npip install bayesian_testing\n```\nAlternatively, you can clone the repository and use `poetry` manually:\n```console\ncd bayesian_testing\npip install poetry\npoetry install\npoetry shell\n```\n\n## Basic Usage\nThe primary features are classes:\n- `BinaryDataTest`\n- `NormalDataTest`\n- `DeltaLognormalDataTest`\n- `DeltaNormalDataTest`\n- `DiscreteDataTest`\n- `PoissonDataTest`\n- `ExponentialDataTest`\n\nAll test classes support two methods to insert the data:\n- `add_variant_data` - Adding raw data for a variant as a list of observations (or numpy 1-D array).\n- `add_variant_data_agg` - Adding aggregated variant data (this can be practical for a large data,\nas the aggregation can be done already on a database level).\n\nBoth methods for adding data allow specification of prior distributions\n(see details in respective docstrings). Default prior setup should be sufficient for most of the\ncases (e.g. 
cases with unknown priors or large amounts of data).\n\nTo get the results of the test, simply call the method `evaluate`.\n\nProbabilities of being best and expected loss are approximated using simulations, hence the\n`evaluate` method can return slightly different values for different runs. To stabilize it, you can\nset the `sim_count` parameter of the `evaluate` to a higher value (default value is 20K), or even\nuse the `seed` parameter to fix it completely.\n\n\n### BinaryDataTest\nClass for a Bayesian A/B test for the binary-like data (e.g. conversions, successes, etc.).\n\n**Example:**\n```python\nimport numpy as np\nfrom bayesian_testing.experiments import BinaryDataTest\n\n# generating some random data\nrng = np.random.default_rng(52)\n# random 1x1500 array of 0/1 data with 5.2% probability for 1:\ndata_a = rng.binomial(n=1, p=0.052, size=1500)\n# random 1x1200 array of 0/1 data with 6.7% probability for 1:\ndata_b = rng.binomial(n=1, p=0.067, size=1200)\n\n# initialize a test:\ntest = BinaryDataTest()\n\n# add variant using raw data (arrays of zeros and ones):\ntest.add_variant_data(\"A\", data_a)\ntest.add_variant_data(\"B\", data_b)\n# priors can be specified like this (default for this test is a=b=1/2):\n# test.add_variant_data(\"B\", data_b, a_prior=1, b_prior=20)\n\n# add variant using aggregated data (same as raw data with 950 zeros and 50 ones):\ntest.add_variant_data_agg(\"C\", totals=1000, positives=50)\n\n# evaluate test:\nresults = test.evaluate()\nresults # print(pd.DataFrame(results).to_markdown(tablefmt=\"grid\", index=False))\n```\n\n    +---------+--------+-----------+---------------+----------------+-----------------+---------------+\n    | variant | totals | positives | positive_rate | posterior_mean | prob_being_best | expected_loss |\n    +=========+========+===========+===============+================+=================+===============+\n    | A       |   1500 |        80 |       0.05333 |        0.05363 |         0.067   |     
0.0138102 |\n    +---------+--------+-----------+---------------+----------------+-----------------+---------------+\n    | B       |   1200 |        80 |       0.06667 |        0.06703 |         0.88975 |     0.0004622 |\n    +---------+--------+-----------+---------------+----------------+-----------------+---------------+\n    | C       |   1000 |        50 |       0.05    |        0.05045 |         0.04325 |     0.0169686 |\n    +---------+--------+-----------+---------------+----------------+-----------------+---------------+\n\n### NormalDataTest\nClass for a Bayesian A/B test for the normal data.\n\n**Example:**\n```python\nimport numpy as np\nfrom bayesian_testing.experiments import NormalDataTest\n\n# generating some random data\nrng = np.random.default_rng(21)\ndata_a = rng.normal(7.2, 2, 1000)\ndata_b = rng.normal(7.1, 2, 800)\ndata_c = rng.normal(7.0, 4, 500)\n\n# initialize a test:\ntest = NormalDataTest()\n\n# add variant using raw data:\ntest.add_variant_data(\"A\", data_a)\ntest.add_variant_data(\"B\", data_b)\n# test.add_variant_data(\"C\", data_c)\n\n# add variant using aggregated data:\ntest.add_variant_data_agg(\"C\", len(data_c), sum(data_c), sum(np.square(data_c)))\n\n# evaluate test:\nresults = test.evaluate(sim_count=20000, seed=52, min_is_best=False)\nresults # print(pd.DataFrame(results).to_markdown(tablefmt=\"grid\", index=False))\n```\n\n    +---------+--------+------------+------------+----------------+-----------------+---------------+\n    | variant | totals | sum_values | avg_values | posterior_mean | prob_being_best | expected_loss |\n    +=========+========+============+============+================+=================+===============+\n    | A       |   1000 |    7294.68 |    7.29468 |        7.29462 |         0.1707  |     0.196874  |\n    +---------+--------+------------+------------+----------------+-----------------+---------------+\n    | B       |    800 |    5685.86 |    7.10733 |        7.10725 |         0.00125 |     
0.385112  |\n    +---------+--------+------------+------------+----------------+-----------------+---------------+\n    | C       |    500 |    3736.92 |    7.47383 |        7.4737  |         0.82805 |     0.0169998 |\n    +---------+--------+------------+------------+----------------+-----------------+---------------+\n\n### DeltaLognormalDataTest\nClass for a Bayesian A/B test for the delta-lognormal data (log-normal with zeros).\nDelta-lognormal data is typical case of revenue per session data where many sessions have 0 revenue\nbut non-zero values are positive values with possible log-normal distribution.\nTo handle this data, the calculation is combining binary Bayes model for zero vs non-zero\n\"conversions\" and log-normal model for non-zero values.\n\n**Example:**\n```python\nimport numpy as np\nfrom bayesian_testing.experiments import DeltaLognormalDataTest\n\ntest = DeltaLognormalDataTest()\n\ndata_a = [7.1, 0.3, 5.9, 0, 1.3, 0.3, 0, 1.2, 0, 3.6, 0, 1.5, 2.2, 0, 4.9, 0, 0, 1.1, 0, 0, 7.1, 0, 6.9, 0]\ndata_b = [4.0, 0, 3.3, 19.3, 18.5, 0, 0, 0, 12.9, 0, 0, 0, 10.2, 0, 0, 23.1, 0, 3.7, 0, 0, 11.3, 10.0, 0, 18.3, 12.1]\n\n# adding variant using raw data:\ntest.add_variant_data(\"A\", data_a)\n# test.add_variant_data(\"B\", data_b)\n\n# alternatively, variant can be also added using aggregated data\n# (looks more complicated, but it can be quite handy for a large data):\ntest.add_variant_data_agg(\n    name=\"B\",\n    totals=len(data_b),\n    positives=sum(x > 0 for x in data_b),\n    sum_values=sum(data_b),\n    sum_logs=sum([np.log(x) for x in data_b if x > 0]),\n    sum_logs_2=sum([np.square(np.log(x)) for x in data_b if x > 0])\n)\n\n# evaluate test:\nresults = test.evaluate(seed=21)\nresults # print(pd.DataFrame(results).to_markdown(tablefmt=\"grid\", index=False))\n```\n\n    +---------+--------+-----------+------------+------------+---------------------+-----------------+---------------+\n    | variant | totals | positives | sum_values | avg_values | 
avg_positive_values | prob_being_best | expected_loss |\n    +=========+========+===========+============+============+=====================+=================+===============+\n    | A       |     24 |        13 |       43.4 |    1.80833 |             3.33846 |         0.04815 |      4.09411  |\n    +---------+--------+-----------+------------+------------+---------------------+-----------------+---------------+\n    | B       |     25 |        12 |      146.7 |    5.868   |            12.225   |         0.95185 |      0.158863 |\n    +---------+--------+-----------+------------+------------+---------------------+-----------------+---------------+\n\n***Note**: Alternatively, `DeltaNormalDataTest` can be used for a case when conversions are not\nnecessarily positive values.*\n\n### DiscreteDataTest\nClass for a Bayesian A/B test for the discrete data with finite number of numerical categories\n(states), representing some value.\nThis test can be used for instance for dice rolls data (when looking for the \"best\" of multiple\ndice) or rating data (e.g. 1-5 stars or 1-10 scale).\n\n**Example:**\n```python\nfrom bayesian_testing.experiments import DiscreteDataTest\n\n# dice rolls data for 3 dice - A, B, C\ndata_a = [2, 5, 1, 4, 6, 2, 2, 6, 3, 2, 6, 3, 4, 6, 3, 1, 6, 3, 5, 6]\ndata_b = [1, 2, 2, 2, 2, 3, 2, 3, 4, 2]\ndata_c = [1, 3, 6, 5, 4]\n\n# initialize a test with all possible states (i.e. 
numerical categories):
test = DiscreteDataTest(states=[1, 2, 3, 4, 5, 6])

# add variant using raw data:
test.add_variant_data("A", data_a)
test.add_variant_data("B", data_b)
test.add_variant_data("C", data_c)

# add variant using aggregated data:
# test.add_variant_data_agg("C", [1, 0, 1, 1, 1, 1])  # equivalent to the rolls in data_c

# evaluate test:
results = test.evaluate(sim_count=20000, seed=52, min_is_best=False)
results  # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------------------------------------------------+---------------+-----------------+---------------+
    | variant | concentration                                    | average_value | prob_being_best | expected_loss |
    +=========+==================================================+===============+=================+===============+
    | A       | {1: 2.0, 2: 4.0, 3: 4.0, 4: 2.0, 5: 2.0, 6: 6.0} |           3.8 |         0.54685 |      0.199953 |
    +---------+--------------------------------------------------+---------------+-----------------+---------------+
    | B       | {1: 1.0, 2: 6.0, 3: 2.0, 4: 1.0, 5: 0.0, 6: 0.0} |           2.3 |         0.008   |      1.18268  |
    +---------+--------------------------------------------------+---------------+-----------------+---------------+
    | C       | {1: 1.0, 2: 0.0, 3: 1.0, 4: 1.0, 5: 1.0, 6: 1.0} |           3.8 |         0.44515 |      0.287025 |
    +---------+--------------------------------------------------+---------------+-----------------+---------------+

### PoissonDataTest
Class for a Bayesian A/B test for Poisson data.

**Example:**
```python
from bayesian_testing.experiments import PoissonDataTest

# goals conceded - so fewer is better (duh...)
psg_goals_against = [0, 2, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 3, 1, 0]
city_goals_against = [0, 0, 3, 2, 0, 1, 0, 3, 0, 1, 1, 0, 1, 2]
bayern_goals_against = [1, 0, 0, 1, 1, 2, 1, 0, 2, 0, 0, 2, 2, 1, 0]

# initialize a test:
test = PoissonDataTest()

# add variant using raw data:
test.add_variant_data('psg', psg_goals_against)

# example with specific priors
# ("b_prior" as an effective sample size, and "a_prior/b_prior" as a prior mean):
test.add_variant_data('city', city_goals_against, a_prior=3, b_prior=1)
# test.add_variant_data('bayern', bayern_goals_against)

# add variant using aggregated data:
test.add_variant_data_agg("bayern", len(bayern_goals_against), sum(bayern_goals_against))

# evaluate test (since fewer goals conceded is better, we explicitly set min_is_best=True):
results = test.evaluate(sim_count=20000, seed=52, min_is_best=True)
results  # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | variant | totals | sum_values | observed_average | posterior_mean | prob_being_best | expected_loss |
    +=========+========+============+==================+================+=================+===============+
    | psg     |     15 |          9 |          0.6     |        0.60265 |         0.78175 |     0.0369998 |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | city    |     14 |         14 |          1       |        1.13333 |         0.0344  |     0.562055  |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | bayern  |     15 |         13 |          0.86667 |        0.86755 |         0.18385 |     0.300335  |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+

_note: Since we set `min_is_best=True` (because conceded goals are "bad"), probability and loss are
in favor of variants with lower posterior means._

### ExponentialDataTest
Class for a Bayesian A/B test for exponential data.

**Example:**
```python
import numpy as np
from bayesian_testing.experiments import ExponentialDataTest

# waiting times for 3 different variants, each with many observations,
# generated using exponential distributions with defined scales (expected values)
waiting_times_a = np.random.exponential(scale=10, size=200)
waiting_times_b = np.random.exponential(scale=11, size=210)
waiting_times_c = np.random.exponential(scale=11, size=220)

# initialize a test:
test = ExponentialDataTest()

# add variants using the observation data:
test.add_variant_data('A', waiting_times_a)
test.add_variant_data('B', waiting_times_b)
test.add_variant_data('C', waiting_times_c)

# alternatively, add variants using aggregated data:
# test.add_variant_data_agg('A', len(waiting_times_a), sum(waiting_times_a))

# evaluate test (since a lower waiting time is better, we explicitly set min_is_best=True):
results = test.evaluate(sim_count=20000, min_is_best=True)
results  # print(pd.DataFrame(results).to_markdown(tablefmt="grid", index=False))
```

    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | variant | totals | sum_values | observed_average | posterior_mean | prob_being_best | expected_loss |
    +=========+========+============+==================+================+=================+===============+
    | A       |    200 |    1884.18 |          9.42092 |        9.41671 |         0.89785 |     0.0395505 |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | B       |    210 |    2350.03 |         11.1906  |       11.1858  |         0.03405 |     1.80781   |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+
    | C       |    220 |    2380.65 |         10.8211  |       10.8167  |         0.0681  |     1.4408    |
    +---------+--------+------------+------------------+----------------+-----------------+---------------+

## Development
To set up a development environment, use [Poetry](https://python-poetry.org/) and [pre-commit](https://pre-commit.com):
```console
pip install poetry
poetry install
poetry run pre-commit install
```

## To be implemented

Additional metrics:
- `Potential Value Remaining`

## References
- The `bayesian_testing` package itself depends only on the [numpy](https://numpy.org) package.
- Work on this package (including the selection of default priors) was inspired mainly by the Coursera
course [Bayesian Statistics: From Concept to Data Analysis](https://www.coursera.org/learn/bayesian-statistics).
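The `posterior_mean` column in the `PoissonDataTest` example can be checked by hand: Poisson counts with a Gamma prior are conjugate, so a prior Gamma(a, b) combined with n observations summing to s yields the posterior Gamma(a + s, b + n), whose mean is (a + s) / (b + n). A minimal sketch of that check follows; the Gamma(0.1, 0.1) prior used for `psg` is an assumption inferred from the table above, not something the README states.

```python
# Gamma-Poisson conjugate update: prior Gamma(a, b) plus n counts summing to s
# gives posterior Gamma(a + s, b + n), with posterior mean (a + s) / (b + n).
def gamma_poisson_posterior_mean(a_prior, b_prior, n, total):
    return (a_prior + total) / (b_prior + n)

psg_goals_against = [0, 2, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 3, 1, 0]
city_goals_against = [0, 0, 3, 2, 0, 1, 0, 3, 0, 1, 1, 0, 1, 2]

# 'city' was added with explicit priors a_prior=3, b_prior=1:
city = gamma_poisson_posterior_mean(3, 1, len(city_goals_against), sum(city_goals_against))

# 'psg' used the defaults; a_prior=b_prior=0.1 reproduces the table value
# (assumed here for illustration):
psg = gamma_poisson_posterior_mean(0.1, 0.1, len(psg_goals_against), sum(psg_goals_against))

print(round(city, 5), round(psg, 5))  # 1.13333 0.60265
```

Both values match the `posterior_mean` column, which also illustrates the note above: `b_prior` acts as an effective sample size and `a_prior / b_prior` as a prior mean.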
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Bayesian A/B testing with simple probabilities.",
    "version": "0.6.1",
    "project_urls": {
        "Homepage": "https://github.com/Matt52/bayesian-testing",
        "Repository": "https://github.com/Matt52/bayesian-testing"
    },
    "split_keywords": [
        "ab testing",
        "bayes",
        "bayesian statistics"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "88d671bdaa66c202c612dffc25cbbc79a4f1d1ff67443c77aaf3b499585bd7ed",
                "md5": "5da6df1cb89945475f9cc9552741a917",
                "sha256": "91655e9a5d7a6302a4c95210aed9d109423e612dcf6e6f23bb8234320369906d"
            },
            "downloads": -1,
            "filename": "bayesian_testing-0.6.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "5da6df1cb89945475f9cc9552741a917",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7.1,<4.0.0",
            "size": 31966,
            "upload_time": "2023-12-23T14:38:43",
            "upload_time_iso_8601": "2023-12-23T14:38:43.294448Z",
            "url": "https://files.pythonhosted.org/packages/88/d6/71bdaa66c202c612dffc25cbbc79a4f1d1ff67443c77aaf3b499585bd7ed/bayesian_testing-0.6.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "725f2c424f3a8e25d7904c8f0e7b8de130fdd87c4adc674ce5115248a3a061ed",
                "md5": "cfed75262fba681ec94e2aa5f900ab76",
                "sha256": "941bb406bf10f42da27a383baf187183469769be5c67f673117478f14f065e46"
            },
            "downloads": -1,
            "filename": "bayesian_testing-0.6.1.tar.gz",
            "has_sig": false,
            "md5_digest": "cfed75262fba681ec94e2aa5f900ab76",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7.1,<4.0.0",
            "size": 20262,
            "upload_time": "2023-12-23T14:38:45",
            "upload_time_iso_8601": "2023-12-23T14:38:45.147992Z",
            "url": "https://files.pythonhosted.org/packages/72/5f/2c424f3a8e25d7904c8f0e7b8de130fdd87c4adc674ce5115248a3a061ed/bayesian_testing-0.6.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-12-23 14:38:45",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Matt52",
    "github_project": "bayesian-testing",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "bayesian-testing"
}
        