ep-stats


Name: ep-stats
Version: 2.3.2
Home page: https://github.com/avast/ep-stats
Summary: Statistical package to evaluate ab tests in experimentation platform.
Upload time: 2024-04-24 12:57:29
Author: Ondrej Zahradnik
Requires Python: >=3.9
License: None
Requirements: anyio, asgiref, click, fastapi, h11, idna, numpy, packaging, pandas, patsy, prometheus-client, pydantic, pyparsing, python-dateutil, pytz, scipy, six, sniffio, starlette, statsmodels, typing-extensions, tzdata, uvicorn
            ![](https://img.shields.io/github/workflow/status/avast/ep-stats/Code%20Checks?color=green)
[![PyPI version](https://img.shields.io/pypi/v/ep-stats?color=green)](https://pypi.org/project/ep-stats/)
[![Python versions](https://img.shields.io/pypi/pyversions/ep-stats?color=green)](https://pypi.org/project/ep-stats/)
[![Code style](https://img.shields.io/badge/formatted%20with-brunette-362511)](https://github.com/odwyersoftware/brunette)
[![Code style](https://img.shields.io/badge/styled%20with-flake8-green)](https://flake8.pycqa.org/en/latest/)
![](https://img.shields.io/github/languages/code-size/avast/ep-stats?color=green)
<img src="theme/experiment_b.png" align="right" />

# ep-stats

**Statistical package for the experimentation platform.**

It provides a general Python package and a REST API that can be used to evaluate any metric
in an A/B test experiment.

## Features

* Robust two-tailed t-test implementation with multiple-testing p-value corrections and the delta method applied.
* Sequential evaluations allow experiments to be stopped early.
* Connect it to any data source to get either pre-aggregated or per-randomization-unit data.
* Simple expression language to define arbitrary metrics.
* Sample size estimation (see the sketch after this list).
* REST API to integrate it as a service in an experimentation portal with scorecards.
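
For intuition about the sample size estimation feature, here is a standalone sketch of the underlying power calculation using statsmodels (already among the dependencies). This is illustration only, not the ep-stats API itself, and the baseline and expected rates are made up:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical inputs: baseline click-through rate of 24%, hoping to detect 27%.
effect_size = proportion_effectsize(0.24, 0.27)

# Required sample size per variant for a two-sided test at alpha=0.05 and 80% power.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.8,
    ratio=1.0,
    alternative='two-sided',
)
print(round(n_per_variant))
```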

## Documentation

We have lovely [documentation](https://avast.github.io/ep-stats/).

## Base Example

ep-stats allows for quick experiment evaluation. We use sample testing data to evaluate the metric `Click-through Rate` in the experiment `test-conversion`.

```python
from epstats.toolkit import Experiment, Metric, SrmCheck

experiment = Experiment(
    'test-conversion',
    'a',  # control variant
    [Metric(
        1,
        'Click-through Rate',
        'count(test_unit_type.unit.click)',       # nominator expression
        'count(test_unit_type.global.exposure)',  # denominator expression
    )],
    [SrmCheck(1, 'SRM', 'count(test_unit_type.global.exposure)')],
    unit_type='test_unit_type')

# This loads testing data; use another Dao or provide aggregated goals in some other way.
from epstats.toolkit.testing import TestData
goals = TestData.load_goals_agg(experiment.id)

# Evaluate the experiment from aggregated goals.
ev = experiment.evaluate_agg(goals)
```

`ev` contains evaluations of exposures, metrics, and checks. It produces the following output.

`ev.exposures`:

| exp_id | exp_variant_id | exposures |
| :----- | :------------- | --------: |
|test-conversion|a|21|
|test-conversion|b|26|

`ev.metrics`:

| exp_id | metric_id | metric_name | exp_variant_id | count | mean | std | sum_value | confidence_level | diff | test_stat | p_value | confidence_interval | standard_error | degrees_of_freedom |
| :----- | --------: | :---------- | -------------: | ----: | ---: | --: | --------: | ---------------: | ---: | --------: | ------: | ------------------: | -------------: | -----------------: |
|test-conversion|1|Click-through Rate|a|21|0.238095|0.436436|5|0.95|0|0|1|1.14329|0.565685|40|
|test-conversion|1|Click-through Rate|b|26|0.269231|0.452344|7|0.95|0.130769|0.223152|0.82446|1.18137|0.586008|43.5401|

`ev.checks`:

| exp_id | check_id | check_name | variable_id | value |
| :----- | -------: | :--------- | :---------- | ----: |
|test-conversion|1|SRM|p_value|0.465803|
|test-conversion|1|SRM|test_stat|0.531915|
|test-conversion|1|SRM|confidence_level|0.999000|
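
The evaluation attributes correspond to the tables above and behave like pandas DataFrames (an assumption based on the tabular output shown here), so standard pandas operations apply; a minimal sketch:

```python
# A minimal sketch, assuming ev.metrics is a pandas DataFrame with the columns shown above.
treatment = ev.metrics[ev.metrics['exp_variant_id'] == 'b']
print(treatment[['metric_name', 'diff', 'p_value', 'confidence_interval']])

# Flag non-control metric rows whose difference is statistically significant at the 95% level.
significant = ev.metrics[
    (ev.metrics['exp_variant_id'] != 'a') & (ev.metrics['p_value'] < 0.05)
]
print(significant[['metric_name', 'diff', 'p_value']])
```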

## Installation

You can install this package via `pip`.

```bash
pip install ep-stats
```

## Running

You can run a testing version of ep-stats via

```bash
python -m epstats
```

Then, see the Swagger UI at [http://localhost:8080/docs](http://localhost:8080/docs) for API documentation.
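
To check from code that the service is up, you can fetch the OpenAPI schema that FastAPI (which ep-stats depends on) serves next to the Swagger UI; the `/openapi.json` path is the FastAPI default and is assumed here:

```python
import json
from urllib.request import urlopen

# Fetch the OpenAPI schema; /openapi.json is FastAPI's default path (assumed here).
with urlopen('http://localhost:8080/openapi.json') as resp:
    schema = json.load(resp)

# List the endpoints exposed by the running ep-stats service.
for path in schema['paths']:
    print(path)
```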

## Contributing

To get started locally, clone the repo and use the `Makefile`.

```bash
git clone https://github.com/avast/ep-stats.git
cd ep-stats
make install-dev
```

It sets up a new virtual environment in `./venv` using [venv](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/), installs all development dependencies, and sets up [pre-commit](https://pre-commit.com/) git hooks to keep the code neatly formatted with [flake8](https://pypi.org/project/flake8/) and [brunette](https://pypi.org/project/brunette/).

To run tests, you can use the `Makefile` as well.

```bash
source venv/bin/activate  # activate python environment
make check
```

To run a development version of ep-stats, run

```bash
source venv/bin/activate
cd src
python -m epstats
```

### Documentation

To update the documentation, run

```bash
mkdocs gh-deploy
```

It updates the documentation on GitHub Pages, which is stored in the `gh-pages` branch.

## Inspiration

The software engineering practices of this package have been heavily inspired by the marvelous [calmcode.io](https://calmcode.io/) site managed by [Vincent D. Warmerdam](https://github.com/koaning).



            
