

<img src="theme/experiment_b.png" align="right" />
# ep-stats
**Statistical package for the experimentation platform.**
It provides a general Python package and a REST API that can be used to evaluate any metric
in an A/B test experiment.
## Features
* Robust two-tailed t-test implementation with multiple p-value corrections and delta methods applied.
* Sequential evaluations allow experiments to be stopped early.
* Connect it to any data source to get either pre-aggregated or per-randomization-unit data.
* Simple expression language to define arbitrary metrics.
* Sample size estimation (illustrated after this list).
* REST API to integrate it as a service in an experimentation portal with scorecards.
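ep-stats includes sample size estimation (see the documentation for the actual API). As an illustration of the underlying calculation only, here is a self-contained sketch of the standard two-sided power formula; the `required_sample_size` helper and its defaults are ours, not part of ep-stats:

```python
from scipy.stats import norm

def required_sample_size(mean, std, minimum_effect, alpha=0.05, power=0.8):
    """Illustrative per-variant sample size for a two-sample, two-sided test.

    `minimum_effect` is relative, so the absolute difference to detect
    is `mean * minimum_effect`. Not part of the ep-stats API.
    """
    delta = mean * minimum_effect  # absolute minimum detectable effect
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(2 * (z * std / delta) ** 2) + 1

# e.g. detecting a 10% relative lift in a click-through rate
# of 0.238 with standard deviation 0.436 (the Base Example numbers)
print(required_sample_size(mean=0.238, std=0.436, minimum_effect=0.10))
```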
## Documentation
We have lovely [documentation](https://avast.github.io/ep-stats/).
## Base Example
ep-stats allows for quick experiment evaluation. Here we use sample testing data to evaluate the metric `Click-through Rate` in the experiment `test-conversion`.
```python
from epstats.toolkit import Experiment, Metric, SrmCheck

experiment = Experiment(
    'test-conversion',
    'a',
    [
        Metric(
            1,
            'Click-through Rate',
            'count(test_unit_type.unit.click)',
            'count(test_unit_type.global.exposure)',
        ),
    ],
    [SrmCheck(1, 'SRM', 'count(test_unit_type.global.exposure)')],
    unit_type='test_unit_type',
)

# This loads testing data; use another Dao or get aggregated goals in some other way.
from epstats.toolkit.testing import TestData
goals = TestData.load_goals_agg(experiment.id)

# evaluate the experiment
ev = experiment.evaluate_agg(goals)
```
`ev` contains evaluations of exposures, metrics, and checks. For the sample data, it produces the following output.
`ev.exposures`:
| exp_id | exp_variant_id | exposures |
| :----- | :------------- | --------: |
|test-conversion|a|21|
|test-conversion|b|26|
`ev.metrics`:
| exp_id | metric_id | metric_name | exp_variant_id | count | mean | std | sum_value | confidence_level | diff | test_stat | p_value | confidence_interval | standard_error | degrees_of_freedom |
| :----- | --------: | :---------- | -------------: | ----: | ---: | --: | --------: | ---------------: | ---: | --------: | ------: | ------------------: | -------------: | -----------------: |
|test-conversion|1|Click-through Rate|a|21|0.238095|0.436436|5|0.95|0|0|1|1.14329|0.565685|40|
|test-conversion|1|Click-through Rate|b|26|0.269231|0.452344|7|0.95|0.130769|0.223152|0.82446|1.18137|0.586008|43.5401|
`ev.checks`:
| exp_id | check_id | check_name | variable_id | value |
| :----- | -------: | :--------- | :---------- | ----: |
|test-conversion|1|SRM|p_value|0.465803|
|test-conversion|1|SRM|test_stat|0.531915|
|test-conversion|1|SRM|confidence_level|0.999000|
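Since `ev.exposures`, `ev.metrics`, and `ev.checks` are pandas DataFrames with the columns shown above, standard filtering applies. A minimal sketch flagging metrics significant at the 5% level:

```python
# Keep only metric rows whose two-tailed p-value clears the 5% level;
# the column names follow the ev.metrics table above.
significant = ev.metrics[ev.metrics['p_value'] < 0.05]
print(significant[['metric_name', 'exp_variant_id', 'diff', 'p_value']])
```

The SRM check above guards against a sample ratio mismatch between variants; its p-value of 0.47 in this run gives no reason to distrust the traffic split.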
## Installation
You can install this package via `pip`.
```bash
pip install ep-stats
```
## Running
You can run a testing version of ep-stats via
```bash
python -m epstats
```
Then, see Swagger on [http://localhost:8080/docs](http://localhost:8080/docs) for API documentation.
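The Swagger UI is backed by an OpenAPI schema. As a minimal sketch, assuming the service exposes the schema at the usual FastAPI path `/openapi.json` (only `/docs` is confirmed by this README), you can list the available endpoints programmatically:

```python
# Fetch the OpenAPI schema behind the Swagger UI and list its endpoints.
# The /openapi.json path is an assumption (the FastAPI default), not
# something this README documents.
import json
from urllib.request import urlopen

with urlopen('http://localhost:8080/openapi.json') as resp:
    schema = json.load(resp)

for path, operations in schema['paths'].items():
    print(path, sorted(operations))
```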
## Contributing
To get started locally, you can clone the repo and quickly get started using the `Makefile`.
```bash
git clone https://github.com/avast/ep-stats.git
cd ep-stats
make install-dev
```
It creates a new virtual environment in `./.venv` using [venv](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/), installs all development dependencies, and sets up [pre-commit](https://pre-commit.com/) git hooks to keep the code neatly formatted with [ruff](https://pypi.org/project/ruff).
To run tests, you can use the `Makefile` as well.
```bash
poetry shell # activate python environment
make check
```
To run a development version of ep-stats, run
```bash
poetry shell
python -m epstats
```
### Documentation
To update the documentation, run
```bash
mkdocs gh-deploy
```
It updates the documentation published on GitHub Pages from the `gh-pages` branch.
## Inspiration
Software engineering practices of this package have been heavily inspired by the marvelous [calmcode.io](https://calmcode.io/) site managed by [Vincent D. Warmerdam](https://github.com/koaning).