| Field | Value |
| :--- | :--- |
| Name | ep-stats |
| Version | 1.3.1 |
| Home page | https://github.com/avast/ep-stats |
| Summary | Statistical package to evaluate A/B tests in an experimentation platform. |
| Author | Ondrej Zahradnik |
| Requires Python | >=3.6 |
| Upload time | 2022-06-23 10:04:48 |


<img src="theme/experiment_b.png" align="right" />
# ep-stats
**Statistical package for the experimentation platform.**
It provides a general Python package and a REST API that can be used to evaluate any metric
in an A/B test experiment.
## Features
* Robust two-tailed t-test implementation with multiple-comparison p-value corrections and the delta method applied.
* Sequential evaluations that allow experiments to be stopped early.
* Connects to any data source providing either pre-aggregated or per-randomization-unit data.
* Simple expression language to define arbitrary metrics.
* REST API to integrate it as a service in an experimentation portal with scorecards.
## Documentation
We have lovely [documentation](https://avast.github.io/ep-stats/).
## Base Example
ep-stats allows for quick experiment evaluation. Here we use sample testing data to evaluate the metric `Click-through Rate` in the experiment `test-conversion`.
```python
from epstats.toolkit import Experiment, Metric, SrmCheck

experiment = Experiment(
    'test-conversion',
    'a',
    [
        Metric(
            1,
            'Click-through Rate',
            'count(test_unit_type.unit.click)',
            'count(test_unit_type.global.exposure)',
        ),
    ],
    [SrmCheck(1, 'SRM', 'count(test_unit_type.global.exposure)')],
    unit_type='test_unit_type',
)

# This gets testing data; use another Dao or obtain aggregated goals some other way.
from epstats.toolkit.testing import TestData
goals = TestData.load_goals_agg(experiment.id)

# Evaluate the experiment.
ev = experiment.evaluate_agg(goals)
```
`ev` contains evaluations of exposures, metrics, and checks, producing the following output.
`ev.exposures`:
| exp_id | exp_variant_id | exposures |
| :----- | :------------- | --------: |
|test-conversion|a|21|
|test-conversion|b|26|
`ev.metrics`:
| exp_id | metric_id | metric_name | exp_variant_id | count | mean | std | sum_value | confidence_level | diff | test_stat | p_value | confidence_interval | standard_error | degrees_of_freedom |
| :----- | --------: | :---------- | -------------: | ----: | ---: | --: | --------: | ---------------: | ---: | --------: | ------: | ------------------: | -------------: | -----------------: |
|test-conversion|1|Click-through Rate|a|21|0.238095|0.436436|5|0.95|0|0|1|1.14329|0.565685|40|
|test-conversion|1|Click-through Rate|b|26|0.269231|0.452344|7|0.95|0.130769|0.223152|0.82446|1.18137|0.586008|43.5401|
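
A few of these columns can be cross-checked by hand: `mean` is `sum_value / count`, `diff` is the relative uplift of a variant's mean over the control mean, and `test_stat` is that uplift divided by `standard_error`. A minimal sketch verifying the variant `b` row with plain `scipy` arithmetic (a cross-check on the reported columns, not a reimplementation of ep-stats):

```python
from scipy import stats

# Reported columns for control a and variant b (see ev.metrics above).
mean_a, mean_b = 5 / 21, 7 / 26            # mean = sum_value / count
standard_error, dof = 0.586008, 43.5401    # relative standard error, Welch dof

# diff: relative uplift of the variant mean over the control mean.
diff = (mean_b - mean_a) / mean_a          # ~0.130769

# test_stat and two-tailed p-value from the t distribution.
test_stat = diff / standard_error          # ~0.223152
p_value = 2 * stats.t.sf(abs(test_stat), dof)  # ~0.82446

# confidence_interval: half-width of the 95% interval around diff.
half_width = standard_error * stats.t.ppf(0.975, dof)  # ~1.18137

print(diff, test_stat, p_value, half_width)
```
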
`ev.checks`:
| exp_id | check_id | check_name | variable_id | value |
| :----- | -------: | :--------- | :---------- | ----: |
|test-conversion|1|SRM|p_value|0.465803|
|test-conversion|1|SRM|test_stat|0.531915|
|test-conversion|1|SRM|confidence_level|0.999000|
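
The SRM (sample ratio mismatch) check is a chi-square goodness-of-fit test on the exposure counts, evaluated here at a 99.9% confidence level. A minimal sketch reproducing the reported numbers with `scipy`, assuming an intended 50/50 split (again just a cross-check):

```python
from scipy.stats import chisquare

# Observed exposures per variant, from ev.exposures above.
observed = [21, 26]

# chisquare defaults to equal expected frequencies, i.e. a 50/50 split.
stat, p_value = chisquare(observed)
print(stat, p_value)  # ~0.531915, ~0.465803 -- matching ev.checks
```
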
## Installation
You can install this package via `pip`.
```bash
pip install ep-stats
```
## Running
You can run a testing version of ep-stats via
```bash
python -m epstats
```
Then, see Swagger on [http://localhost:8080/docs](http://localhost:8080/docs) for API documentation.
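
Besides the interactive docs, you can fetch a machine-readable schema of the API. A small sketch, assuming the common FastAPI convention of serving the OpenAPI document at `/openapi.json` next to the Swagger UI:

```python
import requests

# Assumes the usual FastAPI layout: Swagger UI at /docs, schema at /openapi.json.
schema = requests.get("http://localhost:8080/openapi.json").json()

# List the available endpoints and their HTTP methods.
for path, methods in schema["paths"].items():
    print(path, sorted(methods))
```
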
## Contributing
To get started locally, you can clone the repo and quickly get started using the `Makefile`.
```bash
git clone https://github.com/avast/ep-stats.git
cd ep-stats
make install-dev
```
It sets up a new virtual environment in `./venv` using [venv](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/), installs all development dependencies, and sets up [pre-commit](https://pre-commit.com/) git hooks to keep the code neatly formatted with [flake8](https://pypi.org/project/flake8/) and [brunette](https://pypi.org/project/brunette/).
To run tests, you can use the `Makefile` as well.
```bash
source venv/bin/activate # activate python environment
make check
```
To start a development version of ep-stats, run
```bash
source venv/bin/activate
cd src
python -m epstats
```
### Documentation
To update the documentation, run
```bash
mkdocs gh-deploy
```
This updates the documentation published on GitHub Pages from the `gh-pages` branch.
## Inspiration
The software engineering practices of this package have been heavily inspired by the marvelous [calmcode.io](https://calmcode.io/) site managed by [Vincent D. Warmerdam](https://github.com/koaning).