# Effector
<p align="center">
<img src="https://raw.githubusercontent.com/givasile/effector/main/docs/docs/static/effector_logo.png" width="500"/>
</p>
[![PyPI version](https://badge.fury.io/py/effector.svg)](https://badge.fury.io/py/effector)
[![Downloads](https://static.pepy.tech/badge/effector)](https://pepy.tech/projects/effector)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)
---
`effector` is an eXplainable AI package for **tabular data**. It:
- creates [global and regional](https://xai-effector.github.io/quickstart/global_and_regional_effects/) effect plots
- has a [simple API](https://xai-effector.github.io/quickstart/simple_api/) with smart defaults, but can become [flexible](https://xai-effector.github.io/quickstart/flexible_api/) if needed
- is model agnostic; can explain [any underlying ML model](https://xai-effector.github.io/)
- integrates easily with popular ML libraries, like [scikit-learn, TensorFlow, and PyTorch](https://xai-effector.github.io/quickstart/simple_api/#__tabbed_2_2)
- is fast, for both [global](https://xai-effector.github.io/notebooks/guides/efficiency_global/) and [regional](https://xai-effector.github.io/notebooks/guides/efficiency_global/) methods
- provides a large collection of [global and regional effects methods](https://xai-effector.github.io/#supported-methods)
---
📖 [Documentation](https://xai-effector.github.io/) | 🔍 [Intro to global and regional effects](https://xai-effector.github.io/quickstart/global_and_regional_effects/) | 🔧 [API](https://xai-effector.github.io/api/) | 🏗 [Examples](https://xai-effector.github.io/examples)
---
## Installation
Effector requires Python 3.10+:
```bash
pip install effector
```
Dependencies: `numpy`, `scipy`, `matplotlib`, `tqdm`, `shap`.
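To confirm the installed release, a quick sanity check with the standard library (nothing effector-specific):

```python
# verify that effector is importable and report the installed version
from importlib.metadata import version

import effector  # noqa: F401

print(version("effector"))
```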
---
## Quickstart
### Train an ML model
```python
import effector
import keras
import numpy as np
import tensorflow as tf
np.random.seed(42)
tf.random.set_seed(42)
# Load dataset
bike_sharing = effector.datasets.BikeSharing(pcg_train=0.8)
X_train, Y_train = bike_sharing.x_train, bike_sharing.y_train
X_test, Y_test = bike_sharing.x_test, bike_sharing.y_test
# Define and train a neural network
model = keras.Sequential([
    keras.layers.Dense(1024, activation="relu"),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(1)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae", keras.metrics.RootMeanSquaredError()])
model.fit(X_train, Y_train, batch_size=512, epochs=20, verbose=1)
model.evaluate(X_test, Y_test, verbose=1)
```
### Wrap it in a callable
```python
def predict(x):
    return model(x).numpy().squeeze()
```
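The only contract `effector` needs is a callable mapping an `(N, D)` numpy array of instances to an `(N,)` array of predictions, so any library fits. As a sketch, wrapping a scikit-learn regressor (the `RandomForestRegressor` here is illustrative, not part of the quickstart above) looks like:

```python
# Sketch: the same callable contract with a scikit-learn model.
# RandomForestRegressor stands in for any fitted sklearn regressor.
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(X_train, Y_train)

def predict_rf(x):
    # .predict already returns a 1-D numpy array of shape (N,)
    return rf.predict(x)
```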
### Explain it with global effect plots
```python
# define the global effect method
pdp = effector.PDP(
    X_test,
    predict,
    feature_names=bike_sharing.feature_names,
    target_name=bike_sharing.target_name
)

# plot the effect of the 3rd feature (feature: hour)
pdp.plot(
    feature=3,
    nof_ice=200,
    scale_x={"mean": bike_sharing.x_test_mu[3], "std": bike_sharing.x_test_std[3]},
    scale_y={"mean": bike_sharing.y_test_mu, "std": bike_sharing.y_test_std},
    centering=True,
    show_avg_output=True,
    y_limits=[-200, 1000]
)
```
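The same wrapped callable works with any global method from the table further down. A minimal sketch with `ALE`, assuming its constructor mirrors `PDP`'s (check the API docs for the exact signature):

```python
# Sketch: swapping the global method behind the same callable.
# Assumes ALE's constructor mirrors PDP's above; see the API docs.
ale = effector.ALE(
    X_test,
    predict,
    feature_names=bike_sharing.feature_names,
    target_name=bike_sharing.target_name
)
ale.plot(feature=3)
```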

### Explain it with regional effect plots
```python
r_pdp = effector.RegionalPDP(
    X_test,
    predict,
    feature_names=bike_sharing.feature_names,
    target_name=bike_sharing.target_name
)

# summarize the subregions of feature 3
scale_x_list = [
    {"mean": mu, "std": std}
    for mu, std in zip(bike_sharing.x_test_mu, bike_sharing.x_test_std)
]
r_pdp.summary(
    features=3,
    scale_x_list=scale_x_list
)
```
```
Feature 3 - Full partition tree:
🌳 Full Tree Structure:
───────────────────────
hr 🔹 [id: 0 | heter: 0.43 | inst: 3476 | w: 1.00]
    workingday = 0.00 🔹 [id: 1 | heter: 0.36 | inst: 1129 | w: 0.32]
        temp ≤ 6.50 🔹 [id: 3 | heter: 0.17 | inst: 568 | w: 0.16]
        temp > 6.50 🔹 [id: 4 | heter: 0.21 | inst: 561 | w: 0.16]
    workingday ≠ 0.00 🔹 [id: 2 | heter: 0.28 | inst: 2347 | w: 0.68]
        temp ≤ 6.50 🔹 [id: 5 | heter: 0.19 | inst: 953 | w: 0.27]
        temp > 6.50 🔹 [id: 6 | heter: 0.20 | inst: 1394 | w: 0.40]
--------------------------------------------------
Feature 3 - Statistics per tree level:
🌳 Tree Summary:
─────────────────
Level 0🔹heter: 0.43
    Level 1🔹heter: 0.31 | 🔻0.12 (28.15%)
        Level 2🔹heter: 0.19 | 🔻0.11 (37.10%)
```
The summary for feature `hr` (hour) shows that its effect on the output depends strongly on two other features:
- `workingday`: whether the day is a working day or not
- `temp`: the temperature at that hour

Let's see how the effect changes across these subregions!
---
#### Is it a working day or not?
```python
# plot the regional effects after the first-level splits (working day vs. non-working day)
for node_idx in [1, 2]:
    r_pdp.plot(
        feature=3,
        node_idx=node_idx,
        nof_ice=200,
        scale_x_list=[{"mean": bike_sharing.x_test_mu[i], "std": bike_sharing.x_test_std[i]} for i in range(X_test.shape[1])],
        scale_y={"mean": bike_sharing.y_test_mu, "std": bike_sharing.y_test_std},
        y_limits=[-200, 1000]
    )
```
<table>
<tr>
<td><img src="https://raw.githubusercontent.com/givasile/effector/main/docs/docs/notebooks/quickstart/readme_example_files/readme_example_5_0.png" alt="Feature effect plot"></td>
<td><img src="https://raw.githubusercontent.com/givasile/effector/main/docs/docs/notebooks/quickstart/readme_example_files/readme_example_5_1.png" alt="Feature effect plot"></td>
</tr>
</table>
#### Is it hot or cold?
```python
# plot the regional effects after the second-level splits (working day crossed with hot vs. cold temperature)
for node_idx in [3, 4, 5, 6]:
    r_pdp.plot(
        feature=3,
        node_idx=node_idx,
        nof_ice=200,
        scale_x_list=[{"mean": bike_sharing.x_test_mu[i], "std": bike_sharing.x_test_std[i]} for i in range(X_test.shape[1])],
        scale_y={"mean": bike_sharing.y_test_mu, "std": bike_sharing.y_test_std},
        y_limits=[-200, 1000]
    )
```
<table>
<tr>
<td><img src="https://raw.githubusercontent.com/givasile/effector/main/docs/docs/notebooks/quickstart/readme_example_files/readme_example_6_0.png" alt="Feature effect plot"></td>
<td><img src="https://raw.githubusercontent.com/givasile/effector/main/docs/docs/notebooks/quickstart/readme_example_files/readme_example_6_1.png" alt="Feature effect plot"></td>
</tr>
<tr>
<td><img src="https://raw.githubusercontent.com/givasile/effector/main/docs/docs/notebooks/quickstart/readme_example_files/readme_example_6_2.png" alt="Feature effect plot"></td>
<td><img src="https://raw.githubusercontent.com/givasile/effector/main/docs/docs/notebooks/quickstart/readme_example_files/readme_example_6_3.png" alt="Feature effect plot"></td>
</tr>
</table>
---
## Supported Methods
`effector` implements global and regional effect methods:
| Method | Global Effect | Regional Effect | Reference | ML model | Speed |
|---------|----------------|-----------------|-----------|-------------------|----------------------------------------------|
| PDP | `PDP` | `RegionalPDP` | [PDP](https://projecteuclid.org/euclid.aos/1013203451) | any | Fast for a small dataset |
| d-PDP | `DerPDP` | `RegionalDerPDP`| [d-PDP](https://arxiv.org/abs/1309.6392) | differentiable | Fast for a small dataset |
| ALE | `ALE` | `RegionalALE` | [ALE](https://academic.oup.com/jrsssb/article/82/4/1059/7056085) | any | Fast |
| RHALE | `RHALE` | `RegionalRHALE` | [RHALE](https://ebooks.iospress.nl/doi/10.3233/FAIA230354) | differentiable | Very fast |
| SHAP-DP | `ShapDP` | `RegionalShapDP`| [SHAP](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions) | any | Fast for a small dataset and a light ML model |
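The methods marked `differentiable` additionally need the model's gradients. With the Keras model from the quickstart, a Jacobian callable can be built with `tf.GradientTape`; how it is passed to `RHALE`/`DerPDP` (e.g. a `model_jac`-style argument) is an assumption here, so check the API docs for the exact parameter name:

```python
import tensorflow as tf

def predict_jac(x):
    # gradient of the scalar prediction w.r.t. each input feature,
    # returned as an (N, D) numpy array; each row of y depends only
    # on the matching row of x, so tape.gradient gives the per-row Jacobian
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = model(x)
    return tape.gradient(y, x).numpy()
```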
---
## Method Selection Guide
From a runtime perspective, there are three criteria:

- is the dataset `small` (N < 10K instances) or `large` (N > 10K instances)?
- is the ML model `light` (inference < 0.1s) or `heavy` (inference > 0.1s)?
- is the ML model `differentiable` or `non-differentiable`?

Trust us and follow this guide (see the sketch after the list):

- `light` + `small` + `differentiable` = `any([PDP, RHALE, ShapDP, ALE, DerPDP])`
- `light` + `small` + `non-differentiable` = `any([PDP, ALE, ShapDP])`
- `heavy` + `small` + `differentiable` = `any([PDP, RHALE, ALE, DerPDP])`
- `heavy` + `small` + `non-differentiable` = `any([PDP, ALE])`
- `large` + `non-differentiable` = `ALE`
- `large` + `differentiable` = `RHALE`
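For quick reference, the same decision logic as a tiny helper (illustrative only, not part of `effector`):

```python
def suggest_methods(n_instances: int, inference_seconds: float, differentiable: bool) -> list[str]:
    """Encode the selection guide above; thresholds follow the listed criteria."""
    small = n_instances < 10_000          # small vs. large dataset
    light = inference_seconds < 0.1       # light vs. heavy model
    if not small:
        return ["RHALE"] if differentiable else ["ALE"]
    if light:
        return (["PDP", "RHALE", "ShapDP", "ALE", "DerPDP"]
                if differentiable else ["PDP", "ALE", "ShapDP"])
    return ["PDP", "RHALE", "ALE", "DerPDP"] if differentiable else ["PDP", "ALE"]
```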
---
## Citation
If you use `effector`, please cite it:
```bibtex
@misc{gkolemis2024effector,
  title={effector: A Python package for regional explanations},
  author={Vasilis Gkolemis et al.},
  year={2024},
  eprint={2404.02629},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
---
## References
- [Friedman, Jerome H. "Greedy function approximation: a gradient boosting machine." Annals of statistics (2001): 1189-1232.](https://projecteuclid.org/euclid.aos/1013203451)
- [Apley, Daniel W. "Visualizing the effects of predictor variables in black box supervised learning models." arXiv preprint arXiv:1612.08468 (2016).](https://arxiv.org/abs/1612.08468)
- [Gkolemis, Vasilis, "RHALE: Robust and Heterogeneity-Aware Accumulated Local Effects"](https://ebooks.iospress.nl/doi/10.3233/FAIA230354)
- [Gkolemis, Vasilis, "DALE: Decomposing Global Feature Effects Based on Feature Interactions"](https://proceedings.mlr.press/v189/gkolemis23a/gkolemis23a.pdf)
- [Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in neural information processing systems. 2017.](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions)
- [REPID: Regional Effect Plots with implicit Interaction Detection](https://proceedings.mlr.press/v151/herbinger22a.html)
- [Decomposing Global Feature Effects Based on Feature Interactions](https://arxiv.org/pdf/2306.00541.pdf)
- [Regionally Additive Models: Explainable-by-design models minimizing feature interactions](https://arxiv.org/abs/2309.12215)
---
## License
`effector` is released under the [MIT License](https://github.com/givasile/effector/blob/main/LICENSE).